Luke Parker
ce3b90541e Make transactions undroppable
coordinator/cosign/src/delay.rs literally demonstrates how we'd need to rewrite
our handling of transactions with this change. It can be cleaned up a bit but
already identifies ergonomic issues. It also doesn't model passing an &mut txn
to an async function, which would also require using the droppable wrapper
struct.

To locally see this build, run

RUSTFLAGS="-Zpanic_abort_tests -C panic=abort" cargo +nightly build -p serai-cosign --all-targets

To locally see this fail to build, run

cargo build -p serai-cosign --all-targets

While it doesn't say which line causes it to fail to build, the only
distinction is panic=unwind.

For more context, please see #578.
2025-01-15 03:56:59 -05:00
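
For illustration, a minimal runtime drop-bomb sketch of the undroppable pattern described above (names hypothetical). The actual change goes further, failing to build at all under panic=unwind; this sketch merely panics (a hard abort under panic=abort) when a transaction is dropped unconsumed:

// Hypothetical wrapper: the only non-panicking way out is an explicit commit.
struct UndroppableTxn<T> {
  txn: Option<T>,
}

impl<T> UndroppableTxn<T> {
  fn new(txn: T) -> Self {
    Self { txn: Some(txn) }
  }

  // Consuming the wrapper marks it as handled, so Drop stays silent
  fn commit(mut self, commit_fn: impl FnOnce(T)) {
    commit_fn(self.txn.take().unwrap());
  }
}

impl<T> Drop for UndroppableTxn<T> {
  fn drop(&mut self) {
    if self.txn.is_some() {
      // Under RUSTFLAGS="-C panic=abort", this is a hard process abort
      panic!("transaction dropped without being committed");
    }
  }
}
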
Luke Parker
cb410cc4e0 Correct how we handle rounding errors within the penalty fn
We explicitly no longer slash stakes but we still set the maximum slash to the
allocated stake + the rewards. Now, the reward slash is bound to the rewards
and the stake slash is bound to the stake. This prevents an improperly rounded
reward slash from causing a stake slash.
2025-01-15 02:46:31 -05:00
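
Restated as a sketch (hypothetical names; the pallet's actual penalty fn is more involved), the bounding amounts to two independent clamps:

// Each component is clamped to its own pool, so rounding in the reward slash
// can never spill over into the allocated stake
fn bounded_penalty(reward_slash: u64, stake_slash: u64, rewards: u64, stake: u64) -> (u64, u64) {
  (reward_slash.min(rewards), stake_slash.min(stake))
}
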
Luke Parker
6c145a5ec3 Disable offline, disruptive slashes
Reasoning commented in codebase
2025-01-14 11:44:13 -05:00
Luke Parker
a7fef2ba7a Redesign Slash/SlashReport types with a function to calculate the penalty 2025-01-14 07:51:39 -05:00
Luke Parker
291ebf5e24 Have serai-task warnings print with the name of the task 2025-01-14 02:52:26 -05:00
Luke Parker
5e0e91c85d Add tasks to publish data onto Serai 2025-01-14 01:58:26 -05:00
Luke Parker
b5a6b0693e Add a proper error type to ContinuallyRan
This isn't strictly necessary. Because we just log the error and never match on it,
we don't need any structure beyond String (or now Debug, which still gives us
a way to print the error). This is for the ergonomics of not having to
constantly write `.map_err(|e| format!("{e:?}"))`.
2025-01-12 18:29:08 -05:00
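
A sketch of the resulting shape (assumed; serai-task's actual definition may differ):

use core::fmt::Debug;

trait ContinuallyRan {
  // Any Debug type suffices since the error is only ever logged, never matched
  // on, removing the need for `.map_err(|e| format!("{e:?}"))` at call sites
  type Error: Debug;

  // Returns whether this iteration made any progress
  async fn run_iteration(&mut self) -> Result<bool, Self::Error>;
}
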
Luke Parker
3cc2abfedc Add a task to publish slash reports 2025-01-12 17:47:48 -05:00
Luke Parker
0ce9aad9b2 Add flow to add transactions onto Tributaries 2025-01-12 07:32:45 -05:00
Luke Parker
e35aa04afb Start handling messages from the processor
Does route ProcessorMessage::CosignedBlock. The rest are stubbed with TODOs.
2025-01-12 06:07:55 -05:00
Luke Parker
e7de5125a2 Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format
Has yet to update the processor accordingly.
2025-01-12 05:52:33 -05:00
Luke Parker
158140c3a7 Add a proper error for intake_cosign 2025-01-12 05:49:17 -05:00
Luke Parker
df9a9adaa8 Remove direct dependencies of void, async-trait 2025-01-12 03:48:43 -05:00
Luke Parker
d854807edd Make message_queue::client::Client::send fallible
Allows tasks to report the errors themselves and handle retry in our
standardized way.
2025-01-11 21:57:58 -05:00
Luke Parker
f501d46d44 Correct disabling of Nagle's algorithm 2025-01-11 06:54:43 -05:00
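
For context, disabling Nagle's algorithm on a tokio TCP stream looks like the following (a generic sketch, not the commit's diff):

use tokio::net::TcpStream;

async fn connect_low_latency(addr: &str) -> std::io::Result<TcpStream> {
  let stream = TcpStream::connect(addr).await?;
  // Send small writes immediately instead of coalescing them, trading
  // bandwidth efficiency for latency
  stream.set_nodelay(true)?;
  Ok(stream)
}
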
Luke Parker
74106b025f Publish SlashReport onto the Tributary 2025-01-11 06:51:55 -05:00
Luke Parker
e731b546ab Update documentation 2025-01-11 05:13:43 -05:00
Luke Parker
77d60660d2 Move spawn_cosign from main.rs into tributary.rs
Also refines the tasks within tributary.rs a good bit.
2025-01-11 05:12:56 -05:00
Luke Parker
3c664ff05f Re-arrange coordinator/
coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary, the Transaction
type and scan task. This has been moved to coordinator/tributary.

The main reason for this was due to coordinator/main.rs becoming untidy. There
is now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs is to compose them. Sometimes, these compositions are a bit
silly (reading from a channel just to forward the message to a distinct
channel). That's more than fine as the code is still readable and the value
from the cleanliness of the APIs composed far exceeds the nits from having
these odd compositions.

This breaks down a bit as we now define a global database, and have some APIs
interact with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of the
Coordinator, wasn't.

Now, coordinator/src is solely the API compositions, and all self-contained
APIs are their own crates.
2025-01-11 04:14:21 -05:00
Luke Parker
c05b0c9eba Handle Canonical, NewSet from serai-coordinator-substrate 2025-01-11 03:07:15 -05:00
Luke Parker
6d5049cab2 Move the task providing transactions onto the Tributary to the Tributary module
Slims down the main file a bit
2025-01-11 02:13:23 -05:00
Luke Parker
1419ba570a Route from tributary scanner to message-queue 2025-01-11 01:55:36 -05:00
Luke Parker
542bf2170a Provide Cosign/CosignIntent for Tributaries 2025-01-11 01:31:28 -05:00
Luke Parker
378d6b90cf Delete old Tributaries on reboot 2025-01-10 20:10:05 -05:00
Luke Parker
cbe83956aa Flesh out Coordinator main
Lot of TODOs as the APIs are all being routed together.
2025-01-10 02:24:24 -05:00
Luke Parker
091d485fd8 Have the Tributary scanner DB be distinct from the cosign DB
Allows deleting the entire Tributary scanner DB upon retirement.
2025-01-10 02:22:58 -05:00
Luke Parker
2a3eaf4d7e Wrap the entire Libp2p object in an Arc
Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed). Also wraps uses of Serai in an Arc as we
shouldn't actually need/want multiple caller connection pools.
2025-01-10 01:26:07 -05:00
Luke Parker
23122712cb Document validator jailing upon participation failures and slash report determination
These are TODOs. I just wanted to ensure this was written down and each seemed
too small for GH issues.
2025-01-09 19:50:39 -05:00
Luke Parker
47eb793ce9 Slash upon Tendermint evidence
Decoding slash evidence requires specifying the instantiated generic
`TendermintNetwork`. While irrelevant, that generic includes a type satisfying
`tributary::P2p`. It was only possible to route now that we've redone the P2P
API.
2025-01-09 06:58:00 -05:00
Luke Parker
9b0b5fd1e2 Have serai-cosign index finalized blocks' numbers 2025-01-09 06:57:26 -05:00
Luke Parker
893a24a1cc Better document bounds in serai-coordinator-p2p 2025-01-09 06:57:12 -05:00
Luke Parker
b101e2211a Complete serai-coordinator-p2p 2025-01-09 06:23:14 -05:00
Luke Parker
201a444e89 Remove tokio dependency from serai-coordinator-p2p
Re-implements tokio::sync::oneshot with a thin wrapper around async-channel.

Also replaces futures-util with futures-lite.
2025-01-09 02:16:05 -05:00
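
A minimal sketch of such a oneshot wrapper (assumed shape; the crate's actual wrapper may differ), built on a bounded(1) async-channel where the sender is consumed on send:

pub struct Sender<T>(async_channel::Sender<T>);
pub struct Receiver<T>(async_channel::Receiver<T>);

pub fn oneshot<T>() -> (Sender<T>, Receiver<T>) {
  let (send, recv) = async_channel::bounded(1);
  (Sender(send), Receiver(recv))
}

impl<T> Sender<T> {
  // Consumes the sender so at most one value is ever sent. Errs with the
  // value if the receiver was already dropped.
  pub fn send(self, value: T) -> Result<(), T> {
    self.0.try_send(value).map_err(|e| e.into_inner())
  }
}

impl<T> Receiver<T> {
  // Errs if the sender was dropped without sending
  pub async fn recv(self) -> Result<T, async_channel::RecvError> {
    self.0.recv().await
  }
}
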
Luke Parker
9833911e06 Promote Request::Heartbeat from an enum variant to a struct 2025-01-09 01:41:42 -05:00
Luke Parker
465e8498c4 Make the coordinator's P2P modules their own crates 2025-01-09 01:26:25 -05:00
Luke Parker
adf20773ac Add libp2p module documentation 2025-01-09 00:40:07 -05:00
Luke Parker
295c1bd044 Document improper handling of session rotation in P2P allow list 2025-01-09 00:16:45 -05:00
Luke Parker
dda6e3e899 Limit each peer to one connection
Prevents dialing the same peer multiple times (successfully).
2025-01-09 00:06:51 -05:00
Luke Parker
75a00f2a1a Add allow_block_list to libp2p
The check in validators prevented connections from non-validators.
Non-validators could still participate in the network if they laundered their
connection through a malicious validator. allow_block_list ensures that peers,
not connections, are explicitly limited to validators.
2025-01-08 23:54:27 -05:00
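
A hedged sketch of wiring this up, assuming libp2p's allow_block_list module as shipped in recent releases (the coordinator's actual behaviour composition is more involved):

use libp2p::{PeerId, allow_block_list};

type AllowList = allow_block_list::Behaviour<allow_block_list::AllowedPeers>;

fn allow_validators(allow_list: &mut AllowList, validators: &[PeerId]) {
  for validator in validators {
    // Peers absent from this list are denied at the connection level,
    // regardless of which connection (or intermediary) they arrive through
    allow_list.allow_peer(*validator);
  }
}
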
Luke Parker
6cde2bb6ef Correct and document topic subscription 2025-01-08 23:16:04 -05:00
Luke Parker
20326bba73 Replace KeepAlive with ping
This is more standard and allows measuring latency.
2025-01-08 23:01:36 -05:00
Luke Parker
ce83b41712 Finish mapping Libp2p to the P2p trait API 2025-01-08 19:39:09 -05:00
Luke Parker
b2bd5d3a44 Remove Debug bound on tributary::P2p 2025-01-08 17:40:32 -05:00
Luke Parker
de2d6568a4 Actually implement the Peer abstraction for Libp2p 2025-01-08 17:40:08 -05:00
Luke Parker
fd9b464b35 Add a trait for the P2p network used in the coordinator
Moves all of the Libp2p code to a dedicated directory. Makes the Heartbeat task
abstract over any P2p network.
2025-01-08 17:01:37 -05:00
Luke Parker
376a66b000 Remove async-trait from tendermint-machine, tributary-chain 2025-01-08 16:41:11 -05:00
Luke Parker
2121a9b131 Spawn the task to select validators to dial 2025-01-07 18:17:36 -05:00
Luke Parker
419223c54e Build the swarm
Moves UpdateSharedValidatorsTask to validators.rs. While it was previously
planned to re-use a validators object across connecting and peer state
management, the current plan is to use an independent validators object for
each to minimize
any contention. They should be built infrequently enough, and cheap enough to
update in the majority case (due to quickly checking if an update is needed),
that this is fine.
2025-01-07 18:09:25 -05:00
Luke Parker
a731c0005d Finish routing our own channel abstraction around the Swarm event stream 2025-01-07 16:51:56 -05:00
Luke Parker
f27e4e3202 Move the WIP SwarmTask to its own file 2025-01-07 16:34:19 -05:00
Luke Parker
f55165e016 Add channels to send requests/recv responses 2025-01-07 15:51:15 -05:00
Luke Parker
d9e9887d34 Run the dial task whenever we have a peer disconnect 2025-01-07 15:36:42 -05:00
Luke Parker
82e753db30 Document risk of eclipse in the dial task 2025-01-07 15:35:34 -05:00
Luke Parker
052388285b Remove TaskHandle::close
TaskHandle::close meant run_now may panic if the task was closed. Now, tasks
are only closed when all handles are dropped, causing all handles to point to
running tasks (ensuring run_now won't panic).
2025-01-07 15:26:41 -05:00
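
A sketch of the resulting lifetime semantics (assumed mechanics; serai-task's internals may differ): the task loops until every handle is dropped, so run_now on a live handle always targets a running task:

#[derive(Clone)]
struct TaskHandle(async_channel::Sender<()>);

impl TaskHandle {
  fn run_now(&self) {
    // A full bounded(1) channel just means a run is already queued
    let _ = self.0.try_send(());
  }
}

fn spawn_task(mut iteration: impl FnMut() + Send + 'static) -> TaskHandle {
  let (send, recv) = async_channel::bounded(1);
  tokio::spawn(async move {
    loop {
      iteration();
      // recv only errors once every TaskHandle (sender) has been dropped,
      // now the sole way a task closes
      if recv.recv().await.is_err() {
        break;
      }
    }
  });
  TaskHandle(send)
}
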
Luke Parker
47a4e534ef Update serai-processor-signers to VariantSignId::Batch([u8; 32]) 2025-01-07 15:26:23 -05:00
Luke Parker
257f691277 Start filling out message handling in SwarmTask 2025-01-05 01:23:28 -05:00
Luke Parker
c6d0fb477c Inline noise into OnlyValidators
libp2p does support (noise, OnlyValidators), but it'll interpret that as one or
the other, not as a chain. This will act as the desired chain.
2025-01-05 00:55:25 -05:00
Luke Parker
96518500b1 Don't hold the shared Validators write lock while making requests to Serai 2025-01-05 00:29:11 -05:00
Luke Parker
2b8f481364 Parallelize requests within Validators::update 2025-01-05 00:17:05 -05:00
Luke Parker
479ca0410a Add commentary on the use of FuturesOrdered 2025-01-04 23:28:54 -05:00
Luke Parker
9a5a661d04 Start on the task to manage the swarm 2025-01-04 23:28:29 -05:00
Luke Parker
3daeea09e6 Only let active Serai validators connect over P2P 2025-01-04 22:21:23 -05:00
Luke Parker
a64e2004ab Dial new peers when we don't have the target amount 2025-01-04 18:04:24 -05:00
Luke Parker
f9f6d40695 Use Serai validator keys as PeerIds 2025-01-04 18:03:37 -05:00
Luke Parker
4836c1676b Don't consider the Serai set in the cosigning protocol
The Serai set SHOULD be banned from setting keys so this SHOULD be unreachable.
It's now explicitly unreachable.
2025-01-04 13:52:17 -05:00
Luke Parker
985261574c Add gossip behavior back to the coordinator 2025-01-03 14:00:20 -05:00
Luke Parker
3f3b0255f8 Tweak heartbeat task to run less often if there's no progress to be made 2025-01-03 13:59:14 -05:00
Luke Parker
5fc8500f8d Add task to heartbeat a tributary to the P2P code 2025-01-03 13:04:27 -05:00
Luke Parker
49c221cca2 Restore request-response code to the coordinator 2025-01-03 13:02:50 -05:00
Luke Parker
906e2fb669 Start cosigning on Cosign or Cosigned, not just on Cosigned 2025-01-03 10:30:39 -05:00
Luke Parker
ce676efb1f cargo update 2025-01-03 07:01:06 -05:00
Luke Parker
0a611cb155 Further flesh out tributary scanning
Renames `label` to `round` since `Label` was renamed to `SigningProtocolRound`.

Adds some more context-less validation to transactions which used to be done
within the custom decode function which was simplified via the usage of borsh.

Documents in processor-messages where the Coordinator sends each of its
messages.
2025-01-03 06:57:28 -05:00
Luke Parker
bcd3f14f4f Start work on cleaning up the coordinator's tributary handling 2025-01-02 09:11:04 -05:00
Luke Parker
6272c40561 Restore block_hash to Batch
It's not only helpful (to easily check where Serai's view of the external
network is) but it's necessary in case of a non-trivial chain fork to determine
which blockchain Serai considers canonical.
2024-12-31 18:10:47 -05:00
Luke Parker
2240a50a0c Rebroadcast cosigns for the currently evaluated session, not the latest intended
If Substrate has a block 500 with a key gen, and a block 600 with a key gen,
and the session starting on 500 never cosigns everything, everyone up-to-date
will want the cosigns for the session starting on block 500. Everyone
up-to-date will also be rebroadcasting the non-existent cosigns for the session
which has yet to start. This wouldn't cause a stall as eventually, each
individual set would cosign the latest notable block, and then that would be
explicitly synced, but it's still not the intended behavior.

We also won't even intake the cosigns for the latest intended session if it
exceeds the session we're currently evaluating. This does mean those behind on
the cosigning protocol wouldn't have rebroadcasted their historical cosigns,
and now will, but that's valuable as we don't actually know if we're behind or
up-to-date (per above posited issue).
2024-12-31 17:17:12 -05:00
Luke Parker
7e2b31e5da Clean the transaction definitions in the coordinator
Moves to borsh for serialization. No longer includes nonces anywhere in the TX.
2024-12-31 12:14:32 -05:00
Luke Parker
8c9441a1a5 Redo coordinator's Substrate scanner 2024-12-31 10:37:19 -05:00
Luke Parker
5a42f66dc2 alloy 0.9 2024-12-30 11:09:09 -05:00
Luke Parker
b584a2beab Remove old DB entry from the scanner
We read from it but never wrote to it.

It was used to check we didn't flag a block as notable after reporting it, but
it was called by the scan task as it scanned a block. We only batch/report
blocks after the scan task has scanned them, so it was very redundant.
2024-12-30 11:07:05 -05:00
Luke Parker
26ccff25a1 Split reporting Batches to the signer from the Batch test 2024-12-30 11:03:52 -05:00
Luke Parker
f0094b3c7c Rename Report task to Batch task 2024-12-30 10:49:35 -05:00
Luke Parker
458f4fe170 Move where we check if we should delay reporting of Batches 2024-12-30 10:18:38 -05:00
Luke Parker
1de8136739 Remove Session from VariantSignId::SlashReport
It's only there to make the VariantSignId unique across Sessions. By localizing
the VariantSignId to a Session, we avoid this, and can better ensure we don't
queue work for historic sessions.
2024-12-30 06:16:03 -05:00
Luke Parker
445c49f030 Have the scanner's report task ensure handovers only occur if Batches are valid
This is incomplete at this time. The logic is fine, but needs to be moved to a
distinct location to handle singular blocks which produce multiple Batches.
2024-12-30 06:11:47 -05:00
Luke Parker
5b74fc8ac1 Merge ExternalKeyForSessionToSignBatch into InfoForBatch 2024-12-30 05:34:13 -05:00
Luke Parker
e67e301fc2 Have the processor verify the published Batches match expectations 2024-12-30 05:21:26 -05:00
Luke Parker
1d50792eed Document serai-db with bounds and intent 2024-12-26 02:35:32 -05:00
Luke Parker
9c92709e62 Delay cosign acknowledgments 2024-12-26 01:04:20 -05:00
Luke Parker
3d15710a43 Only check the cosign is after its start block if faulty
We don't have consensus on the session's last block, so we shouldn't check if
the cosign is before the session ends. What matters is that the network, within its
set, claims it's still active at that block (on its view of the blockchain).
2024-12-26 00:26:48 -05:00
Luke Parker
df06da5552 Only check if the cosign is stale if it isn't faulty
If it is faulty, we want to archive it regardless.
2024-12-26 00:24:48 -05:00
Luke Parker
cef5bc95b0 Revert prior commit
An archive of all GlobalSessions is necessary to check for faults. The storage
cost is also minimal. While it should be avoided if it can be, it can't be
here.
2024-12-26 00:15:49 -05:00
Luke Parker
f336ab1ece Remove GlobalSessions DB entry
If we read the currently-being-evaluated session from the evaluator, we can
avoid paying the storage costs on all sessions ad-infinitum.
2024-12-25 23:57:51 -05:00
Luke Parker
2aebfb21af Remove serai from the cosign evaluator 2024-12-25 23:47:21 -05:00
Luke Parker
56af6c44eb Remove usage of serai from intake_cosign 2024-12-25 21:19:04 -05:00
Luke Parker
4b34be05bf rocksdb 0.23 2024-12-25 19:48:48 -05:00
Luke Parker
5b337c3ce8 Prevent a malicious validator set from overwriting a notable cosign
Also prevents panics from an invalid Serai node (removing the assumption of an
honest Serai node).
2024-12-25 02:11:05 -05:00
Luke Parker
e119fb4c16 Replace Cosigns by extending NetworksLatestCosignedBlock
Cosigns was an archive of every single cosign ever received. By scoping
NetworksLatestCosignedBlock to be by the global session, we have the latest
cosign for each network in a session (valid to replace all prior cosigns by
that network within that session, even for the purposes of fault) and
automatically have the notable cosigns indexed (as they are the latest ones
within their session). This not only saves space but also allows optimizing
evaluation a bit.
2024-12-25 01:45:37 -05:00
Luke Parker
ef972b2658 Add cosign signature verification 2024-12-25 00:06:46 -05:00
Luke Parker
4de1a5804d Dedicated library for intending and evaluating cosigns
Not only cleans the existing cosign code but enables non-Serai-coordinators to
evaluate cosigns if they gain access to a feed of them (such as over an RPC).
This would let centralized services track not only the finalized chain but also the
cosigned chain without directly running a coordinator.

Still being wrapped up.
2024-12-22 06:41:55 -05:00
Luke Parker
147a6e43d0 Split task from serai-processor-primitives into serai-task 2024-12-19 10:08:13 -05:00
Luke Parker
066aa9eda4 cargo update
Resolves RUSTSEC-2024-0421
2024-12-12 00:45:19 -05:00
Luke Parker
9593a428e3 alloy 0.8 2024-12-11 01:02:58 -05:00
Luke Parker
5b3c5ec02b Basic Ethereum escapeHatch test 2024-12-09 02:00:17 -05:00
Luke Parker
9ccfa8a9f5 Fix deny 2024-12-08 22:01:43 -05:00
Luke Parker
18897978d0 thiserror 2.0, cargo update 2024-12-08 21:55:37 -05:00
Luke Parker
3192370484 Add Serai key confirmation to prevent rotating to an unusable key
Also updates alloy to the latest version
2024-12-08 20:42:37 -05:00
Luke Parker
8013c56195 Add/correct msrv labels 2024-12-08 18:27:15 -05:00
Luke Parker
834c16930b Add a bitmask of OutInstruction events to Executed
Allows explorers to provide clarity on what occurred.
2024-11-02 21:00:01 -04:00
Luke Parker
2920987173 Add a re-entrancy guard to Router.execute 2024-11-02 20:12:48 -04:00
Luke Parker
26230377b0 Define IRouterWithoutCollisions which Router inherits from
This ensures Router implements most of IRouterWithoutCollisions. It solely
leaves us to confirm Router implements the extensions defined in IRouter.
2024-11-02 19:10:39 -04:00
Luke Parker
2f5c0c68d0 Add selector collisions to Router to make it IRouter compatible 2024-11-02 18:13:02 -04:00
Luke Parker
8de42cc2d4 Add IRouter 2024-11-02 13:19:07 -04:00
Luke Parker
cf4123b0f8 Update how signatures are handled by the Router 2024-11-02 10:47:09 -04:00
Luke Parker
6a520a7412 Work on testing the Router 2024-10-31 02:23:59 -04:00
Luke Parker
b2ec58a445 Update serai-ethereum-processor to compile 2024-10-30 21:48:40 -04:00
Luke Parker
8e800885fb Simplify deterministic signing process in serai-processor-ethereum-primitives
This should be easier to specify/do an alternative implementation of.
2024-10-30 21:36:31 -04:00
Luke Parker
2a427382f1 Natspec, slither Deployer, Router 2024-10-30 21:35:43 -04:00
Luke Parker
ce1689b325 Expand tests for ethereum-schnorr-contract 2024-10-28 18:08:31 -04:00
Luke Parker
0b61a75afc Add lint against string slicing
These are tricky as it panics if the slice doesn't hit a UTF-8 codepoint
boundary.
2024-10-02 21:58:48 -04:00
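
An illustration of the panic the lint guards against (hypothetical snippet):

fn main() {
  let s = "héllo";
  // "é" is two bytes in UTF-8, so byte index 2 falls inside a codepoint:
  // `&s[.. 2]` would panic at runtime, which is what clippy::string_slice flags.

  // Panic-free alternatives operate on chars or check the boundary first:
  let safe: String = s.chars().take(2).collect();
  assert_eq!(safe, "hé");
  assert!(s.get(.. 2).is_none()); // returns None instead of panicking
}
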
Luke Parker
2aee21e507 Fix decomposition -> divisor points vartime due to branch prediction/cache rules 2024-09-29 04:19:16 -04:00
Luke Parker
b3e003bd5d cargo +nightly fmt 2024-09-25 10:22:49 -04:00
Luke Parker
251a6e96e8 Constant-time divisors (#617)
* WIP constant-time implementation of the ec-divisors library

* Fix misc logic errors in poly.rs

* Remove accidentally committed test statements

* Fix ConstantTimeEq for CoefficientIndex

* Correct the iterations formula

x**3 / (0 y + x**1) was previously considered indivisible with iterations = 0.
It is divisible, however. The amount of iterations should be the amount of
coefficients within the numerator *excluding the coefficient for y**0 x**0*.

* Poly PartialEq, conditional_select_poly which checks poly structure equivalence

If the first passed argument is smaller than the latter, it's padded to the
necessary length.

Also adds code to trim the remainder as the remainder is the value modulo, so
it's very important it remains concise and workable.

* Fix the line function

It selected the case if both were identity before selecting the case if either
were identity, the latter overwriting the former.

* Final fixes re: ct_get

1) Our quotient structure does need to be of size equal to the numerator
   entirely to prevent out-of-bounds reads on it
2) We need to get from yx_coefficients if of length >=, so if the length is 1
   we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no
   inner Vecs, we need to fall back with the guard y_pow != 0.

* Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic

Our Poly algorithm is incredibly leaky. While it presumably should be improved,
we can take advantage of our known structure while constructing divisors (and
the small modulus) to simply trim out the zero coefficients leaked. This
maintains Polys at a manageable size.

* Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors

Anyone creating a divisor for the scalar mul gadget should use constant-time
code, so this code should at least be in the EC gadgets crate. It's of
non-trivial complexity to deal with otherwise.

* Remove unsafe, cache timing attacks from ec-divisors
2024-09-24 17:27:05 -04:00
Luke Parker
2c8af04781 machete, drain > mem::swap for clarity reasons 2024-09-19 23:36:32 -07:00
Luke Parker
a0ed043372 Move old processor/src directory to processor/TODO 2024-09-19 23:36:32 -07:00
Luke Parker
2984d2f8cf Misc comments 2024-09-19 23:36:32 -07:00
Luke Parker
554c5778e4 Don't track deployment block in the Router
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
2024-09-19 23:36:32 -07:00
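
A sketch of the JoinSet usage (fetch_block is a hypothetical stand-in for the processor's actual RPC call):

use tokio::task::JoinSet;

async fn fetch_block(_number: u64) -> Result<(), String> {
  Ok(()) // placeholder for the real RPC call
}

async fn sync_epoch(start: u64) -> Result<(), String> {
  let mut set = JoinSet::new();
  // Fetch the epoch's 32 blocks in parallel rather than sequentially
  for number in start .. (start + 32) {
    set.spawn(fetch_block(number));
  }
  // join_next yields results in completion order as tasks finish
  while let Some(res) = set.join_next().await {
    res.map_err(|e| e.to_string())??;
  }
  Ok(())
}
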
Luke Parker
7e4c59a0a3 Have the Router track its deployment block
Prevents a consensus split where some nodes would drop transfers if their node
didn't think the Router was deployed, and some would handle them.
2024-09-19 23:36:32 -07:00
Luke Parker
294462641e Don't have the ERC20 collapse the top-level transfer ID to the transaction ID
Uses the ID of the transfer event associated with the top-level transfer.
2024-09-19 23:36:32 -07:00
Luke Parker
ae76749513 Transfer ETH with CREATE, not prior to CREATE
Saves a few thousand gas.
2024-09-19 23:36:32 -07:00
Luke Parker
1e1b821d34 Report a Change Output with every Eventuality to ensure we don't fall out of synchrony 2024-09-19 23:36:32 -07:00
Luke Parker
702b4c860c Add dummy fee values to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bc1bbf9951 Set a fixed fee transferred to the caller for publication
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
2024-09-19 23:36:32 -07:00
Luke Parker
ec9211fd84 Remove accidentally included bitcoin feature from processor-bin 2024-09-19 23:36:32 -07:00
Luke Parker
4292660eda Have the Ethereum scheduler create Batches as necessary
Also introduces the fee logic, despite it being stubbed.
2024-09-19 23:36:32 -07:00
Luke Parker
8ea5acbacb Update the Router smart contract to pay fees to the caller
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't effect the user UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if we ever do have any costs slip through
the cracks, it ideally is sufficient to handle those.
2024-09-19 23:36:32 -07:00
Luke Parker
1b1aa74770 Correct forge fmt config 2024-09-19 23:36:32 -07:00
Luke Parker
861a8352e5 Update to the latest bitcoin-serai 2024-09-19 23:36:32 -07:00
Luke Parker
e64827b6d7 Mark files in TODO/ with "TODO" to ensure it pops up on search 2024-09-19 23:36:32 -07:00
Luke Parker
c27aaf8658 Merge BlockWithAcknowledgedBatch and BatchWithoutAcknowledgeBatch
Offers a simpler API to the coordinator.
2024-09-19 23:36:32 -07:00
Luke Parker
53567e91c8 Read NetworkId from ScannerFeed trait, not env 2024-09-19 23:36:32 -07:00
Luke Parker
1a08d50e16 Remove unused code in the Ethereum processor 2024-09-19 23:36:32 -07:00
Luke Parker
855e53164e Finish Ethereum ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
1367e41510 Add hooks to the main loop
Lets the Ethereum processor track the first key set as soon as it's set.
2024-09-19 23:36:32 -07:00
Luke Parker
a691be21c8 Call tidy_keys upon queue_key
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
2024-09-19 23:36:32 -07:00
Luke Parker
673cf8fd47 Pass the latest active key to the Block's scan function
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
2024-09-19 23:36:32 -07:00
Luke Parker
118d81bc90 Finish the Ethereum TX publishing code 2024-09-19 23:36:32 -07:00
Luke Parker
e75c4ec6ed Explicitly add an unspendable script path to the processor's generated keys 2024-09-19 23:36:32 -07:00
Luke Parker
9e628d217f cargo fmt, move ScannerFeed from String to the RPC error 2024-09-19 23:36:32 -07:00
Luke Parker
a717ae9ea7 Have the TransactionPublisher build a TxLegacy from Transaction 2024-09-19 23:36:32 -07:00
Luke Parker
98c3f75fa2 Move the Ethereum Action machine to its own file 2024-09-19 23:36:32 -07:00
Luke Parker
18178f3764 Add note on the returned top-level transfers being unordered 2024-09-19 23:36:32 -07:00
Luke Parker
bdc3bda04a Remove ethereum-serai/serai-processor-ethereum-contracts
contracts was smashed out of ethereum-serai. Both have now been smashed into
individual crates.

Creates a TODO directory with left-over test code yet to be moved.
2024-09-19 23:36:32 -07:00
Luke Parker
433beac93a Ethereum SignableTransaction, Eventuality 2024-09-19 23:36:32 -07:00
Luke Parker
8f2a9301cf Don't have the router drop transactions which may have top-level transfers
The router will now match the top-level transfer so it isn't used as the
justification for the InInstruction it's handling. This allows the theoretical
case where a top-level transfer occurs (to any entity) and an internal call
performs a transfer to Serai.

Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20
crate. This does add a dependency on tokio yet improves performance, and it's
scoped under serai-processor (which is always presumed to be tokio-based).
While we could instead import futures for join_all,
https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't
be a good idea. While we could prefer async-executor over tokio's JoinSet,
JoinSet doesn't share the same issues as FuturesUnordered. That means our
question is solely if we want the async-executor executor or the tokio
executor, when we've already established the Serai processor is always presumed
to be tokio-based.
2024-09-19 23:36:32 -07:00
Luke Parker
d21034c349 Add calls to get the messages to sign for the router 2024-09-19 23:36:32 -07:00
Luke Parker
381495618c Trim dead code 2024-09-19 23:36:32 -07:00
Luke Parker
ee0efe7cde Don't have the Deployer store the deployment block
Also updates how re-entrancy is handled to a more efficient and portable
mechanism.
2024-09-19 23:36:32 -07:00
Luke Parker
7feb7aed22 Hash the message before the challenge function in the Schnorr contract
Slightly more efficient.
2024-09-19 23:36:32 -07:00
Luke Parker
cc75a92641 Smash out the router library 2024-09-19 23:36:32 -07:00
Luke Parker
a7d5640642 Smash ERC20 into its own library 2024-09-19 23:36:32 -07:00
Luke Parker
ae61f3d359 forge fmt 2024-09-19 23:36:32 -07:00
Luke Parker
4bcea31c2a Break Ethereum Deployer into crate 2024-09-19 23:36:32 -07:00
Luke Parker
eb9bce6862 Remove OutInstruction's data field
It makes sense for networks which support arbitrary data to do so as part of their
address. This reduces the ability to perform DoSs, achieves better performance,
and better uses the type system (as now networks we don't support data on don't
have a data field).

Updates the Ethereum address definition in serai-client accordingly
2024-09-19 23:36:32 -07:00
Luke Parker
39be23d807 Remove artifacts for serai-processor-ethereum-contracts 2024-09-19 23:36:32 -07:00
Luke Parker
3f0f4d520d Remove the Sandbox contract
If instead of intaking calls, we intake code, we can deploy a fresh contract
which makes arbitrary calls *without* attempting to build our abstraction
layer over the concept.

This should have the same gas costs, as we still have one contract deployment.
The new contract only has a constructor, so it should have no actual code and
beat the Sandbox in that regard? We do have to call into ourselves to meter the
gas, yet we already had to call into the deployed Sandbox to achieve that.

Also re-defines the OutInstruction to include tokens, implements
OutInstruction-specified gas amounts, bumps the Solidity version, and other
such misc changes.
2024-09-19 23:36:32 -07:00
Luke Parker
80ca2b780a Add tests for the premise of the Schnorr contract to the Schnorr crate 2024-09-19 23:36:32 -07:00
Luke Parker
0813351f1f OUT_DIR > artifacts 2024-09-19 23:36:32 -07:00
Luke Parker
a38d135059 rust-toolchain 1.81 2024-09-19 23:36:32 -07:00
Luke Parker
67f9f76fdf Remove publish = false 2024-09-19 23:36:32 -07:00
Luke Parker
1c5bc2259e Dedicated crate for the Schnorr contract 2024-09-19 23:36:32 -07:00
Luke Parker
bdf89f5350 Add dedicated crate for building Solidity contracts 2024-09-19 23:36:32 -07:00
Luke Parker
239127aae5 Add crate for the Ethereum contracts 2024-09-19 23:36:32 -07:00
Luke Parker
d9543bee40 Move ethereum-serai under the processor
It isn't generally usable and should be directly integrated at this point.
2024-09-19 23:36:32 -07:00
Luke Parker
8746b54a43 Don't use a different address for DAI in test
anvil will let us deploy to the existing address.
2024-09-19 23:36:32 -07:00
Luke Parker
7761798a78 Outline the Ethereum processor
This was only half-finished to begin with, unfortunately...
2024-09-19 23:36:32 -07:00
Luke Parker
72a18bf8bb Smart Contract Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0616085109 Monero Planner
Finishes the Monero processor.
2024-09-19 23:36:32 -07:00
Luke Parker
e23176deeb Change dummy payment ID behavior on 2-output, no change
This reduces the ability to fingerprint from any observer of the blockchain to
just one of the two recipients.
2024-09-19 23:36:32 -07:00
Luke Parker
5551521e58 Tighten documentation on Block::number 2024-09-19 23:36:32 -07:00
Luke Parker
a2d9aeaed7 Stub out Scheduler in the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
e1ad897f7e Allow scheduler's creation of transactions to be async and error
I don't love this, but it's the only way to select decoys without using a local
database. While the prior commit added such a databse, the performance of it
presumably wasn't viable, and while TODOs marked the needed improvements, it
was still messy with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self` are
expected to be completely deterministic (as usual).
2024-09-19 23:36:32 -07:00
Luke Parker
2edc2f3612 Add a database of all Monero outs into the processor
Enables synchronous transaction creation (which requires synchronous decoy
selection).
2024-09-19 23:36:32 -07:00
Luke Parker
e56af7fc51 Monero time_for_block, dust 2024-09-19 23:36:32 -07:00
Luke Parker
947e1067d9 Monero Processor scan, check_for_eventuality_resolutions 2024-09-19 23:36:32 -07:00
Luke Parker
b4e94f3d51 cargo fmt signers/scanner 2024-09-19 23:36:32 -07:00
Luke Parker
1b39138472 Define subaddress indexes to use
(1, 0) is the external address. (2, *) are the internal addresses.
2024-09-19 23:36:32 -07:00
Luke Parker
e78236276a Remove async-trait from processor/
Part of https://github.com/serai-dex/serai/issues/607.
2024-09-19 23:36:32 -07:00
Luke Parker
2c4c33e632 Misc continuances on the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
02409c5735 Correct Multisig Rotation to use WINDOW_LENGTH where proper 2024-09-19 23:36:32 -07:00
Luke Parker
f2cf03cedf Monero processor primitives 2024-09-19 23:36:32 -07:00
Luke Parker
0d4c8cf032 Use a local DB channel for sending to the message-queue
The provided message-queue queue function runs until it succeeds. This means
sending to the message-queue will no longer potentially block for arbitrary
amounts of time, as sending messages is just writing them to a DB.
2024-09-19 23:36:32 -07:00
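
A sketch of the pattern with stand-in storage (serai-db's actual API differs): sending is a plain write, and a background task owns delivery with retries:

use std::collections::BTreeMap;

struct SendQueue {
  queued: BTreeMap<u64, Vec<u8>>, // stand-in for an ordered DB table
  next: u64,
}

impl SendQueue {
  // The "send" half: persist the message under the next sequence number.
  // This is just a DB write, so it can never block on the message-queue.
  fn queue(&mut self, msg: Vec<u8>) {
    self.queued.insert(self.next, msg);
    self.next += 1;
  }

  // The delivery half, run by a background task. `publish` is a hypothetical
  // fallible call into the message-queue client.
  fn drain(&mut self, mut publish: impl FnMut(&[u8]) -> Result<(), ()>) {
    while let Some((&seq, msg)) = self.queued.iter().next() {
      let msg = msg.clone();
      // Retry until accepted, preserving in-order delivery
      while publish(&msg).is_err() {}
      self.queued.remove(&seq);
    }
  }
}
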
Luke Parker
b6811f9015 serai-processor-bin
Moves the coordinator loop out of serai-bitcoin-processor, completing it.

Fixes a potential race condition in the message-queue regarding multiple
sockets sending messages at once.
2024-09-19 23:36:32 -07:00
Luke Parker
fcd5fb85df Add binary search to find the block to start scanning from 2024-09-19 23:36:32 -07:00
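
A sketch of the search (assumed criterion: the first block whose timestamp reaches the key's activation time; block_time is a hypothetical accessor):

fn first_block_to_scan(
  latest_block: u64,
  activation_time: u64,
  block_time: impl Fn(u64) -> u64,
) -> u64 {
  // Standard lower-bound binary search, valid since block time is treated as
  // monotonic for this purpose
  let (mut low, mut high) = (0, latest_block);
  while low < high {
    let mid = low + ((high - low) / 2);
    if block_time(mid) < activation_time {
      low = mid + 1; // too early: the answer is strictly after mid
    } else {
      high = mid; // mid satisfies the predicate: keep it as a candidate
    }
  }
  low
}
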
Luke Parker
3ac0265f07 Add section documenting the safety of txindex upon reorganizations 2024-09-19 23:36:32 -07:00
Luke Parker
9b8c8f8231 Misc tidying of serai-db calls 2024-09-19 23:36:32 -07:00
Luke Parker
59fa49f750 Continue filling out main loop
Adds generics to the db_channel macro, fixes the bug where it needed at least
one key.
2024-09-19 23:36:32 -07:00
Luke Parker
723f529659 Note better message structure in messages 2024-09-19 23:36:32 -07:00
Luke Parker
73af09effb Add note to signers on reducing disk IO 2024-09-19 23:36:32 -07:00
Luke Parker
4054e44471 Start on the new processor main loop 2024-09-19 23:36:32 -07:00
Luke Parker
a8159e9070 Bitcoin Key Gen 2024-09-19 23:36:32 -07:00
Luke Parker
b61ba9d1bb Adjust Bitcoin processor layout 2024-09-19 23:36:32 -07:00
Luke Parker
776cbbb9a4 Misc changes in response to prior two commits 2024-09-19 23:36:32 -07:00
Luke Parker
76a3f3ec4b Add an anyone-can-pay output to every Bitcoin transaction
Resolves #284.
2024-09-19 23:36:32 -07:00
Luke Parker
93c7d06684 Implement presumed_origin
Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
2024-09-19 23:36:32 -07:00
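
A sketch of the mechanism with stand-in types (the processor persists this via serai-db rather than an in-memory map):

use std::collections::HashMap;

type OutPoint = ([u8; 32], u32); // (txid, output index)

// Before yielding a block for scanning, index every output's script public key
fn index_block_scripts(
  index: &mut HashMap<OutPoint, Vec<u8>>,
  block_outputs: &[(OutPoint, Vec<u8>)], // (outpoint, script public key)
) {
  for (outpoint, script) in block_outputs {
    index.insert(*outpoint, script.clone());
  }
}

// The script public key of a spent output, read back from the index, is
// presumed to be the origin to credit
fn presumed_origin(
  index: &HashMap<OutPoint, Vec<u8>>,
  spent: &OutPoint,
) -> Option<Vec<u8>> {
  index.get(spent).cloned()
}
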
Luke Parker
4cb838e248 Bitcoin processor lib.rs -> main.rs 2024-09-19 23:36:32 -07:00
Luke Parker
c988b7cdb0 Bitcoin TransactionPublisher 2024-09-19 23:36:32 -07:00
Luke Parker
017aab2258 Satisfy Scheduler for Bitcoin 2024-09-19 23:36:32 -07:00
Luke Parker
ba3a6f9e91 Bitcoin ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
e36b671f37 Remove bound that WINDOW_LENGTH < CONFIRMATIONS
It's unnecessary and not valuable.
2024-09-19 23:36:32 -07:00
Luke Parker
2d4b775b6e Add bitcoin Block trait impl 2024-09-19 23:36:32 -07:00
Luke Parker
247cc8f0cc Bitcoin Output/Transaction definitions 2024-09-19 23:36:32 -07:00
Luke Parker
0ccf71df1e Remove old signer impls 2024-09-19 23:36:32 -07:00
Luke Parker
8aba71b9c4 Add CosignerTask to signers, completing it 2024-09-19 23:36:32 -07:00
Luke Parker
46c12c0e66 SlashReport signing and signature publication 2024-09-19 23:36:32 -07:00
Luke Parker
3cc7b49492 Strongly type SlashReport, populate cosign/slash report tasks with work 2024-09-19 23:36:32 -07:00
Luke Parker
0078858c1c Tidy messages, publish all Batches to the coordinator
Prior, we published SignedBatches, yet Batches are necessary for auditing
purposes.
2024-09-19 23:36:32 -07:00
Luke Parker
a3cb514400 Have the coordinator task publish Batches 2024-09-19 23:36:32 -07:00
Luke Parker
ed0221d804 Add BatchSignerTask
Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[].
2024-09-19 23:36:32 -07:00
Luke Parker
4152bcacb2 Replace scanner's BatchPublisher with a pair of DB channels 2024-09-19 23:36:32 -07:00
Luke Parker
f07ec7bee0 Route the coordinator, fix race conditions in the signers library 2024-09-19 23:36:32 -07:00
Luke Parker
7484eadbbb Expand task management
These extensions are necessary for the signers task management.
2024-09-19 23:36:32 -07:00
Luke Parker
59ff944152 Work on the higher-level signers API 2024-09-19 23:36:32 -07:00
Luke Parker
8f848b1abc Tidy transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
100c80be9f Finish transaction signing task with TX rebroadcast code 2024-09-19 23:36:32 -07:00
Luke Parker
a353f9e2da Further work on transaction signing 2024-09-19 23:36:32 -07:00
Luke Parker
b62fc3a1fa Minor work on the transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
8380653855 Add empty serai-processor-signers library
This will replace the signers still in the monolithic Processor binary.
2024-09-19 23:36:32 -07:00
Luke Parker
b50b889918 Split processor into bitcoin-processor, ethereum-processor, monero-processor 2024-09-19 23:36:32 -07:00
Luke Parker
d570c1d277 Move additional_key.rs to serai-processor-view-keys
I don't love this. I wanted to simply add this function to `processor/key-gen`,
but then anyone who wants a view key needs to pull in Bulletproofs which is a
mess of code. They'd also be subject to an AGPL-licensed library.

This is so small it should be a primitive elsewhere, yet there is no primitives
library eligible. Maybe serai-client since that has the code to make
transactions to Serai (and will have this as a dependency)? Except then the
processor has to import serai-client when this rewrite removed it as a
dependency.
2024-09-19 23:36:32 -07:00
Luke Parker
2da24506a2 Remove vast swaths of legacy code in the processor 2024-09-19 23:36:32 -07:00
Luke Parker
6e9cb74022 Add non-transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0c1aec29bb Finish routing output flushing
Completes the transaction-chaining scheduler.
2024-09-19 23:36:32 -07:00
Luke Parker
653ead1e8c Finish the tree logic in the transaction-chaining scheduler
Also completes the DB functions, makes Scheduler never instantiated, and
ensures tree roots have change outputs.
2024-09-19 23:36:32 -07:00
Luke Parker
8ff019265f Near-complete version of the tree algorithm in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0601d47789 Work on the tree logic in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
ebef38d93b Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times 2024-09-19 23:36:32 -07:00
Luke Parker
75b4707002 Add input aggregation in the transaction-chaining scheduler
Also handles some other misc in it.
2024-09-19 23:36:32 -07:00
Luke Parker
3c787e005f Fix bug in the scanner regarding forwarded output amounts
We'd report the amount originally received, minus 2x the cost to aggregate,
regardless of the amount successfully forwarded. We should've reduced it to the
amount successfully forwarded, if that was smaller, in case the cost to
forward exceeded the aggregation cost.
2024-09-19 23:36:32 -07:00
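
The corrected accounting, restated as a sketch (names hypothetical):

fn reported_forwarded_amount(
  originally_received: u64,
  cost_to_aggregate: u64,
  actually_forwarded: u64,
) -> u64 {
  // The prior code reported this presumed amount unconditionally
  let presumed = originally_received.saturating_sub(2 * cost_to_aggregate);
  // The report must also be bound by what actually arrived, in case the cost
  // to forward exceeded the aggregation cost
  presumed.min(actually_forwarded)
}
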
Luke Parker
f11a6b4ff1 Better document the forwarded output flow 2024-09-19 23:36:32 -07:00
Luke Parker
fadc88d2ad Add scheduler-primitives
The main benefit is that, whatever scheduler is in use, we now have a single
API to receive TXs to sign (which is of value to the TX signer crate we'll
inevitably build).
2024-09-19 23:36:32 -07:00
Luke Parker
c88ebe985e Outline of the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
6deb60513c Expand primitives/scanner with niceties needed for the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bd277e7032 Add processor/scheduler/utxo/primitives
Includes the necessary signing functions and the fee amortization logic.

Moves transaction-chaining to utxo/transaction-chaining.
2024-09-19 23:36:32 -07:00
Luke Parker
fc765bb9e0 Add crate for the transaction-chaining Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
13b74195f7 Don't have acknowledge_batch immediately run
`acknowledge_batch` can only be run if we know what the Batch should be. If we
don't know what the Batch should be, we have to block until we do.
Specifically, we need the block number associated with the Batch.

Instead of blocking over the Scanner API, the Scanner API now solely queues
actions. A new task intakes those actions once we can. This ensures we can
intake the entire Substrate chain, even if our daemon for the external network
is stalled at its genesis block.

All of this for the block number alone seems ridiculous. To go from the block
hash in the Batch to the block number without this task, we'd at least need the
index task to be up to date (still requiring blocking or an API returning
ephemeral errors).
2024-09-19 23:36:32 -07:00
Luke Parker
f21838e0d5 Replace acknowledge_block with acknowledge_batch 2024-09-19 23:36:32 -07:00
Luke Parker
76cbe6cf1e Have acknowledge_block take in the results of the InInstructions executed
If any failed, the scanner now creates a Burn for the return.
2024-09-19 23:36:32 -07:00
Luke Parker
5999f5d65a Route the DB w.r.t. forwarded outputs' information 2024-09-19 23:36:32 -07:00
Luke Parker
d429a0bae6 Remove unused ID -> number lookup 2024-09-19 23:36:32 -07:00
Luke Parker
775824f373 Impl ScanData serialization in the DB 2024-09-19 23:36:32 -07:00
Luke Parker
41a74cb513 Check a queued key has never been queued before
Re-queueing should only happen with a malicious supermajority and breaks
indexing by the key.
2024-09-19 23:36:32 -07:00
Luke Parker
e26da1ec34 Have the Eventuality task drop outputs which aren't ours and aren't worth it to aggregate
We could drop these entirely, yet there's some degree of utility to be able to
add coins to Serai in this manner.
2024-09-19 23:36:32 -07:00
Luke Parker
7266e7f7ea Add note on why LifetimeStage is monotonic 2024-09-19 23:36:32 -07:00
Luke Parker
a8b9b7bad3 Add sanity checks we haven't prior reported an InInstruction for/accumulated an output 2024-09-19 23:36:32 -07:00
Luke Parker
2ca7fccb08 Pass the lifetime information to the scheduler
Enables it to decide which keys to use for fulfillment/change.
2024-09-19 23:36:32 -07:00
Luke Parker
4f6d91037e Call flush_key 2024-09-19 23:36:32 -07:00
Luke Parker
8db76ed67c Add key management to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
920303e1b4 Add helper to intake Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
9f4b28e5ae Clarify output-to-self to output-to-Serai
There's only the requirement it's to an active key which is being reported for.
2024-09-19 23:36:32 -07:00
Luke Parker
f9d02d43c2 Route burns through the scanner 2024-09-19 23:36:32 -07:00
Luke Parker
8ac501028d Add API to publish Batches with
This doesn't have to be abstract, we can generate the message and use the
message-queue API, yet this should help with testing.
2024-09-19 23:36:32 -07:00
Luke Parker
612c67c537 Cache the cost to aggregate 2024-09-19 23:36:32 -07:00
Luke Parker
04a971a024 Fill in various DB functions 2024-09-19 23:36:32 -07:00
Luke Parker
738636c238 Have Scanner::new spawn tasks 2024-09-19 23:36:32 -07:00
Luke Parker
65f3f48517 Add ReportDb 2024-09-19 23:36:32 -07:00
Luke Parker
7cc07d64d1 Make report.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
fdfe520f9d Add ScanDb 2024-09-19 23:36:32 -07:00
Luke Parker
77ef25416b Make scan.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7c1025dbcb Implement key retirement 2024-09-19 23:36:32 -07:00
Luke Parker
a771fbe1c6 Logs, documentation, misc 2024-09-19 23:36:32 -07:00
Luke Parker
9cebdf7c68 Add sorts for safety even upon non-determinism 2024-09-19 23:36:32 -07:00
Luke Parker
75251f04b4 Use a channel for the InInstructions
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably, extending the InInstruction channel with the associated
output ID?
2024-09-19 23:36:32 -07:00
Luke Parker
6196642beb Add a DbChannel between scan and eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
2bddf00222 Don't expose IndexDb throughout the crate 2024-09-19 23:36:32 -07:00
Luke Parker
9ab8ba0215 Add dedicated Eventuality DB and stub missing fns 2024-09-19 23:36:32 -07:00
Luke Parker
33e0c85f34 Make Eventuality a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
1e8f4e6156 Make a dedicated IndexDb 2024-09-19 23:36:32 -07:00
Luke Parker
66f3428051 Make index a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7e71840822 Add helper methods
Checks fetched blocks are the indexed blocks. Sorts scanned outputs so they
aren't subject to an implicit order which may be non-deterministic (such as if
handled by a threadpool).
2024-09-19 23:36:32 -07:00
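
A sketch of both guards with hypothetical types (the scanner's actual helpers differ in shape):

struct Output {
  id: [u8; 32],
}

// Fetched blocks are checked against the hashes the index task committed to
fn check_fetched_block(indexed_hash: [u8; 32], fetched_hash: [u8; 32]) -> Result<(), String> {
  if indexed_hash != fetched_hash {
    return Err("node returned a block distinct from the indexed block".to_string());
  }
  Ok(())
}

// Scanned outputs are sorted by ID so implicit (possibly threadpool-driven)
// ordering never leaks into processing
fn sort_outputs(outputs: &mut [Output]) {
  outputs.sort_by(|a, b| a.id.cmp(&b.id));
}
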
Luke Parker
b65dbacd6a Move ContinuallyRan into primitives
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
2024-09-19 23:36:32 -07:00
Luke Parker
2fcd9530dd Add a callback to accumulate outputs and return the new Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
379780a3c9 Flesh out eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
945f31dfc7 Have the scan flag blocks with change/branch/forwarded as notable 2024-09-19 23:36:32 -07:00
Luke Parker
d5d1fc3eea Flesh out report task 2024-09-19 23:36:32 -07:00
Luke Parker
fd12cc0213 Finish scan task 2024-09-19 23:36:32 -07:00
Luke Parker
ce805c8cc8 Correct compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
bc0cc5a754 Decide flow between scan/eventuality/report
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (prior planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
2024-09-19 23:36:32 -07:00
Luke Parker
f2ee4daf43 Add Eventuality back to processor primitives
Also splits crate into modules.
2024-09-19 23:36:32 -07:00
Luke Parker
4e29678799 Add bounds for the eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
74d3075dae Document expectations on Eventuality task and correct code determining the block safe to scan/report 2024-09-19 23:36:32 -07:00
Luke Parker
155ad48f4c Handle dust 2024-09-19 23:36:32 -07:00
Luke Parker
951872b026 Differentiate BlockHeader from Block 2024-09-19 23:36:32 -07:00
Luke Parker
2b47feafed Correct misc compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
a2717d73f0 Flesh out new scanner a bit more
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
2024-09-19 23:36:32 -07:00
Luke Parker
8763ef23ed Definition and delineation of tasks within the scanner
Also defines primitives for the processor.
2024-09-19 23:36:32 -07:00
Luke Parker
57a0ba966b Extend serai-db with support for generic keys/values 2024-09-19 23:36:32 -07:00
Luke Parker
e843b4a2a0 Move scanner.rs to scanner/lib.rs 2024-09-19 23:36:32 -07:00
Luke Parker
2f3bd7a02a Cleanup DB handling a bit in key-gen/attempt-manager 2024-09-19 23:36:32 -07:00
Luke Parker
1e8a9ec5bd Smash out the signer
Abstract, to be done for the transactions, the batches, the cosigns, the slash
reports, everything. It has a minimal API itself, intending to be as clear as
possible.
2024-09-19 23:36:32 -07:00
Luke Parker
2f29c91d30 Smash key-gen out of processor
Resolves some bad assumptions made regarding keys being unique or not.
2024-09-19 23:36:32 -07:00
Luke Parker
f3b91bd44f Smash key-gen into independent crate 2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
Lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Updating existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we wouldn't reuse preprocesses across messages.
That raises the question of whether we were previously reusing preprocesses
(reusing keys)? Except that'd have caused a variety of signing failures
(suggesting we had some staggered timing avoiding it in practice, but yes, this
was possible in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values already decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
Luke Parker
669b2fef72 Remove test_tweak_keys
What it tests no longer applies since tweak_keys now introduces an unspendable
script path.
2024-09-19 21:43:00 -04:00
Luke Parker
3af430d8de Use the IETF transacript in bitcoin-serai, not RecommendedTranscript
This is more likely to be interoperable in the long term.
2024-09-19 21:13:08 -04:00
Luke Parker
dfb5a053ae Resolve #611 2024-09-19 20:58:33 -04:00
Luke Parker
bdcc061bb4 Add ScannableBlock abstraction in the RPC
Makes scanning synchronous, erroring only upon a malicious node or an
unplanned-for hard fork.
2024-09-13 04:38:49 -04:00
Luke Parker
2c7148d636 Add machete exception for monero-clsag to monero-wallet 2024-09-13 02:39:43 -04:00
Luke Parker
6b270bc6aa Remove async-trait from monero-rpc 2024-09-13 02:36:53 -04:00
Luke Parker
875c669a7a Remove monero-serai multisig for just monero-[clsag, wallet] multisig 2024-09-12 18:41:35 -04:00
Luke Parker
0d399ecb28 Remove unused error in monero-address 2024-09-12 18:41:35 -04:00
Luke Parker
88440807e1 Monero v0.18.3.4 (#605)
* Monero v0.18.3.4

* Correct `check_weight_and_fee` call

* Restore empty test files so CI isn't borked
2024-09-06 01:43:31 -04:00
Luke Parker
c1a9256cc5 dockertest 0.5, correct errors from prior update commit 2024-09-05 23:31:45 -04:00
Luke Parker
0d5756ffcf cargo update, upgrade alloy
Removes a dated proc-macro-crate patch.
2024-09-05 17:03:23 -04:00
Luke Parker
ac7b98daac Remove tokio dependency from tendermint-machine
Indirects it via a minimal wrapper which can be trivially patched.
2024-09-05 16:30:27 -04:00
Luke Parker
efc7d70ab1 Clarify when wallet2 will decrypt payment IDs with citations 2024-09-05 15:50:36 -04:00
Luke Parker
4e834873d3 Lints from latest nightly
We can't adopt it due to some issue with building the runtime, but these are
good to have.
2024-09-01 16:33:44 -04:00
akildemir
a506d74d69 move economic security into its own pallet (#596)
* move economic security into its own pallet

* fix deny

* Update Cargo.toml, .github for the new crates

* Remove unused import

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-31 18:55:42 -04:00
Boog900
394db44b30 Monero: fix signature hash for V1 txs (#598)
* fix signature hash for V1 txs

* fix CI
2024-08-23 20:34:54 -04:00
akildemir
a2df54dd6a merge genesis complete block with genesis ended 2024-08-15 08:15:40 -07:00
akildemir
efc45c391b update emissions pallet author email 2024-08-15 08:12:47 -07:00
akildemir
cccc1fc7e6 Implement block emissions (#551)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* implement block emissions

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to develop latest

* fix rotation test

* fix licensing

* add fast-epoch feature

* only create the pool when adding liquidity first time

* add initial reward era test

* test whole pre ec security emissions

* fix clippy

* add swap-to-staked-sri feature

* rebase changes

* fix tests

* Remove accidentally committed ETH ABI files

* fix some pr comments

* Finish up fixing pr comments

* exclude SRI from is_allowed check

* Misc changes

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-14 23:12:04 -04:00
akildemir
bf1c493d9a add missing prevotes (#590)
* add missing prevotes

* remove the TODO

* add missing current step checks

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-08-14 15:00:48 -04:00
Luke Parker
3de1e4dee2 Remove stray file in docs/ 2024-08-05 06:52:15 -04:00
Luke Parker
2591b5ade9 Update Gemfile.lock to silence a rexml disclosure 2024-08-03 02:57:56 -04:00
akildemir
e6620963c7 update author email 2024-08-02 01:50:33 -07:00
Luke Parker
d5205ce231 Update dependencies
Resolves a yanked version of bytemuck.
2024-08-01 04:06:09 -04:00
Luke Parker
0f6878567f Remove a pair of unused structs/deps
Caught by the most recent nightly.
2024-08-01 01:36:10 -04:00
Luke Parker
880565cb81 Rust 1.80
Preserves the fn accessors within the Monero crates so that we can use statics
in some cfgs yet not all (in order to provide support for more low-memory
devices) with the exception of `H` (which truly should be cached).
2024-07-26 19:28:10 -07:00
Luke Parker
6f34c2ff77 Remove unused git allowance for monero-rs 2024-07-19 23:51:05 -04:00
akildemir
1493f49416 Implement genesis liquidity protocol (#545)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to develop latest

* fix rotation test

* Finish merging develop

* Remove accidentally committed ETH files

* fix pr comments

* further bug fixes

* fix last pr comments

* tidy up

* Misc

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-07-18 19:30:19 -04:00
Luke Parker
2ccb0cd90d Correct version of ruby update is run with
Hopefully finally resolves the site build failures.
2024-07-18 16:47:59 -04:00
Luke Parker
b33a6487aa Rename DKG specified in FROST from FROST to PedPoP 2024-07-18 16:41:31 -04:00
Luke Parker
491500057b Update Ruby version used in GH workflow 2024-07-18 16:09:01 -04:00
Luke Parker
d9f85fab26 Update lockfiles
Resolves a dependabot alert about the Ruby used to generate the docs site.
2024-07-18 15:18:08 -04:00
Luke Parker
7d2d739042 Rename the coins folder to networks (#583)
* Rename the coins folder to networks

Ethereum isn't a coin. It's a network.

Resolves #357.

* More renames of coins -> networks in orchestration

* Correct paths in tests/

* cargo fmt
2024-07-18 15:16:45 -04:00
akildemir
40cc180853 add transaction and crypto unit tests 2024-07-17 16:26:31 -07:00
Luke Parker
2aac6f6998 Improve usage of constants in coordinator p2p 2024-07-17 06:54:54 -04:00
Luke Parker
149c2a4437 Use non-pruned nodes in verify-chain 2024-07-17 06:54:26 -04:00
Luke Parker
e772b8a5f7 #560 take two, now that #560 has been reverted (#561)
* Clear upons upon round, not block

* Cache the proposal for a round

* Rebase onto develop, which reverted this PR, and re-apply this PR

* Set participation upon participation instead of constantly recalculating

* Cache message instances

* Add missing txn commit

Identified by @akildemir.

* Correct clippy lint identified upon rebase

* Fix tendermint chain sync (#581)

* fix p2p Reqres protocol

* stabilize tributary chain sync

* fix pr comments

---------

Co-authored-by: akildemir <34187742+akildemir@users.noreply.github.com>
2024-07-16 19:42:15 -04:00
Luke Parker
c0200df75a Add missing feature flag to dalek-ff-group 2024-07-15 21:50:43 -04:00
Luke Parker
9955ef54a5 Apply bitcoin fee per vsize, not per weight unit
This enables more precision.
2024-07-15 17:37:04 -07:00
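
A minimal sketch of the rounding involved (helper name hypothetical): per
BIP 141, vsize is the weight divided by 4, rounded up, and an integer rate per
weight unit can only express multiples of 4 sats per vbyte, so charging per
vsize is the finer-grained unit.

    // Hypothetical helper: fee for a transaction of `weight` weight units at an
    // integer satoshis-per-vbyte rate. vsize = ceil(weight / 4) per BIP 141.
    fn fee(weight: u64, sats_per_vbyte: u64) -> u64 {
      weight.div_ceil(4) * sats_per_vbyte
    }
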
Luke Parker
8e7e61adbd Respect maximum amount of outs per request 2024-07-14 20:28:10 -04:00
Luke Parker
0cb24dde02 cargo update
Resolves failing deny.
2024-07-14 20:27:36 -04:00
Luke Parker
97bfb183e8 Correct typo in coordinator
Identified by akil a while ago.
2024-07-14 19:35:45 -04:00
Luke Parker
85fc31fd82 Have monero-wallet use Transaction<Pruned>, not Transaction 2024-07-14 19:30:50 -04:00
Luke Parker
7b8bcae396 Add support for pruned transactions to monero-serai 2024-07-13 00:29:02 -04:00
Luke Parker
70fe52437c Have RPC tests run sequentially
Also corrects links pointing to branches to point to commits.
2024-07-12 22:09:46 -04:00
Luke Parker
ba657e23d1 Have a public monero-rpc type be properly formatted
It was public as the raw RPC response. It's more polite to handle the
formatting in the RPC, and allows us to return a better structure.
2024-07-12 04:14:05 -04:00
Luke Parker
32c24917c4 Correct tests which should've failed to expect failures now that they fail 2024-07-12 03:09:48 -04:00
Luke Parker
4ba961b2cb Cite source for obscure wallet protocol rules 2024-07-12 02:19:21 -04:00
Luke Parker
c59be46e2f Optimize Monero BPs 2024-07-12 02:18:57 -04:00
Luke Parker
2c165e19ae Bitcoin 27.1 2024-07-12 02:18:43 -04:00
Luke Parker
ee10692b23 Fix handling of output distribution
We previously didn't handle the fact that the output distribution only starts
after a specific block.
2024-07-11 18:06:51 -04:00
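
A minimal sketch of the offset handling described above, with hypothetical
names (the actual RPC structures differ):

    // The distribution only covers blocks from its start height onward, so a
    // chain height must be offset by that start before indexing into it.
    fn outputs_up_to(height: u64, distribution_start: u64, distribution: &[u64]) -> Option<u64> {
      let index = height.checked_sub(distribution_start)?;
      distribution.get(usize::try_from(index).ok()?).copied()
    }
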
Luke Parker
7a68b065e0 Redo the Bulletproofs impl
Uses the IP-impl from the FCMP++ work.
2024-07-10 21:05:23 -04:00
Luke Parker
3ddf1eec0c Fix no-std builds for monero-wallet 2024-07-09 02:17:57 -04:00
Luke Parker
84f0e6c26e Add additional documentation 2024-07-08 20:33:00 -04:00
Luke Parker
5bb3256d1f Support subaddresses as change outputs 2024-07-08 20:00:21 -04:00
Luke Parker
774424b70b Differentiate Rpc from DecoyRpc
Enables using a locally backed decoy DB.
2024-07-08 18:14:56 -04:00
Luke Parker
ed662568e2 Clean decoy selection code 2024-07-08 02:51:06 -04:00
Luke Parker
b744ac9a76 Clean decoy selection 2024-07-08 02:38:01 -04:00
Luke Parker
d7f7f69738 Remove the DecoySelection trait 2024-07-08 00:30:42 -04:00
Luke Parker
a2c3aba82b Clean the Monero lib for auditing (#577)
* Remove unsafe creation of dalek_ff_group::EdwardsPoint in BP+

* Rename Bulletproofs to Bulletproof, since they are a single Bulletproof

Also bifurcates prove with prove_plus, and adds a few documentation items.

* Make CLSAG signing private

Also adds a bit more documentation and does a bit more tidying.

* Remove the distribution cache

It's a notable bandwidth/performance improvement, yet it's not ready. We need a
dedicated Distribution struct which is managed by the wallet and passed in.
While we can do that now, it's not currently worth the effort.

* Tidy Borromean/MLSAG a tad

* Remove experimental feature from monero-serai

* Move amount_decryption into EncryptedAmount::decrypt

* Various RingCT doc comments

* Begin crate smashing

* Further documentation, start shoring up API boundaries of existing crates

* Document and clean clsag

* Add a dedicated send/recv CLSAG mask struct

Abstracts the types used internally.

Also moves the tests from monero-serai to monero-clsag.

* Smash out monero-bulletproofs

Removes usage of dalek-ff-group/multiexp for curve25519-dalek.

Makes compiling in the generators an optional feature.

Adds a structured batch verifier which should be notably more performant.

Documentation and clean up still necessary.

* Correct no-std builds for monero-clsag and monero-bulletproofs

* Tidy and document monero-bulletproofs

I still don't like the impl of the original Bulletproofs...

* Error if missing documentation

* Smash out MLSAG

* Smash out Borromean

* Tidy up monero-serai as a meta crate

* Smash out RPC, wallet

* Document the RPC

* Improve docs a bit

* Move Protocol to monero-wallet

* Incomplete work on using Option to remove panic cases

* Finish documenting monero-serai

* Remove TODO on reading pseudo_outs for AggregateMlsagBorromean

* Only read transactions with one Input::Gen or all Input::ToKey

Also adds a helper to fetch a transaction's prefix.

* Smash out polyseed

* Smash out seed

* Get the repo to compile again

* Smash out Monero addresses

* Document cargo features

Credit to @hinto-janai for adding such sections to their work on documenting
monero-serai in #568.

* Fix deserializing v2 miner transactions

* Rewrite monero-wallet's send code

I have yet to redo the multisig code and the builder. This should be much
cleaner, albeit slower due to redoing work.

This compiles with clippy --all-features. I have to finish the multisig/builder
for --all-targets to work (and start updating the rest of Serai).

* Add SignableTransaction Read/Write

* Restore Monero multisig TX code

* Correct invalid RPC type def in monero-rpc

* Update monero-wallet tests to compile

Some are _consistently_ failing due to the inputs we attempt to spend being too
young. I'm unsure what's up with that. Most seem to pass _consistently_,
implying it's not a random issue but rather some configuration/env aspect.

* Clean and document monero-address

* Sync rest of repo with monero-serai changes

* Represent height/block number as a u32

* Diversify ViewPair/Scanner into ViewPair/GuaranteedViewPair and Scanner/GuaranteedScanner

Also cleans the Scanner impl.

* Remove non-small-order view key bound

Guaranteed addresses are in fact guaranteed even with this, due to the
prefixing of key images causing a zeroed ECDH to not zero the shared key.

* Finish documenting monero-serai

* Correct imports for no-std

* Remove possible panic in monero-serai on systems < 32 bits

This was done by requiring the system's usize be able to represent a certain number.

* Restore the reserialize chain binary

* fmt, machete, GH CI

* Correct misc TODOs in monero-serai

* Have Monero test runner evaluate an Eventuality for all signed TXs

* Fix a pair of bugs in the decoy tests

Unfortunately, this test is still failing.

* Fix remaining bugs in monero-wallet tests

* Reject torsioned spend keys to ensure we can spend the outputs we scan

* Tidy inlined epee code in the RPC

* Correct the accidental swap of stagenet/testnet address bytes

* Remove unused dep from processor

* Handle Monero fee logic properly in the processor

* Document v2 TX/RCT output relation assumed when scanning

* Adjust how we mine the initial blocks due to some CI test failures

* Fix weight estimation for RctType::ClsagBulletproof TXs

* Again increase the amount of blocks we mine prior to running tests

* Correct the if check about when to mine blocks on start

Finally fixes the "lack of decoy candidates" failures in CI.

* Run Monero on Debian, even for internal testnets

Change made due to a segfault incurred when locally testing.

https://github.com/monero-project/monero/issues/9141 for the upstream.

* Don't attempt running tests on the verify-chain binary

Adds a minimum XMR fee to the processor and runs fmt.

* Increase minimum Monero fee in processor

I'm truly unsure why this is required right now.

* Distinguish fee from necessary_fee in monero-wallet

If there's no change, the fee is the difference of the inputs to the outputs.
The prior code wouldn't check that that amount is greater than or equal to the
necessary fee, and returning the would-be change amount as the fee isn't
necessarily helpful.

Now the fee is validated in such cases and the necessary fee is returned,
enabling operating off of that (see the sketch after this entry).

* Restore minimum Monero fee from develop
2024-07-07 06:57:18 -04:00
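
A minimal sketch of the fee/necessary-fee distinction described above, under
hypothetical names (the actual monero-wallet API differs):

    // Without a change output, the implicit fee is inputs - outputs. Validate it
    // covers the necessary fee, and report the necessary fee rather than the
    // would-be change amount.
    fn fee_to_report(
      inputs: u64,
      outputs: u64,
      necessary_fee: u64,
      has_change: bool,
    ) -> Result<u64, &'static str> {
      if has_change {
        // The change output was sized such that the fee is exactly as planned
        return Ok(necessary_fee);
      }
      let implicit_fee = inputs.checked_sub(outputs).ok_or("outputs exceed inputs")?;
      if implicit_fee < necessary_fee {
        return Err("inputs don't cover the necessary fee");
      }
      Ok(necessary_fee)
    }
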
Luke Parker
703c6a2358 Typo corrections meant to be added in the prior commit 2024-07-04 02:13:34 -04:00
Luke Parker
52bb918cc9 Ensure TotalAllocatedStake is set for the first set 2024-07-04 02:11:04 -04:00
GitHub Actions
ba244e8090 Update nightly 2024-07-02 00:43:14 -04:00
akildemir
3e99d68cfe fix total allocated stake update in wrong time (#518)
* fix total allocated stake update in wrong time

* Restore mid-set increases

* Correct typo I introduced

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-24 07:41:25 -04:00
akildemir
4d9c2df38c Add coordinator rotation test (#535)
* add node side unit test

* complete rotation test for all networks

* set up the fast-epoch docker file

* fix pr comments

* add coordinator side rotation test

* bug fixes

* Remove EPOCH_INTERVAL

* Minor nits

* Add note on origin of publish_tx function in tests/coordinator

* Correct ThresholdParams assert_eq

* fmt

* Correct detection of handover completion

* Restore key gen message match from develop

It was modified in response to the handover completion bug, which has now been
resolved.

* bug fixes

* Correct invalid constant

* Typo fixes

* remove selecting participant to remove at random

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-21 08:39:17 -04:00
Luke Parker
8ab6f9c36e alloy 0.1 2024-06-19 12:39:47 -04:00
Luke Parker
253cf3253d Correct hash for 1.79.0-slim-bookworm docker image 2024-06-13 19:00:01 -04:00
Luke Parker
03445b3020 Update httparse, as 1.9.2 was yanked 2024-06-13 16:49:58 -04:00
Luke Parker
9af111b4aa Rust 1.79, cargo update 2024-06-13 15:57:08 -04:00
Luke Parker
41ce5b1738 Use the serai_abi::Call in the actual Transaction type
We previously required they have the same encoding, yet this ensures they do by
making them one and the same. This does require a large, ugly From/TryInto
block, which is deemed preferable for moving this more and more into syntax
(from semantics).

Further improvements (notably re: Extra) are possible, and this already lets us
strip some members from the Call enum.
2024-06-03 23:38:22 -04:00
Luke Parker
2a05cf3225 June 2024 nightly update
Replaces #571.
2024-06-01 21:46:49 -04:00
Luke Parker
f4147c39b2 bitcoin 0.32.1 2024-05-31 01:02:43 -04:00
rlking
cd69f3b9d6 Check if wasm was built by container exit code and state instead of local mountpoint (#570)
* Check if the serai wasm was built successfully by verifying the build container's status code and state, instead of checking the volume mountpoint locally

* Use a log statement for which wasm is used

* Minor typo fix

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-05-25 20:33:23 -04:00
Luke Parker
1d2beb3ee4 Ethereum relayer server
Causes send test to pass for the processor.
2024-05-22 18:50:11 -04:00
Luke Parker
ac709b2945 Correct processor docker tests encoding of Bitcoin addresses in OutInstructions 2024-05-21 08:49:57 -04:00
Luke Parker
a473800c26 More aggressive cargo update
Adds a few deps which are fine. Patches an old parking_lot(_core) version.
2024-05-21 08:07:32 -04:00
Luke Parker
09aac20293 Set the BufReader capacity to 0
Fixes issues with bitcoin.

We only use a BufReader as it's the only way to use a std::io::Read generic as
a bitcoin::io::Read object.
2024-05-21 07:06:13 -04:00
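
A minimal sketch of the adapter in question, assuming bitcoin 0.32, where the
bitcoin::io traits are implemented for std::io::BufReader yet not for arbitrary
std::io::Read types:

    use std::io::{BufReader, Read};

    use bitcoin::consensus::Decodable;

    // Capacity 0 stops the BufReader from reading ahead, leaving the underlying
    // reader positioned exactly after the decoded transaction.
    fn read_transaction<R: Read>(
      reader: &mut R,
    ) -> Result<bitcoin::Transaction, bitcoin::consensus::encode::Error> {
      bitcoin::Transaction::consensus_decode(&mut BufReader::with_capacity(0, reader))
    }
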
Luke Parker
f93214012d Use ScriptBuf over Address where possible 2024-05-21 06:44:59 -04:00
Luke Parker
400319cd29 cargo update
Also updates our gems
2024-05-21 06:09:04 -04:00
Luke Parker
a0a7d63dad bitcoin 0.32 2024-05-21 05:27:01 -04:00
Luke Parker
fb7d12ee6e Short-circuit test_no_deadlock_in_multisig_completed if preconditions not met 2024-05-21 03:20:44 -04:00
Luke Parker
11ec9e3535 Ethereum processor docker tests, barring send
We need the TX publication relay for send to work (though that is the point the
test fails at).
2024-05-21 00:29:33 -04:00
Luke Parker
ae8a27b876 Add our own alloy meta module to deduplicate alloy prefixes 2024-05-14 01:42:18 -04:00
Luke Parker
af79586488 Fill out Ethereum functions in the processor Docker tests 2024-05-14 01:33:55 -04:00
Luke Parker
d27d93480a Get processor signer/wallet tests working for Ethereum
They are handicapped by the fact Ethereum self-sends don't show up as outputs,
yet that's fundamental (unless we add a *harmful* fallback function).
2024-05-11 00:11:14 -04:00
Luke Parker
02c4417a46 Update no_deadlock_in_multisig test to set the initial key in the DB 2024-05-10 15:57:05 -04:00
Luke Parker
79a79db399 Update dockertest specification 2024-05-10 15:50:07 -04:00
Luke Parker
0c9dd5048e Processor scanner tests for Ethereum 2024-05-10 14:06:43 -04:00
Luke Parker
5501de1f3a Update to the latest alloy
Also makes various tweaks as necessary.
2024-05-10 14:06:43 -04:00
GitHub Actions
21123590bb Update nightly 2024-05-01 01:10:58 -04:00
Luke Parker
bc1dec7991 Move TRANSACTION_MESSAGE to 1 2024-04-28 04:04:53 -04:00
Luke Parker
cef63a631a Add a dev ethereum Docker setup
Also adds untested Dockerfiles for reth, lighthouse, and nimbus.
2024-04-24 09:30:54 -04:00
Luke Parker
d57fef8999 Slight documentation tweaks 2024-04-24 03:55:23 -04:00
Luke Parker
d1474e9188 Route top-level transfers through to the processor 2024-04-24 03:38:31 -04:00
Luke Parker
b39c751403 Reduce target peers a bit 2024-04-23 12:59:45 -04:00
Luke Parker
cc7202e0bf Correct recv to try_recv when exhausting channel 2024-04-23 12:40:21 -04:00
Luke Parker
19e68f7f75 Correct selection of to-try peers to prevent infinite loops when to-try < target 2024-04-23 12:04:30 -04:00
Luke Parker
d94c9a4a5e Use a constant for the target amount of peers 2024-04-23 11:59:51 -04:00
Luke Parker
43dc036660 Use a HashSet for which networks to try peer finding for
Prevents a flood of retries from individually failed attempts within a batch of
peer connection attempts.
2024-04-23 10:55:56 -04:00
Luke Parker
95591218bb Remove cbor 2024-04-23 07:01:07 -04:00
Luke Parker
7dd587a864 Inline broadcast_raw now that it doesn't have multiple callers 2024-04-23 06:44:21 -04:00
Luke Parker
023275bcb6 Properly diversify ReqResMessageKind/GossipMessageKind 2024-04-23 06:37:41 -04:00
Luke Parker
8cef9eff6f Move keep alive, heartbeat, block to request/response 2024-04-23 05:44:58 -04:00
Luke Parker
b5e22dca8f Correct no-std Monero after moving from ToString to Display 2024-04-23 05:25:08 -04:00
Luke Parker
a41329c027 Update clippy now that redundant imports has been reverted 2024-04-23 04:31:27 -04:00
Luke Parker
a25e6330bd Remove DLEq proofs from CLSAG multisig
1) Removes the key image DLEq on the Monero side of things, as the produced
   signature share serves as a DLEq for it.
2) Removes the nonce DLEqs from modular-frost as they're unnecessary for
   monero-serai. Updates documentation accordingly.

Without the proof the nonces are internally consistent, the produced signatures
from modular-frost can be argued as a batch-verifiable CP93 DLEq (R0, R1, s),
or as a GSP for the CP93 DLEq statement (which naturally produces (R0, R1, s)).

The lack of proving the nonces consistent does make the process weaker, yet
it's also unnecessary for the class of protocols this is intended to service.
To provide DLEqs for the nonces would be to provide PoKs for the nonce
commitments (in the traditional Schnorr case).
2024-04-21 23:01:32 -04:00
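
For reference, the standard Chaum-Pedersen (CP93) DLEq relation invoked above,
for a witness $x$ with public keys $K_0 = xG$ and $K_1 = xH$ (notation is
illustrative, not from the codebase):

$$R_0 = rG, \quad R_1 = rH, \quad c = \mathcal{H}(R_0, R_1, K_0, K_1), \quad s = r + cx$$

A verifier checks $sG = R_0 + cK_0$ and $sH = R_1 + cK_1$, a pair of equations
amenable to batch verification.
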
Luke Parker
558a2bfa46 Slight tweaks to BP+ 2024-04-21 21:51:44 -04:00
Luke Parker
c73acb3d62 Log on new tendermint message debug -> trace 2024-04-21 19:28:21 -04:00
Luke Parker
933b17aa91 Revert coordinator/tributary to fd4f247917
#560 is causing notable CI failures, with its logs including slashes at 10x the
prior rate.
2024-04-21 10:16:12 -04:00
Luke Parker
5fa7e3d450 Line for prior commit 2024-04-21 08:55:29 -04:00
Luke Parker
749d783b1e Comment the insanely aggressive timeout future trace log 2024-04-21 08:53:35 -04:00
Luke Parker
5a3ea80943 Add missing continue to prevent dialing a node we're connected to 2024-04-21 08:36:52 -04:00
Luke Parker
fddbebc7c0 Replace expect with debug log 2024-04-21 08:02:34 -04:00
Luke Parker
e01848aa9e Correct boolean NOT on is_fresh_dial 2024-04-21 07:30:31 -04:00
Luke Parker
320b5627b5 Retry if initial dials fail, not just upon disconnect 2024-04-21 07:26:16 -04:00
Luke Parker
be7780e69d Restart coordinator peer finding upon disconnections 2024-04-21 07:02:49 -04:00
Luke Parker
0ddbaefb38 Correct timing around when we verify precommit signatures 2024-04-21 06:12:01 -04:00
Luke Parker
0f0db14f05 Ethereum Integration (#557)
* Clean up Ethereum

* Consistent contract address for deployed contracts

* Flesh out Router a bit

* Add a Deployer for DoS-less deployment

* Implement Router-finding

* Use CREATE2 helper present in ethers

* Move from CREATE2 to CREATE

Bit more streamlined for our use case.

* Document ethereum-serai

* Tidy tests a bit

* Test updateSeraiKey

* Use encodePacked for updateSeraiKey

* Take in the block hash to read state during

* Add a Sandbox contract to the Ethereum integration

* Add retrieval of transfers from Ethereum

* Add inInstruction function to the Router

* Augment our handling of InInstructions events with a check the transfer event also exists

* Have the Deployer error upon failed deployments

* Add --via-ir

* Make get_transaction test-only

We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).

* Modularize Eventuality

Almost fully-deprecates the Transaction trait for Completion. Replaces
Transaction ID with Claim.

* Modularize the Scheduler behind a trait

* Add an extremely basic account Scheduler

* Add nonce uses, key rotation to the account scheduler

* Only report the account Scheduler empty after transferring keys

Also ban payments to the branch/change/forward addresses.

* Make fns reliant on state test-only

* Start of an Ethereum integration for the processor

* Add a session to the Router to prevent updateSeraiKey replaying

This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous and a valid fault attributable event). It
just clarifies the formal arguments.

* Add a RouterCommand + SignMachine for producing it to coins/ethereum

* Ethereum which compiles

* Have branch/change/forward return an option

Also defines a UtxoNetwork extension trait for MAX_INPUTS.

* Make external_address exclusively a test fn

* Move the "account" scheduler to "smart contract"

* Remove ABI artifact

* Move refund/forward Plan creation into the Processor

We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee reasons,
yet don't expect it to be the actual forward Plan (which may be distinct if
the Plan pulls from the global state, such as with a nonce).

Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.

* Flesh out the Ethereum integration more

* Two commits ago, into the **Scheduler, not Processor

* Remove misc TODOs in SC Scheduler

* Add constructor to RouterCommandMachine

* RouterCommand read, pairing with the prior added write

* Further add serialization methods

* Have the Router's key included with the InInstruction

This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each and checking when they interlace.

This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one. This allows pruning state,
only keeping the event tree. Ideally, we'd also introduce a cache to reduce the
cost of the filter (small in events yielded, long in blocks searched).

Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
being made under one key broken by it being received by another key.

* Add read/write to InInstruction

* Abstract the ABI for Call/OutInstruction in ethereum-serai

* Fill out signable_transaction for Ethereum

* Move ethereum-serai to alloy

Resolves #331.

* Use the opaque sol macro instead of generated files

* Move the processor over to the now-alloy-based ethereum-serai

* Use the ecrecover provided by alloy

* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)

* Always use the latest keys for SC scheduled plans

* get_eventuality_completions for Ethereum

* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests

This does not support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.

* Add alloy-simple-request-transport to the GH workflows

* cargo update

* Clarify a few comments and make one check more robust

* Use a string for 27.0 in .github

* Remove optional from no-longer-optional dependencies in processor

* Add alloy to git deny exception

* Fix no longer optional specification in processor's binaries feature

* Use a version of foundry from 2024

* Correct fetching Bitcoin TXs in the processor docker tests

* Update rustls to resolve RUSTSEC warnings

* Use the monthly nightly foundry, not the deleted daily nightly
2024-04-21 06:02:12 -04:00
Luke Parker
43083dfd49 Remove redundant log from tendermint lib 2024-04-21 05:32:41 -04:00
Luke Parker
523d2ac911 Rewrite tendermint's message handling loop to much more clearly match the paper (#560)
* Rewrite tendermint's message handling loop to much more clearly match the paper

No longer checks relevant branches upon messages, yet all branches upon any
state change. This is slower, yet easier to review and likely without one or
two rare edge cases.

When reviewing, please see page 5 of https://arxiv.org/pdf/1807.04938.pdf.
Lines from the specified algorithm can be found in the code by searching for
"// L".

* Sane rebroadcasting of consensus messages

Instead of broadcasting the last n messages on the Tributary side of things, we
now have the machine rebroadcast the message tape for the current block.

* Only rebroadcast messages which didn't error in some way

* Only rebroadcast our own messages for tendermint
2024-04-21 05:30:31 -04:00
Luke Parker
fd4f247917 Correct log which didn't work as intended 2024-04-20 19:54:16 -04:00
Luke Parker
ac9e356af4 Correct log targets in tendermint-machine 2024-04-20 19:15:15 -04:00
Luke Parker
bba7d2a356 Better logs in tendermint-machine 2024-04-20 18:13:44 -04:00
Luke Parker
4c349ae605 Redo how tendermint-machine checks if messages were prior sent
Instead of saving, for every message, whether it was sent or not, we track the
latest block/round participated in. These two keys are comprehensive to all
prior block/rounds. We then use three keys for the latest round's
proposal/prevote/precommit, enabling tracking current state as necessary to
prevent equivocations with just 5 keys.

The storage of the latest three messages also enables proper rebroadcasting of
the current round (not implemented in this commit).
2024-04-20 18:10:51 -04:00
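
A minimal sketch of the five-value layout described above, with hypothetical
names (the actual implementation persists these as DB keys):

    // The latest block/round cover all prior block/rounds; the three message
    // slots cover the current round's proposal/prevote/precommit.
    #[derive(Default)]
    struct SentMessages {
      latest_block: u64,
      latest_round: u32,
      slots: [Option<Vec<u8>>; 3],
    }

    impl SentMessages {
      // Returns the message which may be safely broadcast for this slot,
      // refusing to equivocate. `slot` is 0/1/2 for proposal/prevote/precommit.
      fn send(&mut self, block: u64, round: u32, slot: usize, msg: Vec<u8>) -> Option<Vec<u8>> {
        // Already participated in a later block/round, so don't sign for this one
        if (block, round) < (self.latest_block, self.latest_round) {
          return None;
        }
        // Advanced to a new block/round, so clear the three message slots
        if (block, round) != (self.latest_block, self.latest_round) {
          self.latest_block = block;
          self.latest_round = round;
          self.slots = Default::default();
        }
        if self.slots[slot].is_none() {
          self.slots[slot] = Some(msg);
        }
        // Rebroadcast whatever was actually sent for this slot, never a
        // conflicting replacement
        self.slots[slot].clone()
      }
    }
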
Luke Parker
a4428761f7 Bitcoin 27.0 2024-04-19 08:00:17 -04:00
Luke Parker
940e9553fd Add missing crates to GH workflows 2024-04-19 06:12:33 -04:00
Luke Parker
593aefd229 Extend time in sync test 2024-04-18 02:51:38 -04:00
Luke Parker
5830c2463d fmt 2024-04-18 02:03:28 -04:00
Luke Parker
bcc88c3e86 Don't broadcast added blocks
Online validators should inherently have them. Offline validators will receive
from the sync protocol.

This does somewhat eliminate the class of nodes who would follow the blockchain
(without validating it), yet that's fine for the performance benefit.
2024-04-18 01:48:11 -04:00
Luke Parker
fea16df567 Only reply to heartbeats after a certain distance 2024-04-18 01:39:34 -04:00
Luke Parker
4960c3222e Ensure we don't reply to stale heartbeats 2024-04-18 01:24:38 -04:00
Luke Parker
6b4df4f2c0 Only have some nodes respond to latent heartbeats
Also only respond if they're more than 2 blocks behind to minimize redundant
sending of blocks.
2024-04-17 21:54:10 -04:00
Luke Parker
dac46c8d7d Correct comment in VS pallet 2024-04-12 20:38:31 -04:00
expiredhotdog
db2e8376df use multiscalar_mul for CLSAG (#553)
* use multiscalar_mul for CLSAG

* use multiscalar_mul for CLSAG signing

* use OnceLock for basepoint precomputation
2024-04-12 19:52:56 -04:00
Luke Parker
33dd412e67 Add bootnode code prior used in testnet-internal (#554)
* Add bootnode code prior used in testnet-internal

Also performs the devnet/testnet differentation done since the testnet branch.

* Fixes

* fmt
2024-04-12 00:38:40 -04:00
Luke Parker
fcad402186 cargo update
Resolves deny error caused by h2.
2024-04-10 06:34:01 -04:00
Boog900
ab4d79628d fix CLSAG verification.
We were not setting c1 to the last calculated c during verification, instead keeping it set to the one provided in the signature.
2024-04-10 05:59:06 -04:00
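
A heavily simplified sketch of the bug class (real CLSAG verification hashes
ring members, the key image, and pseudo-outs; the names here are illustrative
only):

    // The running challenge must be carried through every ring member and
    // compared at the end; the bug left it pinned to the signature-provided c1.
    fn ring_closes(ring_len: usize, c1: u64, challenge: impl Fn(usize, u64) -> u64) -> bool {
      let mut c = c1;
      for i in 0 .. ring_len {
        // The buggy form effectively computed challenge(i, c1) every iteration
        c = challenge(i, c);
      }
      // Valid iff the final calculated challenge closes the ring
      c == c1
    }
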
Luke Parker
93be7a3067 Latest hyper-rustls, remove async-recursion
I didn't remove async-recursion when I updated the repo to 1.77, as I forgot we
used it in the tests. I still had to add some Box::pins, which may also have
been a valid option on the prior Rust version, yet this at least resolves
everything now.

Also updates everything which doesn't introduce further depends.
2024-03-27 00:17:04 -04:00
noot
63521f6a96 implement Router.sol and associated functions (#92)
* start Router contract

* use calldata for function args

* var name changes

* start testing router contract

* test with and without abi.encode

* cleanup

* why tf isn't tests/utils working

* cleanup tests

* remove unused files

* wip

* fix router contract and tests, add set/update public keys funcs

* impl some Froms

* make execute non-reentrant

* cleanup

* update Router to use ReentrancyGuard

* update contract to use errors, use bitfield in Executed event, minor other fixes

* wip

* fix build issues from merge, tests ok

* Router.sol cleanup

* cleanup, uncomment stuff

* bump ethers.rs version to latest

* make contract functions take generic middleware

* update build script to assert no compiler errors

* hardcode pubkey parity into contract, update tests

* Polish coins/ethereum in various ways

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-03-24 09:00:54 -04:00
Luke Parker
3d855c75be Create group before adding to it 2024-03-24 00:18:40 -04:00
Luke Parker
07df9aa035 Ensure user is in a group 2024-03-24 00:03:32 -04:00
Luke Parker
bc44fbdbac Add TODO to coordinator P2P 2024-03-23 23:32:21 -04:00
Luke Parker
4cacce5e55 Perform key share amortization on-chain to avoid discrepancies 2024-03-23 23:32:14 -04:00
Luke Parker
7408e26781 Don't regenerate infrastructure keys
Enables running setup without invalidating the message queue
2024-03-23 23:32:04 -04:00
Luke Parker
1f92e1cbda Fixes for prior commit 2024-03-23 23:31:55 -04:00
Luke Parker
333a9571b8 Use volumes for message-queue/processors/coordinator/serai 2024-03-23 23:31:44 -04:00
Luke Parker
b7d49af1d5 Track total peer count in the coordinator 2024-03-23 18:02:48 -04:00
Luke Parker
5ea3b1bf97 Use " " instead of "" for the empty key so sh doesn't interpret it as falsy 2024-03-23 17:38:50 -04:00
Luke Parker
2a31d8552e Add empty string for the KEY to serai-client to use the default keystore 2024-03-23 16:48:12 -04:00
Luke Parker
bca3728a10 Randomly select an addr from the authority discovery 2024-03-23 00:09:23 -04:00
Luke Parker
4914420a37 Don't add as an explicit peer if already connected 2024-03-22 23:51:51 -04:00
Luke Parker
f11a08c436 Peer finding which won't get stuck on one specific network 2024-03-22 23:47:43 -04:00
Luke Parker
35b58a45bd Split peer finding into a dedicated task 2024-03-22 23:40:15 -04:00
Luke Parker
af9b1ad5f9 Initial pruning of backlogged consensus messages 2024-03-22 23:18:53 -04:00
Luke Parker
e5afcda76b Explicitly use "" for KEY within the tests
Causes the provided keystore to be used over our keystore.
2024-03-22 23:05:40 -04:00
j-berman
08c7c1b413 monero: reference updated PR in fee test comment 2024-03-22 22:29:55 -04:00
Luke Parker
bdf5a66e95 Correct Serai key provision 2024-03-22 17:11:58 -04:00
Luke Parker
e861859dec Update EpochDuration in runtime 2024-03-22 16:18:01 -04:00
Luke Parker
6658d95c85 Extend orchestration as actually needed for testnet
Contains various bug fixes.
2024-03-22 16:15:26 -04:00
Luke Parker
2f07d04d88 Extend timeout for rebroadcast of consensus messages in coordinator 2024-03-22 16:06:31 -04:00
Luke Parker
e0259f2fe5 Add TODO re: Monero 2024-03-22 16:06:04 -04:00
Luke Parker
fab7a0a7cb Use the deterministically built wasm
Has the Dockerfile output to a volume. Has the node use the wasm from the
volume, if it exists.
2024-03-22 02:19:09 -04:00
Luke Parker
84cee06ac1 Rust 1.77 2024-03-21 20:09:33 -04:00
Luke Parker
c706d8664a Use OptimisticTransactionDb
Exposes flush calls.

Adds safety, at the cost of a panic risk, as multiple TXNs simultaneously
writing to a key will now cause a panic. This should be fine and the safety is
appreciated.
2024-03-20 23:42:40 -04:00
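
A minimal sketch of the trade-off, assuming rust-rocksdb's
OptimisticTransactionDB API (path and keys hypothetical):

    use rocksdb::OptimisticTransactionDB;

    fn main() {
      let db: OptimisticTransactionDB =
        OptimisticTransactionDB::open_default("./db").unwrap();

      let txn = db.transaction();
      txn.put(b"key", b"value").unwrap();
      // If another transaction concurrently wrote this key, commit errors, which
      // the unwrap turns into the panic described above
      txn.commit().unwrap();

      // Unlike a plain TransactionDB, flush remains exposed
      db.flush().unwrap();
    }
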
Luke Parker
1f2b9376f9 zstd 0.13 2024-03-20 21:53:57 -04:00
Luke Parker
13b147cbf6 Reduce coordinator tests contention re: cosign messages 2024-03-20 08:23:23 -04:00
Luke Parker
4a6496a90b Add slightly nicer formatting re: Protocol Changes doc 2024-03-12 00:59:51 -04:00
Luke Parker
9662d94bf9 Document the signals pallet in the user-facing docs 2024-03-12 00:56:06 -04:00
Luke Parker
233164cefd Flesh out docs more 2024-03-11 23:51:44 -04:00
Luke Parker
442d8c02fc Add docs, correct URL 2024-03-11 20:00:01 -04:00
Luke Parker
d1be9eaa2d Change baseurl to /docs 2024-03-11 18:02:54 -04:00
Luke Parker
c32d3413ba Add just-the-docs based user-facing documentation 2024-03-11 17:55:27 -04:00
Luke Parker
a3a009a7e9 Move docs to spec 2024-03-11 17:55:05 -04:00
Luke Parker
0889627e60 Typo fix for prior commit 2024-03-11 02:20:51 -04:00
Luke Parker
ace41c79fd Tidy the BlockHasEvents cache 2024-03-11 01:44:00 -04:00
Luke Parker
f7d16b3fc5 Fix 0 - 1 which caused a panic 2024-03-09 05:37:41 -05:00
Luke Parker
157acc47ca More aggressive WAL parameters 2024-03-09 05:05:43 -05:00
Luke Parker
ae0ecf9efe Disable jemalloc for rocksdb 0.22 to fix windows builds 2024-03-09 04:26:24 -05:00
Luke Parker
6374d9987e Correct how we save the block to scan from 2024-03-09 03:48:44 -05:00
Luke Parker
c93f6bf901 Replace yield_now with sleep 100 to prevent hammering a task, despite still being over-eager 2024-03-09 03:34:31 -05:00
Luke Parker
61a81e53e1 Further optimize cosign DB 2024-03-09 03:31:06 -05:00
Luke Parker
68dc872b88 sync every txn 2024-03-09 03:18:52 -05:00
Luke Parker
89b237af7e Correct the return value of block_has_events 2024-03-09 02:44:04 -05:00
Luke Parker
2347bf5fd3 Bound cosign work and ensure it progresses forward even when cosigns don't occur
Should resolve the DB load observed on testnet.
2024-03-09 02:20:23 -05:00
Luke Parker
97f433c694 Redo how WAL/logs are limited by the DB
Adds a patch to the latest rocksdb.
2024-03-09 02:20:14 -05:00
Luke Parker
10f5ec51ca Explicitly limit RocksDB logs 2024-03-08 09:19:34 -05:00
Luke Parker
454bebaa77 Have the TendermintMachine domain-separate by genesis
Enables support for multiple machines over the same DB.
2024-03-08 01:22:02 -05:00
Luke Parker
0d569ff7a3 cargo update
Resolves the current deny warning.
2024-03-07 23:23:48 -05:00
Luke Parker
480acfd430 Fix machete 2024-03-07 23:00:17 -05:00
Luke Parker
e266bc2e32 Stop validators from equivocating on reboot
Part of https://github.com/serai-dex/serai/issues/345.

The lack of full DB persistence does mean enough nodes rebooting at the same
time may cause a halt. This will prevent slashes.
2024-03-07 22:56:35 -05:00
Luke Parker
6c8a0bfda6 Limit docker logs to 300MB per container 2024-03-06 21:49:55 -05:00
Luke Parker
06c23368f2 Mitigate https://github.com/serai-dex/serai/issues/539 by making keystore deterministic 2024-03-06 21:37:40 -05:00
Luke Parker
5629c94b8b Reconcile the two copies of scalar_vector.rs in monero-serai 2024-03-02 17:15:16 -05:00
Luke Parker
b427f4b8ab Revert "Add bootnodes"
This reverts commit 1096ddb7ea.

This commit was intended for the testnet branch alone.
2024-02-26 10:47:47 -05:00
Luke Parker
1096ddb7ea Add bootnodes 2024-02-26 10:46:16 -05:00
Luke Parker
5487844b9e clippy && cargo update 2024-02-25 18:37:15 -05:00
akildemir
627e7e6210 Add validator set rotation test for the node side (#532)
* add node side unit test

* complete rotation test for all networks

* set up the fast-epoch docker file

* fix pr comments
2024-02-24 14:51:06 -05:00
Luke Parker
019b42c0e0 fmt/clippy fixes 2024-02-19 22:33:56 -05:00
Justin Berman
079fddbaa6 monero: only mask user features on new polyseed, not on decode (#503)
* monero: only mask user features on new polyseed, not on decode

- This commit ensures a polyseed string that has unsupported features correctly errors on decode (rather than panic in debug build or return an incorrect successful response in prod build)
- Also avoids panicking when checksum calculation is unexpectedly wrong

Polyseed reference impl for feature masking:
- polyseed_create: b7c35bb3c6/src/polyseed.c (L61)
- polyseed_decode: b7c35bb3c6/src/polyseed.c (L212)

* PR comments

* Make from_internal a member of Polyseed

* Add accidentally removed newline

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 22:03:02 -05:00
Justin Berman
92d8b91be9 Monero: fix decoy selection algo and add test for latest spendable (#384)
* Monero: fix decoy selection algo and add test for latest spendable

- DSA only selected coinbase outputs and didn't match the wallet2
implementation
- Added test to make sure DSA will select a decoy output from the
most recent unlocked block
- Made usage of "height" in DSA consistent with other usage of
"height" in Monero code (height == num blocks in chain)
- Rely on monerod RPC response for output's unlocked status

* xmr runner tests mine until outputs are unlocked

* fingerprintable canonical select decoys

* Separate fingerprintable canonical function

Makes it simpler for callers who are unconcerned with consistent
canonical output selection across multiple clients to rely on
the simpler Decoy::select and not worry about fingerprintable
canonical selection

* fix merge conflicts

* Put back TODO for issue #104

* Fix incorrect check on distribution len

The RingCT distribution on mainnet doesn't start until well after
genesis, so the distribution length is expected to be < height.

To be clear, this was my mistake from this series of changes
to the DSA. I noticed this mistake because the DSA would error
when running on mainnet.
2024-02-19 21:34:10 -05:00
Justin Berman
4f1f7984a6 monero: added tx extra variants padding and mysterious minergate (#510)
* monero: read/write tx extra padding

* monero: read/write tx extra mysterious minergate variant

* Clippy

* monero: add tx extra test for minergate + pub key

* BufRead

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 21:22:00 -05:00
Justin Berman
cda14ac8b9 monero: Use fee priority enums from monero repo CLI/RPC wallets (#499)
* monero: Use fee priority enums from monero repo CLI/RPC wallets

* Update processor for fee priority change

* Remove FeePriority::Default

Done in consultation with @j-berman.

The RPC/CLI/GUI almost always adjust up except barring very explicit commands,
hence why FeePriority 0 is now only exposed via the explicit command of
FeePriority::Custom { priority: 0 }.

Also helps with terminology.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 21:03:27 -05:00
Luke Parker
6f5d794f10 Median by Position (#533)
* use median price instead of the highest sustained

* add test for lexicographically reversing a byte slice

* fix pr comments

* fix CI fail

* fix dex tests

* Use a fuzz-tested list of prices

* Working median algorithm based on position + lints (see the sketch after this entry)

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-02-19 20:50:04 -05:00
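
A minimal standalone sketch of a position-based median (names hypothetical, not
the actual pallet code):

    // Keep observations sorted; the median is then a read of the middle index,
    // without arithmetic over the prices themselves.
    fn median_by_position<T: Copy + Ord>(observations: &mut Vec<T>, new: T) -> T {
      let pos = observations.binary_search(&new).unwrap_or_else(|insert_at| insert_at);
      observations.insert(pos, new);
      observations[observations.len() / 2]
    }
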
j-berman
34b93b882c monero: scan all tx pub keys (not additional) for every tx
wallet2's behavior is explained more fully here:
https://github.com/UkoeHB/monero/issues/27
2024-02-19 20:48:37 -05:00
Justin Berman
0880453f82 monero: make dummy payment ID zeroes when it's included in a tx (#514)
* monero: make dummy payment ID zeroes when it's included in a tx

Also did some minor cleaning of InternalPayment::Change

* Lint

* Clarify comment
2024-02-19 20:45:50 -05:00
Justin Berman
ebdfc9afb4 monero: test xmr send that requires additional pub keys (#516)
* Test xmr send that requires additional pub keys

* Clippy
2024-02-19 20:18:31 -05:00
Luke Parker
f6409d08f3 Increase timeout in coordinator tests 2024-02-18 08:19:07 -05:00
Luke Parker
c41a8ac8f2 Revert "rocksdb 0.22 via a patch"
This reverts commit c05c511938.

rocksdb 0.22 does not work on Windows at this time.
2024-02-18 08:17:26 -05:00
akildemir
d88aa90ec2 support input encoded data for bitcoin network (#486)
* add input script check

* add test

* optimizations

* bug fix

* fix pr comments

* Test SegWit-encoded data using a single output (not two)

* Remove TODO used as a question, document origins when SegWit encoding

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-18 07:43:44 -05:00
Luke Parker
c05c511938 rocksdb 0.22 via a patch
patch is removeable once https://github.com/paritytech/parity-common/pull/828
is merged and released.
2024-02-18 05:32:04 -05:00
Justin Berman
df85c09435 monero: match monero's stricter check when decompressing points (#515)
* monero: match monero's stricter check when decompressing points

* Reverted type change for output key
2024-02-17 23:16:16 -05:00
Luke Parker
62a619a312 Have monerod be chown'd to monero:nogroup
On some Docker setups, the monero user doesn't have a monero group for some
reason. This handles that edge case.
2024-02-10 20:58:04 -05:00
Luke Parker
95b7460907 Use Debian instead of Alpine for monero on testnet 2024-02-10 20:57:55 -05:00
Luke Parker
95c3cfc52e Add restart policy to Docker containers 2024-02-09 08:43:33 -05:00
Luke Parker
f0694172ef Fix potential generation of invalid SignData in shim 2024-02-09 02:52:08 -05:00
Luke Parker
29633ada1b Rust 1.76 2024-02-09 02:51:24 -05:00
Luke Parker
337e54c672 Redo Dockerfile generation (#530)
Moves from concatenated Dockerfiles to pseudo-templated Dockerfiles via a dedicated Rust program.

Removes the unmaintained kubernetes, not because we shouldn't have/use it, but because it's unmaintained and needs to be reworked before it's present again.

Replaces the compose with the work in the new orchestrator binary which spawns everything as expected. While this arguably re-invents the wheel, it correctly manages secrets and handles the variadic Dockerfiles.

Also adds an unrelated patch for zstd and simplifies running services a bit by greater utilizing the existing infrastructure.

---

* Delete all Dockerfile fragments, add new orchestrator to generate Dockerfiles

Enables greater templating.

Also delete the unmaintained kubernetes folder *for now*. This should be
restored in the future.

* Use Dockerfiles from the orchestrator

* Ignore Dockerfiles in the git repo

* Remove CI job to check Dockerfiles are as expected now that they're no longer committed

* Remove old Dockerfiles from repo

* Use Debian for monero-wallet-rpc

* Remove replace_cmds for proper usage of entry-dev

Consolidates ports a bit.

Updates serai-docker-tests from "compose" to "build".

* Only write a new dockerfile if it's distinct

Preserves the updated time metadata.

* Update serai-docker-tests

* Correct the path Dockerfiles are built from

* Correct inclusion of orchestration folder in Docker builds

* Correct debug/release flagging in the cargo command

Apparently, --debug isn't an effective no-op but an error.

* Correct path used to run the Serai node within a Dockerfile

* Correct path in Monero Dockerfile

* Attempt storing monerod in /usr/bin

* Use sudo to move into /usr/bin in CI

* Correct 18.3.0 to 18.3.1

* Escape * with quotes

* Update deny.toml, ADD orchestration in runtime Dockerfile

* Add --detach to the Monero GH CI

* Diversify dockerfiles by network

* Fixes to network-diversified orchestration

* Bitcoin and Monero testnet scripts

* Permissions and tweaks

* Flatten scripts folders

* Add missing folder specification to Monero Dockerfile

* Have monero-wallet-rpc specify the monerod login

* Have the Docker CMD specify env variables inserted at time of Dockerfile generation

They're overrideable with the global environment, as for tests. This enables
variable generation in orchestrator and output to productionized Docker files
without creating a life-long file within the Docker container.

* Don't add Dockerfiles into Docker containers now that they have secrets

Solely add the source code for them as needed to satisfy the workspace bounds.

* Download arm64 Monero on arm64

* Ensure constant host architecture when reproducibly building the wasm

Host architecture, for some reason, can affect the generated code despite the
target architecture always being foreign to the host architecture.

* Randomly generate infrastructure keys

* Have orchestrator generate a key, be able to create/start containers

* Ensure bash is used over sh

* Clean dated docs

* Change how quoting occurs

* Standardize to sh

* Have Docker test build the dev Dockerfiles

* Only key_gen once

* cargo update

Adds a patch for zstd and reconciles the breaking nightly change which just
occurred.

* Use a dedicated network for Serai

Also fixes SERAI_HOSTNAME passed to coordinator.

* Support providing a key over the env for the Serai node

* Enable and document running daemons for tests via serai-orchestrator

Has running containers under the dev network port forward the RPC ports.

* Use volumes for bitcoin/monero

* Use bitcoin's run.sh in GH CI

* Only use the volume for testnet (not dev)
2024-02-09 02:48:44 -05:00
akildemir
347d4cf413 Fix tendermint distinct precommit bug (#517)
* fix tendermint distinct precommit bug

* remove conflicting precommit error
2024-02-08 13:47:37 -05:00
Luke Parker
aaff74575f Remove unused brew packages on macOS (#531)
* Remove unused brew packages on macOS

* Remove reference to Docker in macOS CI

* Remove gems, explicitly test Intel and m1 macOS

* Allow gem to error since it still mostly runs
2024-02-05 23:53:57 -05:00
akildemir
ad0ecc5185 complete various todos in tributary (#520)
* complete various todos

* fix pr comments

* Document bounds on unique hashes in TransactionKind

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-05 03:50:55 -05:00
Luke Parker
af12cec3b9 cargo update
Resolves deny warning around unintended behavior change (without semver bump).
2024-02-04 03:50:48 -05:00
Luke Parker
89788be034 macOS clippy (#526)
* Specifically use bash as a shell to try and get rustup to work on Windows

* Use bash for the call to echo

* Add macOS clippy

* Debug why git diff failed

* Restore macos-latest to matrix

* Allow whitespace before the fact 0 lines were modified

* Add LC_ALL env variable to grep

* Replace usage of -P with -e
2024-02-01 21:31:02 -05:00
GitHub Actions
745075af6e Update nightly 2024-02-01 21:03:27 -05:00
Luke Parker
9b25d0dad7 Update to node 20 GitHub cache action 2024-02-01 20:52:17 -05:00
Luke Parker
2b76e41c9a Directly install protobuf-compiler without using an external action (#524)
* Directly install protobuf-compiler without using an external action

* Remove unused "github-token" input
2024-01-31 19:21:26 -05:00
Luke Parker
05219c3ce8 Windows Clippy (#525)
* Add windows clippy

* Adjust build-dependencies for Linux/Windows

* Specifically use bash as a shell to try and get rustup to work on Windows

* Use bash for the call to echo
2024-01-31 19:10:39 -05:00
Luke Parker
cc75b52a43 Don't allow constructing unusable serai_client::bitcoin::Address es 2024-01-31 17:54:43 -05:00
Luke Parker
4913873b10 Slash reports (#523)
* report_slashes plumbing in Substrate

Notably delays the SetRetired event until it provides a slash report or the set
after it becomes the set to report its slashes.

* Add dedicated AcceptedHandover event

* Add SlashReport TX to Tributary

* Create SlashReport TXs

* Handle SlashReport TXs

* Add logic to generate a SlashReport to the coordinator

* Route SlashReportSigner into the processor

* Finish routing the SlashReport signing/TX publication

* Add serai feature to processor's serai-client
2024-01-29 03:48:53 -05:00
rlking
0b8c7ade6e Add scripts to create monero wallet rpc container (#521)
* create Dockerfile for monero wallet rpc with dockerfiles.sh

* make monero wallet rpc docker accessible from outside

* connect wallet-rpc with monerod

* add generated Dockerfile for monero wallet rpc

* add monero wallet rpcs to docker profiles

* update getting started guide to refer to wallet rpc docker
2024-01-28 20:58:23 -05:00
Luke Parker
21262d41e6 Resolve latest clippy and a couple no longer needed fmt notes 2024-01-22 22:13:37 -05:00
Luke Parker
508f7eb23a cargo update
Pseudo-resolves shlex advisory (due to the deprecation of the vulnerable
functions, which hopefully should prevent their use). shlex is only used by
bindgen, a sufficiently trusted dependency.
2024-01-22 22:08:37 -05:00
Luke Parker
90df391170 cargo update
Resolves h2 disclosure (which shouldn't have affected us).
2024-01-19 11:44:49 -05:00
Luke Parker
9d3d47fc9f hyper-rustls 0.26 (#519)
* hyper-rustls 0.25

This isn't worth it until our dependencies update to rustls 0.22 as well.

* hyper-rustls 0.26 and hyper 1.0
2024-01-16 19:32:30 -05:00
akildemir
6691f16292 remove mach patch 2024-01-16 12:06:50 -05:00
Luke Parker
9c06cbccad Document immunity to https://github.com/paritytech/polkadot-sdk/issues/2947 now that I have permission to disclose it 2024-01-16 12:06:08 -05:00
Justin Berman
c507ab9fd6 monero: match varint decoding (#513)
* monero: match varint decoding

* Fix build and clippy
2024-01-11 03:15:11 -05:00
Luke Parker
3aa8007700 Add missing unwap to processor's test fn 2024-01-06 01:01:19 -05:00
Luke Parker
1ba2d8d832 Make monero-serai Block::number not panic on invalid blocks 2024-01-06 00:03:14 -05:00
Boog900
e7b0ed3e7e Check miner tx has a miner input when deserializing. 2024-01-05 23:49:43 -05:00
Luke Parker
f3429ec1ef Inside publish (for a Serai transaction from the coordinator), use RetiredDb over latest session
Not only is this more performant, the definition of retired won't be if a newer
session is active. It will be if the session has posted a slash report or the
stake for that session has unlocked.

Initial commit towards implementing SlashReports.
2024-01-05 23:40:15 -05:00
Luke Parker
1cff9b4264 Patch proc-macro-crate 2 to proc-macro-crate 3
Updates toml_edit to 0.21.
2024-01-05 23:40:15 -05:00
j-berman
3c5a82e915 monero: investigated TODO and can remove it
The behavior appears to match monero core. monero core isn't
throwing an exception in the linked code, it's returning
boost::none (and logging an error) which is the same functional
behavior as finding that the output does not belong to the user.
2024-01-05 12:18:10 -05:00
Boog900
93e85c5ce6 Monero: use only the first input ring length for RCT deserialization. (#504)
* Use only the first input ring length for all RCT input signatures.

This is what Monero does:
ac02af9286/src/ringct/rctTypes.h (L422)

https://github.com/monero-project/monero/blob/master/src/cryptonote_basic/cryptonote_basic.h#L308-L309

This isn't an issue for current transactions, as from hard fork 12 Monero
requires all inputs to have the same number of decoys, but before that Monero
would reject RCT txs with differing ring lengths. Monero would deserialize each
input's signature using the ring length of the first, so the signatures for
inputs other than the first would have a different (wrong) number of elements
for that input, meaning the signature is invalid.

But as we were using the ring length of each input, which arguably is the
*correct* way, we would approve of transactions with inputs differing in ring
lengths (see the sketch after this entry).

* Check that there is more than one ring member for MLSAG signatures.

ac02af9286/src/ringct/rctSigs.cpp (L462)
2024-01-05 00:02:16 -05:00
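
A minimal sketch of the adopted rule, with hypothetical types (monero-serai's
actual structures differ):

    struct Input {
      key_offsets: Vec<u64>,
    }

    // Monero sizes every RCT input signature by the first input's ring length,
    // so deserialization must do the same instead of using each input's own
    // (possibly differing) ring length.
    fn signature_ring_len(inputs: &[Input]) -> Option<usize> {
      inputs.first().map(|input| input.key_offsets.len())
    }
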
Luke Parker
617ec604ee cargo update
Resolves the deny CI failure.
2024-01-04 01:46:26 -05:00
Justin Berman
265261d3ba monero: require seed lang when decoding seed (#502)
* monero: require seed lang when decoding seed

- Require the seed language when decoding a Classic|Polyseed seed string
	- As per https://github.com/monero-project/monero/issues/9089 and https://github.com/tevador/polyseed/issues/11
	- Fixes #478
	- Implementation note: I reused the `SeedType` enum and required it as a param to `Seed::from_string` because it seemed simplest, but perhaps there is a cleaner way to require the seed lang.
- Made sure the print statements from #487 print the seed as early as possible to help debug future issues
- A future PR could support deducing which languages a seed decodes to in order to support the UX @kayabaNerve suggested in https://github.com/monero-project/monero/issues/9089:
	- "Wallets can also try to abstract [language specification], by decoding with all languages, and only asking the user if/when multiple valid options show up ("Is this seed Spanish or Italian?")."

* Lint
2024-01-04 01:32:42 -05:00
Luke Parker
7eb388e546 PR to track down CI failures (#501)
* Use an extended timeout for DKGs specifically

* Add a log statement when message-queue connection fails

* Add a 60 second keep-alive to connections

* Use zalloc for processor/message-queue/coordinator

An additional layer which protects us against edge cases with Zeroizing
(objects which don't support it or don't miss it).

* Add further logs to message-queue

* Further increase re-attempt timeouts in CI

* Remove misplaced continue in message-queue client

Fixes observed CI failures.

* Revert "Further increase re-attempt timeouts in CI"

This reverts commit 3723530cf6.
2024-01-04 01:08:13 -05:00
Luke Parker
6c8040f723 Restore release for serai-node to obtain sane bootup times 2023-12-30 23:59:00 -05:00
Luke Parker
02776c54a8 Increase reattempt delays in the GH CI, which is extremely latent 2023-12-30 22:11:04 -05:00
Luke Parker
ec8dfd4639 Correct SignData serialization test from creating 256 signers of data
This overflows the allowed u8 and caused a CI failure. The actual
code/assumption is fine.
2023-12-30 19:08:29 -05:00
Luke Parker
99e05e4e5e Add patches folder to runtime Dockerfile 2023-12-30 18:36:43 -05:00
Luke Parker
a72b547824 Add patches folder to Dockerfiles 2023-12-30 13:49:41 -05:00
Luke Parker
bad3d210ba rust 1.75 2023-12-30 03:26:32 -05:00
Luke Parker
8c676d98c5 Tweaks from cargo update and patches 2023-12-30 03:26:11 -05:00
Luke Parker
890b70212a Patch matches, mach 2023-12-30 02:52:05 -05:00
Luke Parker
9f7140c3db Patch is-terminal to the std-included IsTerminal 2023-12-30 02:48:26 -05:00
Luke Parker
8b26a85faa Add patches for directories-next/option-ext
The rationale is detailed in the root Cargo.toml.

While I don't personally mind MPL dependencies, even if I don't prefer them
(they're allowed in the deny.toml for a reason), I do mind the pointless scope
creep and wish to highlight how little was actually used from the crate by
re-defining it as the single function.

We could also fork directories-next, or directories, and remove the usage of
option-ext per https://github.com/dirs-dev/dirs-sys-rs/issues/24, yet that'd be
a much larger task than what was done here.

In the future, it may be beneficial to submit a PR to wasmtime replacing
directories-next with home, a cargo-team maintained library to get the home
directory and associated folders. An example migration can be found at
https://github.com/harryfei/which-rs/pull/80.
2023-12-30 02:44:33 -05:00
Luke Parker
24ea65eae9 cargo update 2023-12-30 02:36:51 -05:00
Luke Parker
fff8dcb827 Document usage of latest_decided in AuthorityDiscoveryApi 2023-12-23 21:28:50 -05:00
Luke Parker
2b23252b4c Add derive feature to Zeroize in crypto/ciphersuite
It was missing.
2023-12-23 02:13:32 -05:00
Luke Parker
b493e3e31f Validator DHT (#494)
* Route validators for any active set through sc-authority-discovery

Additionally adds an RPC route to retrieve their P2P addresses.

* Have the coordinator get peers from substrate

* Have the RPC return one address, not up to 3

Prevents the coordinator from believing it has 3 peers when it has one.

* Add missing feature to serai-client

* Correct network argument in serai-client for p2p_validators call

* Add a test in serai-client to check DHT population with a much quicker failure than the coordinator tests

* Update to latest Substrate

Removes distinguishing BABE/AuthorityDiscovery keys which causes
sc_authority_discovery to populate as desired.

* Update to a properly tagged substrate commit

* Add all dialed-to peers to GossipSub

* cargo fmt

* Reduce common code in serai-coordinator-tests with a more involved new_test

* Use a recursive async function to spawn `n` DockerTests with the necessary networking configuration

* Merge UNIQUE_ID and ONE_AT_A_TIME

* Tidy up the new recursive code in tests/coordinator

* Use a Mutex in CONTEXT to let it be set multiple times

* Make complementary edits to full-stack tests

* Augment coordinator P2p connection logs

* Drop lock acquisitions before recursing

* Better scope lock acquisitions in full-stack, preventing a deadlock

* Ensure OUTER_OPS is reset across the test boundary

* Add cargo deny allowance for dockertest fork
2023-12-22 21:09:18 -05:00
Luke Parker
00774c29d7 Replace remaining direct uses of futures with futures_util
Slight downscope which helps combat the antipattern which is the futures glob
crate. While futures_util is still a large crate, it has better defaults and
is smaller by virtue of not pulling the executor.
2023-12-18 19:45:08 -05:00
Luke Parker
a4c82632fb Use pub(crate) for create_db items, not pub 2023-12-18 17:15:02 -05:00
Luke Parker
c8747e23c5 Remove offline participants from future DKG protocols so long as the threshold is met
Makes RemoveParticipantDueToDkg a voted-on event instead of a Provided.
This removes the requirement for offline parties to be able to fully validate
blame, yet unfortunately lets a dishonest supermajority have an honest node
label any arbitrary node as dishonest.

Corrects a variety of `.i(...)` calls which panicked when they shouldn't have.

Cleans up a couple no-longer-used storage values.
2023-12-18 17:14:51 -05:00
Luke Parker
008da698bc Correct the nightly version, again
Turns out 12-04 has 12-02 in its `--version`, and accordingly 12-04 was the
intended pin.
2023-12-17 05:22:06 -05:00
Luke Parker
c2fffb9887 Correct a couple years of accumulated typos 2023-12-17 02:06:51 -05:00
Luke Parker
9c3329abeb Bump nightly version by a single day to resolve a clippy error 2023-12-17 01:51:52 -05:00
Luke Parker
065d314e2a Further expand clippy workspace lints
Achieves a notable amount of reduced async and clones.
2023-12-17 00:04:49 -05:00
Luke Parker
ea3af28139 Add workspace lints 2023-12-17 00:04:47 -05:00
akildemir
c40ce00955 Slash bad validators (#468)
* implement general design

* add slashing

* bug fixes

* fix pr comments

* misc fixes

* fix grandpa abi call type

* Correct rebase artifacts I introduced

* Cleanups and corrections

1) Uses vec![] for the OpaqueKeyProof as there's no value in passing it around
2) Remove usage of Babe/Grandpa Offences for tracking if an offence is known
   for checking if we can slash. If we can slash, no prior offence must have been
   known.
3) Rename DisabledIndices to SeraiDisabledIndices, drop historical data for
   current session only.
4) Doesn't remove from the pre-declared upcoming Serai set upon slash due to
   breaking light clients.
5) Into/From instead of AsRef for KeyOwnerProofSystem's generic to ensure
   safety of the conversion.

* Correct deduction from TotalAllocatedStake on slash

It should only be done if in set and only with allocations contributing to
TotalAllocatedStake (Allocation + latest session's PendingDeallocation).

* Changes meant for prior commit

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-16 17:44:08 -05:00
Luke Parker
74a68c6f68 cargo update
Resolves a vulnerability in zerocopy.
2023-12-15 00:54:39 -05:00
Luke Parker
2532423d42 Remove the RemoveParticipant protocol for having new DKGs specify the participants which were removed
Obvious code cleanup is obvious.
2023-12-14 23:51:57 -05:00
Luke Parker
b60e3c2524 Replace PSTTrait and PstTxType with PublishSeraiTransaction 2023-12-14 16:06:08 -05:00
Luke Parker
77edd00725 Handle the combination of DKG removals with re-attempts
With a DKG removal comes a reduction in the amount of participants which was
ignored by re-attempts.

Now, we determine n/i based on the parties removed, and deterministically
obtain the context of who was removed.
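
A minimal sketch of determining the new `i` values (hypothetical helper;
one-indexed `i` per the dkg crate's convention):

// Surviving parties shift down by the amount of removed parties before them;
// n for the new protocol is the prior n minus the amount removed
fn remap_i(removed: &[u16], old_i: u16) -> Option<u16> {
  // Removed parties have no index in the new protocol
  if removed.contains(&old_i) {
    return None;
  }
  let shift = removed.iter().filter(|removed_i| **removed_i < old_i).count();
  Some(old_i - u16::try_from(shift).unwrap())
}

fn main() {
  // With parties {1, 2, 3} and party 2 removed, party 3 becomes party 2
  assert_eq!(remap_i(&[2], 3), Some(2));
  assert_eq!(remap_i(&[2], 2), None);
}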
2023-12-13 14:03:07 -05:00
hinto.janai
884b6a6fec seed: print seed info in tests 2023-12-13 10:18:34 -05:00
Luke Parker
6a172825aa Reattempts (#483)
* Schedule re-attempts and add a (not filled out) match statement to actually execute them

A comment explains the methodology. To copy it here:

"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.

The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.

This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""

Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.

* Have the Substrate scanner track DKG removals/completions for the Tributary code

* Don't keep trying to publish a participant removal if we've already set keys

* Pad out the re-attempt match a bit more

* Have CosignEvaluator reload from the DB

* Correctly schedule cosign re-attempts

* Actually spawn new DKG removal attempts

* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing

The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.

* Clarify a pair of TODOs in the coordinator

* Remove old TODO

* Final comment cleanup

* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler

It's in ms and I assumed it was in s.

* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist

* Bug fix and pointless oddity removal

We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).

The coordinator tests randomly generated the Batch ID since it was prior an
opaque byte array. While that didn't break the test, it was pointless and did
make the already-succeeded check before re-attempting impossible to hit.

* Add log statements, correct dead-lock in coordinator tests

* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts

* Further bump timeout by a minute

AFAICT, GH failed by just a few seconds.

This also is worst-case in a single instance, making it fine to be decently long.

* Further further bump timeout due to lack of distinct error
2023-12-12 12:28:53 -05:00
Luke Parker
b297b79f07 Bitcoin 26.0
Also uses `uname -m` to decide what platform to download the binary for.
2023-12-12 09:56:30 -05:00
Luke Parker
3cf46338ee Have Bitcoin's send_raw_transaction considered succeeded if already sent 2023-12-12 01:05:44 -05:00
Luke Parker
2d1443eb8a Add machete CI job now that machete allows whitelisting false positives 2023-12-11 23:06:44 -05:00
Luke Parker
4de4c186b1 Remove redundant fields from dex-pallet, add cargo machete ignores 2023-12-11 07:47:23 -05:00
Luke Parker
11fdb6da1d Coordinator Cleanup (#481)
* Move logic for evaluating if a cosign should occur to its own file

Cleans it up and makes it more robust.

* Have expected_next_batch return an error instead of retrying

While convenient to offer an error-free implementation, it potentially caused
very long lived lock acquisitions in handle_processor_message.

* Unify and clean DkgConfirmer and DkgRemoval

Does so via adding a new file for the common code, SigningProtocol.

Modifies from_cache to return the preprocess with the machine, as there's no
reason not to. Also removes an unused Result around the type.

Clarifies the security around deterministic nonces, removing them for
saved-to-disk cached preprocesses. The cached preprocesses are encrypted as the
DB is not a proper secret store.

Moves arguments always present in the protocol from function arguments into the
struct itself.

Removes the horribly ugly code in DkgRemoval, fixing multiple issues present
with it which would cause it to fail on use.

* Set SeraiBlockNumber in cosign.rs as it's used by the cosigning protocol

* Remove unnecessary Clone from lambdas in coordinator

* Remove the EventDb from Tributary scanner

We used per-Transaction DB TXNs so that on error, we don't have to rescan the
entire block, only the rest of it. We prevented scanning multiple transactions
by tracking which we already had.

This is over-engineered and not worth it.

* Implement borsh for HasEvents, removing the manual encoding

* Merge DkgConfirmer and DkgRemoval into signing_protocol.rs

Fixes a bug in DkgConfirmer which would cause it to improperly handle indexes
if any validator had multiple key shares.

* Strictly type DataSpecification's Label

* Correct threshold_i_map_to_keys_and_musig_i_map

It didn't include the participant's own index and accordingly was offset.

* Create TributaryBlockHandler

This struct contains all variables prior passed to handle_block and stops them
from being passed around again and again.

This also ensures fatal_slash is only called while handling a block, as
needed since it expects to operate under perfect consensus.

* Inline accumulate, store confirmation nonces with shares

Inlining accumulate makes sense due to the amount of data which needed to be
passed to it.

Storing confirmation nonces with shares ensures that both are available or
neither. Prior, one could be yet the other may not have been (requiring an
assert in runtime to ensure we didn't bungle it somehow).

* Create helper functions for handling DkgRemoval/SubstrateSign/Sign Tributary TXs

* Move Label into SignData

All of our transactions which use SignData end up with the same common usage
pattern for Label, justifying this.

Removes 3 transactions, explicitly de-duplicating their handlers.

* Remove CurrentlyCompletingKeyPair for the non-contextual DkgKeyPair

* Remove the manual read/write for TributarySpec for borsh

This struct doesn't have any optimizations gained by the manual impl. Using
borsh reduces our scope.

* Use temporary variables to further minimize LoC in tributary handler

* Remove usage of tuples for non-trivial Tributary transactions

* Remove serde from dkg

serde could be used to deserialize internally inconsistent objects which could
lead to panics or faults.

The BorshDeserialize derives have been replaced with a manual implementation
which won't produce inconsistent objects.

* Abstract Future generics using new trait definitions in coordinator

* Move published_signed_transaction to tributary/mod.rs to reduce the size of main.rs

* Split coordinator/src/tributary/mod.rs into spec.rs and transaction.rs
2023-12-10 20:21:44 -05:00
Luke Parker
6caf45ea1d Downscope usage of futures 2023-12-10 19:32:52 -05:00
hinto.janai
32bea92742 message-queue: remove (*) 2023-12-08 10:47:42 -05:00
Luke Parker
7122e0faf4 Cache the block's events within TemporalSerai
Event retrieval was prior:
- Retrieve all events in the block, which may be hundreds of KB
- Filter to just a few

Since it's frequent to want multiple sets of events, each filtered in their own
way, this caused the retrieval to happen multiple times. Now, it only will
happen once.

Also has the scoped clients take a reference, not an owned TemporalSerai.
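
A minimal sketch of the caching pattern (hypothetical types; assumes tokio's
OnceCell):

use tokio::sync::OnceCell;

type Event = Vec<u8>; // stand-in for the actual runtime event type

struct TemporalSerai {
  events: OnceCell<Vec<Event>>,
}

impl TemporalSerai {
  async fn events(&self, filter: impl Fn(&Event) -> bool) -> Vec<Event> {
    self
      .events
      // The retrieval, potentially hundreds of KB, now happens at most once
      .get_or_init(|| async { todo!("fetch all events in the block") })
      .await
      .iter()
      .filter(|event| filter(event))
      .cloned()
      .collect()
  }
}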
2023-12-08 10:46:10 -05:00
Justin Berman
397fca748f monero-serai: make it clear that not providing a change address is fingerprintable (#472)
* Make it clear not providing a change address is fingerprintable

When no change address is provided, all change is shunted to the
fee. This PR makes it clear to the caller that it is fingerprintable
when the caller does this.

* Review comments
2023-12-08 07:42:02 -05:00
David Bell
16b22dd105 Convert coordinator/substrate/db to use create_db macro (#436)
* chore: implement create_db for substrate (fix broken branch)

* Correct rebase artifacts

* chore: remove todo statement

* chore: rename BlockDb to NextBlock

* chore: return empty tuple instead of empty array for event storage

* Finish rebasing

* .Minor tweaks to remove leftover variables

These may be rebase artifacts.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-08 05:12:16 -05:00
hinto.janai
a6947d6d21 bulletproofs: avoid mut 2023-12-08 04:30:22 -05:00
Luke Parker
5c047ebe74 Log the reason for yielding BlockError::Fatal to Tendermint from the Tributary 2023-12-07 09:30:25 -05:00
econsta
91a024e119 coordinator/src/db.rs db macro implementation (#431)
* coordinator/src/db.rs db macro implementation

* fixed fmt errors

* converted txn functions to get/set counterparts

* use take_signed_transaction function

* fix for two of the tests

* Misc tweaks

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-07 09:30:11 -05:00
Luke Parker
c511a54d18 Move serai-client off serai-runtime, MIT licensing it
Uses a full-fledged serai-abi to do so.

Removes use of UncheckedExtrinsic as a pointlessly (for us) length-prefixed
block with a more complicated signing algorithm than advantageous.

In the future, we should considering consolidating the various primitives
crates. I'm not convinced we benefit from one primitives crate per pallet.
2023-12-07 02:30:09 -05:00
Luke Parker
6416e0079b Add ABI crate
Call and Event are both from the pallets, which are AGPL licensed. Accordingly,
they make serai-client AGPL licensed when serai-client must end up MIT
licensed. This creates a MIT-licensed variant of Calls and Events such that
they can be used by serai-client, enabling transitioning it to MIT.

Relevant to https://github.com/serai-dex/serai/issues/337.
2023-12-06 09:56:43 -05:00
Luke Parker
7768ea90ad signals-primitives, plus various minor tweaks 2023-12-06 09:53:06 -05:00
Luke Parker
6e15bb2434 Correct the LICENSE files in serai-primitives and validator-sets-primitives
While properly tagged with the MIT license in Cargo.toml, they had AGPL files.
2023-12-06 07:10:36 -05:00
Luke Parker
d1d5ee6b3d Make InSet a double map
Reduces amount of code, allows removing the custom iter for PrefixIterator.
2023-12-06 05:39:00 -05:00
Luke Parker
82f7342372 cargo update
Updates off a yanked version of zerocopy, fixing the failing deny CI.

Bites the bullet on windows-sys 0.52. While I was hoping to update everything
at once, unfortunately tokio won't update until March (see
https://github.com/tokio-rs/mio/pull/1725). I don't want to withhold these
updates for that long.
2023-12-06 05:01:08 -05:00
Luke Parker
3a6c7ad796 Use TX IDs for Bitcoin Eventualities
They're a bit more binding, smaller, provided by the Rust bitcoin library,
sane, and we don't have to worry about malleability since all of our inputs are
SegWit.
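
A sketch of the resulting check (assuming the bitcoin crate's
Transaction::txid; the Eventuality struct is simplified):

use bitcoin::{Transaction, Txid};

struct Eventuality {
  txid: Txid,
}

impl Eventuality {
  // With SegWit inputs, third parties can't malleate the TXID,
  // so an exact match is a sound completion check
  fn matches(&self, tx: &Transaction) -> bool {
    tx.txid() == self.txid
  }
}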
2023-12-06 04:37:11 -05:00
Luke Parker
62fa31de07 If the pool has yet to start, insert a price of 0
The test failures were caused by not inserting any price, causing the first
price to immediately become the oraclized price. While that's not inherently
invalid, suggesting the tests should've been the ones updated, it opens an
exploit where whoever first adds liquidity has the opportunity to set a
ridiculous price and DoS the set. Not oraclizing until we have an entire
period, achieved by inserting 0s during the initial blocks, ensures an open
launch for such price discovery.
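
A rough illustration of why the initial 0s protect launch (hypothetical
median-based oraclization; the actual formula may differ):

fn oraclized(window: &[u64]) -> u64 {
  let mut sorted = window.to_vec();
  sorted.sort_unstable();
  sorted[sorted.len() / 2]
}

fn main() {
  // Six 0s from the pool's pre-start blocks and a first LP setting an absurd
  // price: the oraclized price remains 0 until an entire period of genuine
  // prices accumulates
  assert_eq!(oraclized(&[0, 0, 0, 0, 0, 0, 1_000_000, 5, 7]), 0);
}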
2023-12-05 12:29:36 -05:00
Luke Parker
095ac50ba7 Correct div by 0 I introduced 2023-12-05 10:35:27 -05:00
Luke Parker
8cc0adf281 Don't allow immediate deallocations for active validators even if the key shares remain the same
There's an exploit where the prior set improperly mints coins, the new set
occurs (resetting the oracle), and they immediately deallocate 49.9% of their
coins (which is more than enough to achieve profitability).

Now, anyone in set must wait until after the next set completes to perform any
deallocation, enabling time to halt upon improper mints.
2023-12-05 09:36:41 -05:00
Luke Parker
91905284bf Have the CI check the lockfile isn't stale
Prevents a commit from passing review, only for the next commit to 'add' 100
new dependencies.
2023-12-05 09:13:48 -05:00
akildemir
4ebfae0b63 Ensure economic security on validator sets (#459)
* add price oracle

* tidy up

* add todo

* bug fixes

* fix pr comments

* Use spot price, tweak some formulas

Also cleans nits.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-05 08:52:50 -05:00
Luke Parker
746bf5c6ad Rebuild Dockerfiles 2023-12-05 04:51:06 -05:00
Luke Parker
6e9ce3ac4f Pin mimalloc to the commit hash for 2.1.2 2023-12-05 03:29:13 -05:00
Luke Parker
797ed49e7b DKG Removals (#467)
* Update ValidatorSets with a remove_participant call

* Add DkgRemoval, a sign machine for producing the relevant MuSig signatures

* Don't use position-dependent u8s yet Public when removing validators from the DKG

* Add DkgRemovalPreprocess, DkgRemovalShares

Implementation is via a new publish_tributary_tx lambda.

This is code is a copy-pasted mess which will need to be cleaned up.

* Only allow non-removed validators to vote for removals

Otherwise, it's risked that the remaining validators fall below 67% of the
original set.

* Correct publish_serai_tx, which was prior publish_set_keys in practice
2023-12-04 07:04:44 -05:00
Luke Parker
99c6375605 fmt 2023-12-03 00:06:13 -05:00
Luke Parker
6e8a5f9cb1 cargo update, remove unneeded dependencies from the processor 2023-12-03 00:05:03 -05:00
Luke Parker
4446a369b1 Remove old TODO 2023-12-03 00:04:58 -05:00
Luke Parker
ce038972df Use as_slice instead of as_ref
I don't know why this didn't trigger for me. Potentially a difference in the
month of the nightly clippy?
2023-12-03 00:04:58 -05:00
Luke Parker
2f6fb93f87 Bridge the gap between the prior two commits 2023-12-03 00:04:58 -05:00
Luke Parker
1e6cb8044c Domain Separate the coordinator's tributary transaction hashes 2023-12-03 00:04:58 -05:00
Luke Parker
1ca66b846a Use multiple nonces in the Tributary 2023-12-03 00:04:58 -05:00
Luke Parker
c82d1283af Move the coordinator to expect multiple nonces
This mirrors how Provided TXs handle topics.

Now, instead of managing a global nonce stream, we can use items such as plan
IDs as topics.

This massively benefits re-attempts, as otherwise we'd need a NOP TX to clear unused
nonces.
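
A minimal sketch of per-topic nonce streams (the Topic type and decider are
hypothetical; the real code persists this in the DB):

use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
enum Topic {
  Dkg,
  SubstrateSign([u8; 32]),
  Sign([u8; 32]), // e.g. keyed by plan ID
}

#[derive(Default)]
struct NonceDecider {
  next: HashMap<Topic, u32>,
}

impl NonceDecider {
  // Each topic gets its own nonce stream, so an abandoned re-attempt for one
  // plan doesn't leave a hole in a global stream needing a NOP TX to fill
  fn allocate(&mut self, topic: Topic) -> u32 {
    let nonce = self.next.entry(topic).or_insert(0);
    let allocated = *nonce;
    *nonce += 1;
    allocated
  }
}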
2023-12-03 00:04:58 -05:00
Luke Parker
1c3e8af922 Use proper English in Monero RPC constructor 2023-12-02 00:46:34 -05:00
Boog900
28c2b61933 make Monero HTTP RPC timeout configurable. 2023-12-01 23:34:56 -05:00
GitHub Actions
cb1ef0d71f Update nightly 2023-12-01 00:24:51 -05:00
Luke Parker
b823413c9b Use parity-db in current Dockerfiles (#455)
* Use redb and in Dockerfiles

The motivation for redb was to remove the multiple rocksdb compile times from
CI.

* Correct feature flagging of coordinator and message-queue in Dockerfiles

* Correct message-queue DB type alias

* Use consistent table typing in redb

* Correct rebase artifacts

* Correct removal of binaries feature from message-queue

* Correct processor feature flagging

* Replace redb with parity-db

It still has much better compile times yet doesn't block when creating multiple
transactions. It also is actively maintained and doesn't grow our tree. The MPT
aspects are irrelevant.

* Correct stray Redb

* clippy warning

* Correct txn get
2023-11-30 04:22:37 -05:00
Luke Parker
d1122a6535 Fix no-std builds 2023-11-29 03:20:49 -05:00
Luke Parker
8040fedddf Have simple-request Response's borrow the Client to ensure it's not prematurely dropped 2023-11-29 01:16:18 -05:00
Luke Parker
51bb434239 Properly include the non-JSON HTTP result in Monero's RpcError
The CI for 695d1f0ecf actually errored with a
non-JSON response, hence the value in this.
2023-11-29 00:43:59 -05:00
Luke Parker
f0ff3a18d2 Use debug builds in our Dockerfiles to reduce CI times (#462)
* Use debug builds in our Dockerfiles to reduce CI times

Also enables only spawning the mdns service when debug in the coordinator.

* Correct underflow in processor

Previously undetected due to release builds not having bounds checks enabled.

* Restore Serai release due to CI/RPC failures caused by compiling it in debug mode

This is *probably* worth an issue filed upstream, if it can be tracked down.

* Correct failing debug asserts in Monero

These debug asserts assumed there was a change address to take the remainder.
If there's no change address, the remainder is shunted to the fee, causing the
fee to be distinct from the estimate.

We presumably need to modify monero-serai such that change: None isn't valid,
and users must use Change::Fingerprintable(None).
2023-11-29 00:24:37 -05:00
Luke Parker
695d1f0ecf Remove subxt (#460)
* Remove subxt

Removes ~20 crates from our Cargo.lock.

Removes downloading the metadata and enables removing the getMetadata RPC route
(relevant to #379).

Moves forward #337.

Done now due to distinctions in the subxt 0.32 API surface which make it
justifiable to not update.

* fmt, update due to deny triggering on a yanked crate

* Correct the handling of substrate_block_notifier now that it's ephemeral, not long-lived

* Correct URL in tests/coordinator from ws to http
2023-11-28 02:29:50 -05:00
Luke Parker
571195bfda Resolve #360 (#456)
* Remove NetworkId from processor-messages

Because intent binds to the sender/receiver, it's not needed for intent.

The processor knows what the network is.

The coordinator knows which to use because it's sending this message to the
processor for that network.

Also removes the unused zeroize.

* ProcessorMessage::Completed use Session instead of key

* Move SubstrateSignId to Session

* Finish replacing key with session
2023-11-26 12:14:23 -05:00
Luke Parker
b79cf8abde Move message-queue to a fully binary representation (#454)
* Move message-queue to a fully binary representation

Additionally adds a timeout to the message queue test.

* coordinator clippy

* Remove contention for the message-queue socket by using per-request sockets

* clippy
2023-11-26 11:22:18 -05:00
Luke Parker
c6c74684c9 Increase timeouts in processor tests
For some reason, these constantly failed for me while waiting for the key pair
to confirm. This adds a sleep during the mining process, to ensure blocks
actually have time between them, and mines several more blocks to handle the
median code recently added.
2023-11-25 04:09:07 -05:00
Luke Parker
de14687a0d Fix the processor's Monero time monotonicity
Monero doesn't assert the time increases with each block, solely that it
doesn't decrease. Now, the block number is added to the time to ensure it
increases.
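
The adjustment itself is a one-liner (sketch):

// Monero solely guarantees the time doesn't decrease across blocks; adding
// the strictly increasing block number yields a strictly increasing clock
fn monotonic_time(block_time: u64, block_number: u64) -> u64 {
  block_time + block_number
}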
2023-11-25 04:07:31 -05:00
Luke Parker
d60e007126 Add a binaries feature to the processor to reduce dependencies when used as a lib
processor isn't intended to be used as a library, yet serai-processor-tests
does pull it in as a lib. This caused serai-processor-tests to need to compile
rocksdb, which added multiple minutes to the compilation time.
2023-11-25 04:04:52 -05:00
Luke Parker
b296be8515 Replace bincode with borsh (#452)
* Add SignalsConfig to chain_spec

* Correct multiexp feature flagging for rand_core std

* Remove bincode for borsh

Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.

Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.

This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.

* Make serde optional, minimize usage

* Make borsh an optional dependency of substrate/ crates

* Remove unused dependencies

* Use [u8; 64] where possible in the processor messages

* Correct borsh feature flagging
2023-11-25 04:01:11 -05:00
Luke Parker
6b2876351e Add file meant for prior commit 2023-11-24 21:41:59 -05:00
Luke Parker
0ea90d054d Enable the signals pallet to halt networks' publication of `Batch`s
Relevant to #394.

Prevents hand-over, as hand-over occurs via a `Batch` publication.

Expects a new protocol to restore functionality (after a retirement of the
current protocol).
2023-11-24 19:56:57 -05:00
Luke Parker
eb1d00aa55 Update TX format in coordinator test 2023-11-23 11:45:00 -05:00
Luke Parker
372149c2cc Various simplifications re: Serai transactions
Removes PairSigner for the pair directly.

Resets spec_version to 1.

Defines the extrinsic without its length prefix, only prefixing during publish.
2023-11-23 00:02:01 -05:00
Luke Parker
c6cf33e370 Install wasm toolchain before ADDing files to docker images
Enables caching the wasm toolchain.
2023-11-22 18:07:58 -05:00
Luke Parker
f58478ad87 Add hex as a dependency to serai-client 2023-11-22 18:06:10 -05:00
Luke Parker
88a1726399 Fixes for prior commit 2023-11-22 16:24:50 -05:00
Luke Parker
08e6669403 Replace substrate/client's use of Payload with usage of RuntimeCall
Gains explicit typing.
2023-11-22 11:23:04 -05:00
akildemir
fcfdadc791 Integrate session pallet into validator-sets pallet (#440)
* remove pallet-session

* Store key shares in InSet

* integrate grandpa to vs-pallet

* integrate pallet babe

* remove pallet-session & authority discovery from runtime

* update the grandpa pallet path

* cargo update grandpa

* cargo update substrate

* Misc tweaks

Sets validators for BABE/GRANDPA in chain_spec, per Akil's realization that was
the missing piece.

* fix pr comments

* bug fix & tidy up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-22 06:22:46 -05:00
Luke Parker
07c657306b Remove duplicated genesis presence in Tributary Scanner DB keys
This wasted 32 bytes per entry in the DB (ignoring de-duplication possible by
the DB layer).
2023-11-22 04:43:49 -05:00
David Bell
9df8c9476e Convert tributary to use create_db macro (#448)
* chore: convert tributary to use create_db macro

* chore: fix fmt

* chore: break long line
2023-11-22 04:17:51 -05:00
econsta
9ab2a2cfe0 processor/db.rs macro implementation (#437)
* processor/db.rs macro implementation

* ran clippy and fmt

* incorporated recommendations

* used empty tuple instead of [u8; 0]

* ran fmt
2023-11-22 03:58:26 -05:00
Luke Parker
a315040aee if-watch 3.2.0 2023-11-22 01:24:17 -05:00
Luke Parker
1822e31142 Grow the yamux buffers to exceed the maximum message size 2023-11-21 02:01:41 -05:00
Luke Parker
6efc313d76 Add/update msrv for common/*, crypto/*, coins/*, and substrate/*
This includes all published crates.
2023-11-21 01:19:40 -05:00
Luke Parker
4b85d4b03b Rename SubstrateSigner to BatchSigner 2023-11-20 22:21:52 -05:00
econsta
05d8c32be8 Multisig db (#444)
* implement db macro for processor/multisigs/db.rs

* ran fmt

* cargo +nightly fmt

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 21:40:54 -05:00
econsta
8634d90b6b implement db macro for processor/signer.rs (#438)
* implement db macro for processor/signer.rs

* Use ()

* CompletedDb -> CompletionsDb

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:45 -05:00
econsta
aa9daea6b0 implement db macro for processor/substrate_signer (#439)
* implement db macro for processor/substrate_signer

* Use ()

* Correct AttemptDb usage of ()

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:35 -05:00
Luke Parker
942873f99d Non-intrusive cargo update 2023-11-20 02:31:47 -05:00
Luke Parker
7c9b581723 Remove dtolnay's rust-toolchain action (#442)
* Remove dtolnay's rust-toolchain action

I believe our rust-toolchain.toml handles its use case exactly.

I don't believe this'll work, as it'd require rustup to install a cargo stub
before any toolchain is installed, yet I want to confirm it doesn't.

* Place quotes around nightly toolchain version

* Put toolchain before options to resolve what appears to be a bug in rustup's help strings

* Add wasm32-unknown-unknown to clippy workflow
2023-11-20 02:31:22 -05:00
Luke Parker
b37a0db538 Move where the RISC-V toolchain is installed during the no-std workflow 2023-11-19 22:45:51 -05:00
Luke Parker
797604ad73 Replace usage of io::Error::new(io::ErrorKind::Other, with io::Error::other
Newly possible with Rust 1.74.
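
i.e. (sketch):

fn to_io_error(e: String) -> std::io::Error {
  // Prior: std::io::Error::new(std::io::ErrorKind::Other, e)
  std::io::Error::other(e)
}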
2023-11-19 18:31:37 -05:00
Luke Parker
05b975dff9 Rust 1.74
Adds a Rust toolchain file to be less disruptive to developers who don't keep
their toolchain synchronized (by now having rustup automatically synchronize).

Hopefully helps resolve how +nightly clippy may pass for the coordinator, yet
building would fail due to stable's (hopefully prior?) failure to model some
async functions re: Send/Sync.

Also adds rust-src as a component in preparation of
https://github.com/paritytech/polkadot-sdk/pull/2217
2023-11-19 17:47:46 -05:00
Luke Parker
74a8df4c7b Add a new primitive of a DB-backed channel
The coordinator already had one of these, albeit implemented much worse than
the one now properly introduced. It had to either be sending or receiving,
whereas the new one can do both at the same time.

This replaces said instance and enables pleasant patterns when implementing the
processor/coordinator.
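
A minimal sketch of the primitive (hypothetical key scheme over an in-memory
stand-in for a DB transaction; not the actual serai-db API):

use std::collections::HashMap;

#[derive(Default)]
struct Txn(HashMap<Vec<u8>, Vec<u8>>);
impl Txn {
  fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
    self.0.get(key).cloned()
  }
  fn put(&mut self, key: &[u8], value: &[u8]) {
    self.0.insert(key.to_vec(), value.to_vec());
  }
}

fn counter(txn: &Txn, key: &[u8]) -> u64 {
  txn.get(key).map_or(0, |raw| u64::from_le_bytes(raw.try_into().unwrap()))
}

// Both pointers live in the DB, so the channel survives restarts and can be
// sending and receiving at the same time
fn send(txn: &mut Txn, msg: &[u8]) {
  let id = counter(txn, b"send");
  txn.put(&[b"msg".as_slice(), id.to_le_bytes().as_slice()].concat(), msg);
  txn.put(b"send", &(id + 1).to_le_bytes());
}

fn recv(txn: &mut Txn) -> Option<Vec<u8>> {
  let id = counter(txn, b"recv");
  let msg = txn.get(&[b"msg".as_slice(), id.to_le_bytes().as_slice()].concat())?;
  txn.put(b"recv", &(id + 1).to_le_bytes());
  Some(msg)
}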
2023-11-19 02:05:01 -05:00
Luke Parker
bd14db9f76 Version pin foundry
Fixes #116 and #441.
2023-11-19 01:21:43 -05:00
Luke Parker
25c02c1311 Ban Serai from setting keys
They were already banned from publishing `Batch`s, yet the ability to set keys
enabled changing how certain functionality in the coordinator operated.
Removing this malleability ensures operation as expected.
2023-11-18 21:28:16 -05:00
Luke Parker
b018fc432c Add deletion to create_db 2023-11-18 20:56:58 -05:00
Luke Parker
6e4ecbc90c Support trailing commas in create_db 2023-11-18 20:54:37 -05:00
Luke Parker
be48dcc4a4 Use multiple LibP2P topics
We still need a peering protocol... Hopefully, we can read peers off of the
Substrate node's DHT.
2023-11-18 20:37:55 -05:00
Luke Parker
25066437da Attempt to resolve #434
Worsens the coordinator tests' ensuring of the validity of the cosigning
protocol, yet should actually properly model it.
2023-11-18 00:12:36 -05:00
Luke Parker
db49a63c2b Bump zeroize to appease cargo deny 2023-11-16 14:18:00 -05:00
Luke Parker
ee50f584aa Add saner log statements to the coordinator
Disables trace on every single P2P message.

Logs a short-form of each transaction.
2023-11-16 13:37:39 -05:00
Luke Parker
14f3f330db Remove deadlock in cosign_evaluator 2023-11-16 13:32:15 -05:00
Luke Parker
30a77d863f Include non-JSON response from Monero node in error 2023-11-15 22:57:37 -05:00
Luke Parker
a03a1edbff Remove needless transposition in coordinator 2023-11-15 22:49:58 -05:00
Luke Parker
c03afbe03e Downgrade RustCrypto packages which violate Rust's interpretation of semver 2023-11-15 21:24:52 -05:00
Luke Parker
af611a39bf Restore runtime feature to hyper 2023-11-15 20:40:24 -05:00
Luke Parker
0d080e6d34 Remove unused dependency from dex-pallet 2023-11-15 20:24:33 -05:00
Luke Parker
369af0fab5 #339 addendum 2023-11-15 20:23:19 -05:00
Luke Parker
d25e3d86a2 Make TLS an optional feature of simple-request
Removes 14 crates from the tree when compiling the message-queue client.

Also performs a non-intrusive cargo update.
2023-11-15 17:24:11 -05:00
Luke Parker
96f1d26f7a Add a cosigning protocol to ensure finalizations are unique (#433)
* Add a function to deterministically decide which Serai blocks should be co-signed

Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.

* Get all active tributaries we're in at a specific block

* Add and route CosignSubstrateBlock, a new provided TX

* Split queued cosigns per network

* Rename BatchSignId to SubstrateSignId

* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it

* Handle the CosignSubstrateBlock provided TX

* Revert substrate_signer.rs to develop (and patch to still work)

Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.

* Route cosigning through the processor

* Add note to rename SubstrateSigner post-PR

I don't want to do so now in order to preserve the diff's clarity.

* Implement cosign evaluation into the coordinator

* Get tests to compile

* Bug fixes, mark blocks without cosigners available as cosigned

* Correct the ID Batch preprocesses are saved under, add log statements

* Create a dedicated function to handle cosigns

* Correct the flow around Batch verification/queueing

Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).

Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.

When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.

This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.

Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.

* Working full-stack tests

After the last commit, this only required extending a timeout.

* Replace "co-sign" with "cosign" to make finding text easier

* Update the coordinator tests to support cosigning

* Inline prior_batch calculation to prevent panic on rotation

Noticed when doing a final review of the branch.
2023-11-15 16:57:21 -05:00
Luke Parker
79e4cce2f6 Explicitly set features in modular-frost 2023-11-13 05:31:47 -05:00
Luke Parker
0c341e3546 Fix no-std builds 2023-11-13 05:19:53 -05:00
Luke Parker
9f0790fb83 Remove RecommendedTranscript from DKG MuSig
Resolves #391.

Given this code already wasn't modular/composable, this should be overall
equivalent regarding functionality and security. It's much less opinionated
though and has fewer dependencies.
2023-11-13 05:11:40 -05:00
Luke Parker
bb8e034e68 Test historic start times in tendermint-machine
Closes https://github.com/serai-dex/serai/issues/342.

Under ideal network conditions, this is fine. While I won't claim ideal network
conditions will occur IRL, b0fcdd3367 has the
Tributary rebroadcast messages and should brute-force its way into a
functioning system.
2023-11-13 00:43:35 -05:00
Luke Parker
3f7bdaa64b Make the output distribution cache only available under a feature
Enables a mode with reduced memory usage *and* increased safety given current
unsafety of the cache.

Relevant to https://github.com/serai-dex/serai/issues/415.
2023-11-13 00:24:54 -05:00
Luke Parker
351436a258 Dockerfile Parts (#428)
* De-duplicate Dockerfiles by using a bash file to concatenate common parts

Resolves #375.

Dockerfiles are still committed to the repo to avoid a dependency on bash.

* Add a CI job to confirm the committed dockerfiles are the currently generated ones

* Create dedicated Dockerfiles per processor network

Ensures the compromising of network-specific dependencies doesn't lead to a
compromise of the build process for all processors.

* Dockerfile corrections

* Correct call to build processor Docker image in tests/processor
2023-11-12 23:55:15 -05:00
David Bell
c328e5ea68 Convert coordinator/tributary/nonce_decider to use create_db macro (#423)
* chore: convert nonce_deicer to use create_db macro

* Restore pub NonceDecider

* Remove extraneous comma

I forgot to run git commit --amend on the prior commit :/

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 12:04:34 -05:00
Boog900
995734c960 Monero: add more legacy verify functions (#383)
* Add v1 ring sig verifying

* allow calculating signature hash for v1 txs

* add unreduced scalar type with recovery

I have added this type for Borromean sigs. The ee field can be a normal
scalar, as in the verify function the ee field is checked against a reduced
scalar, meaning for it to verify as correct, ee must be reduced.

* change block major/ minor versions to u8

this matches Monero

I have also changed a couple varint functions to accept the `VarInt`
trait

* expose `serialize_hashable` on `Block`

* add back MLSAG verifying functions

I still need to revert the commit removing support for >1 input MLSAG FULL

This adds a new rct type to separate Full and simple rct

* add back support for multiple inputs for RCT FULL

* comment `non_adjacent_form` function

also added `#[allow(clippy::needless_range_loop)]` around a loop, as satisfying clippy without a re-write would make the function worse.

* Improve Mlsag verifying API

* fix rebase errors

* revert the changes on `reserialize_chain`
plus other misc changes

* fix no-std

* Reduce the amount of rpc calls needed for `get_block_by_number`.
This function was causing me problems; every now and then, a node would return a block with a different number than requested.

* change `serialize_hashable` to give the POW hashing blob.

Monero calculates the POW hash and the block hash using *slightly* different blobs :/

* make ring_signatures public and add length check when verifying.

* Misc improvements and bug fixes

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 10:18:18 -05:00
Luke Parker
54f1929078 Route blame between Processor and Coordinator (#427)
* Have processor report errors during the DKG to the coordinator

* Add RemoveParticipant, InvalidDkgShare to coordinator

* Route DKG blame around coordinator

* Allow public construction of AdditionalBlameMachine

Necessary for upcoming work on handling DKG blame in the processor and
coordinator.

Additionally fixes a publicly reachable panic when commitments parsed with one
ThresholdParams are used in a machine using another set of ThresholdParams.

Renames InvalidProofOfKnowledge to InvalidCommitments.

* Remove unused error from dleq

* Implement support for VerifyBlame in the processor

* Have coordinator send the processor share message relevant to Blame

* Remove desync between processors reporting InvalidShare and ones reporting GeneratedKeyPair

* Route blame on sign between processor and coordinator

Doesn't yet act on it in coordinator.

* Move txn usage as needed for stable Rust to build

* Correct InvalidDkgShare serialization
2023-11-12 07:24:41 -05:00
akildemir
d015ee96a3 Dex improvements (#422)
* remove dex traits&balance types

* remove liq tokens pallet in favor of coins-pallet instance

* fix tests & benchmarks

* remove liquidity tokens trait

* fix CI

* fix pr comments

* Slight renamings

* Add burn_with_instruction as a negative to LiquidityTokens CallFilter

* Remove use of One, Zero, Saturating taits in dex pallet

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 06:37:31 -05:00
Luke Parker
a43815f101 Restore Foundry to a test dependency via direct usage of solc 2023-11-12 04:34:45 -05:00
Luke Parker
7f1732c8c0 cargo update to snow 0.9.4 2023-11-12 00:40:32 -05:00
Luke Parker
ed2445390f Replace post-detection of if a Plan is forwarded by noting if it's from the scanner 2023-11-09 14:54:38 -05:00
Luke Parker
52a0c56016 Rename Network::address to Network::external_address
Improves clarity since we now have 4 addresses.
2023-11-09 14:31:46 -05:00
Luke Parker
42e8f2c8d8 Add OutputType::Forwarded to ensure a user's transfer in isn't misclassified
If a user transferred in without an InInstruction, and the amount exactly
matched a forwarded output, the user's output would fulfill the
forwarding. Then the forwarded output would come along, have no InInstruction,
and be refunded (to the prior multisig) when the user should've been refunded.

Adding this new address type resolves such concerns.
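
For context, a sketch of the classification this adds to (simplified from the
processor's output types):

enum OutputType {
  // Received from an external address; the only type with an InInstruction
  External,
  // An output internal to a multi-transaction plan
  Branch,
  // Change back to the multisig itself
  Change,
  // Forwarded from the prior multisig; never treated as a user deposit, so a
  // user transfer of the same amount can't be conflated with it
  Forwarded,
}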
2023-11-09 14:24:13 -05:00
Luke Parker
b51204a4eb Replace usage of ethers-signers with 11 lines of ECDSA code 2023-11-09 13:22:43 -05:00
Luke Parker
ec51fa233a Document an accepted false positive 2023-11-09 12:41:15 -05:00
Luke Parker
ce4091695f Document a choice of variable name 2023-11-09 12:38:06 -05:00
Luke Parker
24919cfc54 Resolve race condition regarding when forwarded output is set
The higher-level scanner code in multisigs/mod.rs now creates a series of plans
with limited context. These include forwarding and refunding plans, moving all
handling of forwarding flags on the scanner's clock and therefore safe.

Also simplifies the refunding a decent bit.
2023-11-09 12:37:07 -05:00
Luke Parker
bf41009c5a Document critical race condition due to two distinct clocks operating over the same data 2023-11-09 08:41:22 -05:00
Luke Parker
e8e9e212df Move additional functions which retry until success into Network trait 2023-11-09 07:16:15 -05:00
Luke Parker
19187d2c30 Implement calculation of monotonic network times for Bitcoin and Monero 2023-11-09 07:02:52 -05:00
Luke Parker
43ae6794db Remove invalid TODOs from processor signers 2023-11-09 03:53:30 -05:00
Luke Parker
978134a9d1 Remove events from SubstrateSigner
Same vibes as prior commit.
2023-11-09 01:56:09 -05:00
Luke Parker
2eb155753a Remove the Signer events pseudo-channel for a returned message
Also replaces SignerEvent with usage of ProcessorMessage directly.
2023-11-09 01:26:30 -05:00
Luke Parker
7d72e224f0 Remove Output::amount and move Payment from Amount to Balance
This code is still largely designed around the idea a payment for a network is
fungible with any other, which isn't true. This starts moving past that.

Asserts are added to ensure the integrity of coin to the scheduler (which is
now per key per coin, not per key alone) and in Bitcoin/Monero prepare_send.
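
A sketch of the distinction (simplified from serai-primitives):

struct Amount(u64);
enum Coin { Bitcoin, Ether, Dai, Monero }

// An Amount alone doesn't say which coin it's denominated in; a Balance does
struct Balance {
  coin: Coin,
  amount: Amount,
}

// Payments now carry a Balance, letting the scheduler be per key *per coin*
struct Payment {
  address: Vec<u8>, // hypothetical address representation
  balance: Balance,
}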
2023-11-08 23:33:25 -05:00
Luke Parker
ffedba7a05 Update processor tests to refund logic 2023-11-08 21:59:11 -05:00
Luke Parker
06e627a562 Support refunds as possible for invalidly received outputs on Serai 2023-11-08 11:26:28 -05:00
Luke Parker
11f66c741d Remove ethers-middleware 2023-11-08 08:19:12 -05:00
Luke Parker
a0a2ef22e4 Remove ethers-solc
ethers-solc was used for a type (now manually specified) and to call out to
solc. Since Foundry was already a documented dependency, a call to it now
handles building.

Removing this single crate removes a total of 17 crates from our dependency
tree. While these may still be around due to Foundry, they at least may not
be.

Further work to remove the requirement on Foundry for solc alone would be
appreciated.
2023-11-08 06:25:35 -05:00
Luke Parker
5e290a29d9 Remove frame-benchmarking-cli
Not currently used, notably increases our dependency tree.

I wouldn't remove it if we planned to use it. From my understanding, all
benchmarking will be per pallet, voiding our need to have this for the node.
2023-11-08 05:59:56 -05:00
Luke Parker
a688350f44 Have processor's Network::new sleep until booted, not panic 2023-11-08 03:21:28 -05:00
Luke Parker
bc07e14b1e Remove async_recursion for a for loop 2023-11-07 23:07:26 -05:00
Luke Parker
e1c07d89e0 Retry RPC requests once on error
I don't like blindly retrying in the Monero library. The amount of errors,
which weren't present with reqwest (well, the error rate was the same, yet due
to a distinct bug this code fixed), demand we do *something* though.

The trace log shows hyper is erroring with 0 bytes of the response read. My
guess is it's somehow a closed connection? A connection pool would detect this
and have created a new connection (as this does, except once finding out
there's an issue).

While we should be able to detect this with `ready()`, we do call ready and it
claims no error. We also can successfully write which makes this... a mess.
Hopefully, it either actually works as intended, or at least requires two
consecutive errors, which should be much less frequent.
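
The behavior amounts to the following (a sketch; the actual code also creates
a fresh connection before the second attempt):

// Retry exactly once: a fresh attempt handles the observed transient hyper
// errors, while two consecutive errors still surface to the caller
async fn retry_once<T, E, F, Fut>(f: F) -> Result<T, E>
where
  F: Fn() -> Fut,
  Fut: std::future::Future<Output = Result<T, E>>,
{
  match f().await {
    Ok(res) => Ok(res),
    Err(_) => f().await,
  }
}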
2023-11-07 22:55:29 -05:00
Luke Parker
56fd11ab8d Use a single long-lived RPC connection when authenticated
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessages I couldn't track down. This
attempts to resolve those by a long-lived socket.

Halves the amount of requests per authenticated RPC call, and accordingly is
likely still better overall.

I don't believe this is resolved yet but this is still worth pushing.
2023-11-07 17:42:19 -05:00
Luke Parker
c03fb6c71b Add dedicated BatchSignId 2023-11-06 20:06:36 -05:00
Luke Parker
96f94966b7 Restore accidentally deleted function 2023-11-06 18:37:18 -05:00
Luke Parker
b65ba17007 Fix accumulated bugs 2023-11-06 18:12:53 -05:00
Luke Parker
c9003874ad Remove ethers mono-crate
Reduces size of ethereum-serai and gives us clarity on what's used.

Next should be removing the ethers-provided signing code.
2023-11-06 17:30:50 -05:00
Luke Parker
205bec36e5 try_from -> from 2023-11-06 17:00:09 -05:00
Luke Parker
df8b455d54 Don't generate RuntimeCall::System
Completely unused yet would be permanently part of our protocol if left alone.
2023-11-06 16:59:30 -05:00
Luke Parker
84a0bcad51 Move monero-serai to simple-request
Deduplicates code across the entire repo, letting us make improvements in a
single place.
2023-11-06 11:45:33 -05:00
Luke Parker
b680bb532b Don't default to basic-auth if it's enabled, yet require it to be specified 2023-11-06 10:42:01 -05:00
Luke Parker
b9983bf133 Replace reqwest with simple-request
reqwest was replaced with hyper and hyper-rustls within monero-serai due to
reqwest *solely* offering a connection pool API. In the process, it was
demonstrated how quickly we can achieve equivalent functionality to reqwest for
our use cases with a fraction of the code.

This adds our own reqwest alternative to the tree, applying it to both
bitcoin-serai and message-queue. By doing so, bitcoin-serai decreases its tree
by 21 packages and the processor by 18. Cargo.lock decreases by 8 dependencies,
solely adding simple-request. Notably removed is openssl-sys and openssl.

One noted decrease in functionality is the requirement that the system have
installed CA certificates. While we could fall back to the rustls certificates
if the system doesn't have any, that's blocked by
https://github.com/rustls/hyper-rustls/pulls/228.
2023-11-06 09:47:12 -05:00
Luke Parker
cddb44ae3f Bitcoin tweaks + cargo update
Removes bitcoin-serai's usage of sha2 for bitcoin-hashes. While sha2 is still
in play due to modular-frost (more specifically, due to ciphersuite), this
offers a bit more performance (assuming equivalency between sha2 and
bitcoin-hashes' impl) due to removing a static for a const.

Makes secp256k1 a dev dependency for bitcoin-serai. While secp256k1 is still
pulled in via bitcoin, it's hopefully slightly better to compile now and makes
usage of secp256k1 an implementation detail of bitcoin (letting it change it
freely).

Also offers slightly more efficient signing as we don't decode to a signature
just to re-encode for the transaction.

Removes a 20s sleep for a check every second, up to 20 times, for reduced test
times in the processor.
2023-11-06 07:38:36 -05:00
hinto.janai
bd3272a9f2 replace lazy_static! with once_cell::sync::Lazy 2023-11-06 05:31:46 -05:00
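
For illustration (hypothetical static):

use std::{sync::Mutex, collections::HashMap};
use once_cell::sync::Lazy;

// Prior: lazy_static! { static ref CACHE: Mutex<HashMap<String, u64>> = ... }
static CACHE: Lazy<Mutex<HashMap<String, u64>>> =
  Lazy::new(|| Mutex::new(HashMap::new()));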
Luke Parker
de41be6e26 Slash on SignCompleted for unrecognized plan 2023-11-05 13:42:01 -05:00
Luke Parker
b8ac8e697b Add missing crate to tests, remove no longer present RUSTSEC ignore 2023-11-05 12:11:08 -05:00
akildemir
899a9604e1 Add Dex pallet (#407)
* Move pallet-asset-conversion

* update licensing

* initial integration

* Integrate Currency & Assets types

* integrate liquidity tokens

* fmt

* integrate dex pallet tests

* fmt

* compilation error fixes

* integrate dex benchmarks

* fmt

* cargo clippy

* replace all occurrences of "asset" with "coin"

* add the actual add liq/swap logic to in-instructions

* add client side & tests

* fix deny

* Lint and changes

- Renames InInstruction::AddLiquidity to InInstruction::SwapAndAddLiquidity
- Makes create_pool an internal function
- Makes dex-pallet exclusively create pools against a native coin
- Removes various fees
- Adds new crates to GH workflow

* Fix rebase artifacts

* Correct other rebase artifact

* Correct CI specification for liquidity-tokens

* Correct primitives' test to the standardized pallet account scheme

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 12:02:34 -05:00
David Bell
facb5817c4 Database Macro (#408)
* db_macro

* wip: converted prcessor/key_gen to use create_db macro

* wip: converted prcessor/key_gen to use create_db macro

* wip: formatting

* fix: added no_run to doc

* fix: documentation example had extra parenths

* fix: ignore doc test entirely

* Corrections from rebasing

* Misc lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 09:47:24 -05:00
akildemir
97fedf65d0 add reasons to slash evidence (#414)
* add reasons to slash evidence

* fix CI failing

* Remove unnecessary clones

.encode() takes &self

* InvalidVr to InvalidValidRound

* Unrelated to this PR: Clarify reasoning/potentials behind dropping evidence

* Clarify prevotes in SlashEvidence test

* Replace use of read_to_end

* Restore decode_signed_message

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 00:04:41 -04:00
Luke Parker
257323c1e5 log::debug all Monero RPC errors 2023-11-05 00:02:58 -04:00
Luke Parker
deecd77aec Don't drop the request sender before we finish reading the response
It *looks like* hyper will drop the connection once its request sender is
dropped, regardless of if the last request hasn't had its response completed.
This attempts to resolve some spurious connection errors.
2023-11-04 22:55:47 -04:00
Luke Parker
360b264a0f Remove unused dependencies 2023-11-04 19:26:38 -04:00
Luke Parker
e05b77d830 Support multiple key shares per validator (#416)
* Update the coordinator to give key shares based on weight, not based on existence

Participants are now identified by their starting index. While this compiles,
the following is unimplemented:

1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.

* Add a fn to the DkgConfirmer to convert `i` values as needed

Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.

* Have the Tributary DB track participation by shares, not by count

* Prevent a node from obtaining 34% of the maximum amount of key shares

This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.

* Correct the mechanism to detect if sufficient accumulation has occurred

It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't prior met and is now met.

* Finish updating the coordinator to handle a multiple key share per validator environment

* Adjust strategy re: preventing nonce reuse in DKG Confirmer

* Add TODOs regarding dropped transactions, add possible TODO fix

* Update tests/coordinator

This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.

* Update processor key_gen to handle generating multiple key shares at once

* Update SubstrateSigner

* Update signer, clippy

* Update processor tests

* Update processor docker tests
2023-11-04 19:26:13 -04:00
Luke Parker
5970a455d0 Replace crc dependency with our own crc implementation
It's ~30 lines to remove 2 crates in our tree.
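
For reference, a table-less CRC-32 is roughly the following (standard
reflected polynomial; whether these exact parameters match the ones used here
is an assumption):

fn crc32(data: &[u8]) -> u32 {
  let mut crc = 0xFFFF_FFFF_u32;
  for byte in data {
    crc ^= u32::from(*byte);
    for _ in 0 .. 8 {
      let lsb = crc & 1;
      crc >>= 1;
      if lsb == 1 {
        crc ^= 0xEDB8_8320;
      }
    }
  }
  !crc
}

fn main() {
  // The standard check value for "123456789"
  assert_eq!(crc32(b"123456789"), 0xCBF4_3926);
}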
2023-11-03 06:44:23 -04:00
Luke Parker
4c9e3b085b Add a String to Monero ConnectionErrors debugging the issue
We're reaching this in CI so there must be some issue present.
2023-11-03 05:45:33 -04:00
github-actions[bot]
a2089c61fb November 2023 - Rust Nightly Update (#413)
* Update nightly

* Replace .get(0) with .first()

* allow new clippy lint

---------

Co-authored-by: GitHub Actions <>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-03 05:28:07 -04:00
Luke Parker
ae449535ff Correct no-std builds 2023-10-31 07:55:25 -04:00
Luke Parker
05dc474cb3 Correct std feature-flagging
If a crate has std set, it should enable std for all dependencies in order to
let them properly select which algorithms to use. Some crates fallback to
slower/worse algorithms on no-std.

Also more aggressively sets default-features = false leading to a *10%*
reduction in the amount of crates coordinator builds.
2023-10-31 07:44:02 -04:00
Luke Parker
34bcb9eb01 bitcoin 0.31 2023-10-31 03:47:45 -04:00
Luke Parker
2958f196fc ed448 with generic-array 1.0 2023-10-29 09:07:14 -04:00
Luke Parker
92cebcd911 Add ca-certificates to processor Docker image 2023-10-28 04:06:00 -04:00
Luke Parker
a3278dfb31 Add ca-certificates as a dependency to the CI 2023-10-28 00:45:00 -04:00
Luke Parker
f02db19670 cargo update
Removes a pair of duplicated dependencies.
2023-10-27 23:14:54 -04:00
Luke Parker
652c878f54 Note slight malleability in batch verification 2023-10-27 23:08:31 -04:00
Luke Parker
da01de08c9 Restore logger init in processor tests 2023-10-27 23:08:06 -04:00
Luke Parker
052ef39a25 Replace reqwest with hyper in monero-serai
Ensures a connection pool isn't used behind-the-scenes, as necessitated by
authenticated connections.
2023-10-27 23:05:47 -04:00
Luke Parker
87fdc8ce35 Use Build Dependencies over Test Dependencies where we can 2023-10-26 15:15:29 -04:00
Luke Parker
86ff0ae71b No longer run processor tests again when testing against 0.17.3.2
Even though the intent was to test against 0.17.3.2, and a Monero 0.17.3.2 node
was running, the processor now uses docker which will always use 0.18.
Accordingly, while the intent was valid, it was pointless.

This is unfortunate, as testing against 0.17 helped protect against edge cases.
The infra to preserve their tests isn't worth the benefit we'd gain from said
tests however.
2023-10-26 14:29:51 -04:00
Luke Parker
3069138475 Fix handling of Monero daemon connections when using an authenticated RPC
The lack of locking the connection when making an authenticated request, which
is actually two sequential requests, risked another caller making a request in
between, invalidating the state.

Now, only unauthenticated connections share a connection object.
2023-10-26 12:45:39 -04:00
Luke Parker
f22aedc007 Add tweaks meant for prior commit 2023-10-24 04:34:47 -04:00
Luke Parker
7be20c451d cargo update, resolving two yanked crates (ahash, lalrpop) 2023-10-24 04:03:51 -04:00
Luke Parker
0198d4cc46 Add a TributaryState struct with higher-level DB logic 2023-10-24 03:55:13 -04:00
Luke Parker
7c10873cd5 Tweak how the Monero node is run for the processor tests
Disables the unused zmq RPC.

Removes authentication which seems to be unstable as hell when under load
(see #351).

No longer use Network::Isolated as it's not needed here (the Monero nodes run
with `--offline`).
2023-10-23 07:56:55 -04:00
Luke Parker
08180cc563 Resolve #405 2023-10-23 07:56:43 -04:00
Luke Parker
c4bdbdde11 dockertest 0.4 (#406)
* Updates to modern dockertest

* More updates to latest dockertest

* Update Cargo.lock to dockertest with handle restored

* clippy coordinator tests

* clippy full-stack tests

* Remove kayabaNerve branch for official repo's latest commit hash

* Update serai-client, remove reliance on the existence of a handle fn

* Don't use the hex encoding of unique_id in dockertests

Gets our hostnames just below 64 bytes, resolving test failures on at least
Debian-based systems.

* Use Network::Isolated for all dockertest instances

* Correct error from prior commit's edits
2023-10-23 06:59:38 -04:00
Luke Parker
0d23964762 Resolve #335 2023-10-23 05:10:13 -04:00
Luke Parker
fbf51e53ec Resolve #327
Also runs `cargo update` and moves where we install the wasm toolchain in the
Dockerfile for better caching properties.
2023-10-23 00:45:00 -04:00
Luke Parker
fd1826cca9 Implement a fee on every input to prevent prior described economic attacks
Completes #297.
2023-10-22 21:31:13 -04:00
Luke Parker
f561fa9ba1 Fix a bug which would attempt to create a transaction with N::MAX_INPUTS + 1 2023-10-22 19:15:52 -04:00
Luke Parker
f4fc539e14 Remove constants inlined into bitcoin-serai for bitcoin::policy-provided constants 2023-10-22 18:08:36 -04:00
Luke Parker
0fff5391a8 Improve the reasoning for why the Bitcoin DUST constant is set as it is
Also halves the minimum fee policy, which still may be 2x-4x higher than
necessary due to API limitations within bitcoin-serai (which we can fix as it's
within our scope).
2023-10-22 18:06:44 -04:00
Luke Parker
a71a789912 Monero median_fee fn 2023-10-22 17:43:21 -04:00
Luke Parker
83c41eccd4 Bitcoin Dust constant justification, median_fee fn 2023-10-22 07:03:33 -04:00
Luke Parker
e5113c333e Move LastBatchBlock, LastBatch sets by their checks 2023-10-22 05:49:19 -04:00
Luke Parker
6068978676 Use a transaction layer when executing each InInstruction 2023-10-22 05:46:03 -04:00
Luke Parker
e7e30150f0 Don't let the Serai set publish batches 2023-10-22 05:38:44 -04:00
Luke Parker
55fe27f41a Don't allow the Bitcoin set to mint sriETH 2023-10-22 05:37:23 -04:00
Luke Parker
d66a7ee43e Remove the staking pallet for validator-sets alone
The staking pallet is an indirection which offered no practical benefit yet
increased the overhead of every call.
2023-10-22 04:00:42 -04:00
Luke Parker
a702d65c3d validator-sets pallet event expansion 2023-10-22 03:28:42 -04:00
Luke Parker
52eb68677a pallet-staking event 2023-10-22 03:19:01 -04:00
Luke Parker
d29d19bdfe Ensure pallet-session is initialized after staking 2023-10-22 02:52:34 -04:00
Luke Parker
c3fdb9d9df Use a comprehensive call filter in the runtime 2023-10-21 21:35:14 -04:00
Luke Parker
46e1f85085 Remove unnecessary ENV flags from Bitcoin Dockerfile 2023-10-21 20:12:41 -04:00
Luke Parker
1bff2a0447 Add signals pallet
Resolves #353

Implements code such that:

- 80% of validators (by stake) must be in favor of a signal for the network to
  be in favor
- 80% of networks (by stake) must be in favor of a signal for it to be locked
  in
- After a signal has been locked in for two weeks, the network halts

The intention is to:

1) Not allow validators to unilaterally declare new consensus rules.

No method of declaring new consensus rules is provided by this pallet. Solely a
way to deprecate the current rules, with a signaled-for successor. All nodes
must then individually decide whether or not to download and run a new node
which has new rules, and if so, which rules.

2) Not place blobs on chain.

Even if they'd be reproducible, it's just a lot of data to chuck on the
blockchain.
2023-10-21 20:06:55 -04:00
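A minimal sketch of the 80%-by-stake check, with a hypothetical tally type (the pallet's actual storage and arithmetic differ):

```rust
// Hypothetical stake tally; the real pallet reads these values from storage.
struct Tally {
  stake_in_favor: u64,
  total_stake: u64,
}

impl Tally {
  // A signal passes a layer (validators within a network, or networks within
  // Serai) once 80% of stake favors it. Widen to u128 to avoid overflow.
  fn passes(&self) -> bool {
    u128::from(self.stake_in_favor) * 5 >= u128::from(self.total_stake) * 4
  }
}

fn main() {
  assert!(Tally { stake_in_favor: 80, total_stake: 100 }.passes());
  assert!(!Tally { stake_in_favor: 79, total_stake: 100 }.passes());
}
```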
Luke Parker
b66203ae3f Update Bitcoin Docker image to 25.1
Also decreases the Bitcoin dummy fee.
2023-10-20 18:52:43 -04:00
Luke Parker
43a182fc4c Reduce dummy fee used by Monero 2023-10-20 17:57:02 -04:00
Luke Parker
8ead7a2581 Add missing std feature flags to a couple Substrate dependencies 2023-10-20 17:53:07 -04:00
Luke Parker
3797679755 Track total allocated stake in validator-sets pallet 2023-10-20 16:58:44 -04:00
Luke Parker
c056b751fe Remove Fee from the Network API
The only benefit to having it would be the ability to cache it across
prepare_send, which can be done internally to the Network.
2023-10-20 16:12:28 -04:00
Luke Parker
5977121c48 Don't mutate Plans when signing
This is achieved by not using the Plan struct anymore, yet rather its
decomposition. While less ergonomic, it meets our wants re: safety.
2023-10-20 10:56:18 -04:00
Luke Parker
7b6181ecdb Remove Plan ID non-determinism leading Monero to have distinct TX fees
Monero would select decoys with a new RNG seed, which may have used more bytes,
increasing the fee.

There are a few comments here.

1) Non-determinism wasn't removed via distinguishing the edits. It was done by
   removing part of the transcript. A TODO exists to improve this.
2) Distinct TX fees are a test failure, not an issue in prod *unless* the
   distinct fee is greater. So long as the distinct fee is lesser, it's fine.
3) Removing outputs is expected to only decrease fees.
2023-10-20 08:11:42 -04:00
EmmanuelChthonic
f976bc86ac fix key fn not being called in getter 2023-10-20 07:34:19 -04:00
Luke Parker
441bf62e11 Simplify amortize_fee, correct scheduler's amortizing of branch fees 2023-10-20 05:40:16 -04:00
Luke Parker
4852dcaab7 Move common code from prepare_send into Network trait 2023-10-20 04:42:08 -04:00
Luke Parker
d6bc1c1ea3 Explicitly only adjust operating costs when plan.change.is_some()
The existing code should've mostly handled this fine. Only a single edge case
(TX fee reduction on no-change Plans) would cause an improper increase in
operating costs.
2023-10-19 23:16:04 -04:00
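A sketch of the guarded adjustment, with hypothetical, heavily simplified types (the processor's real Plan and fee accounting are more involved):

```rust
// Hypothetical Plan shape; only the change field matters here.
struct Plan {
  change: Option<[u8; 32]>,
}

fn adjust_operating_costs(
  plan: &Plan,
  operating_costs: &mut u64,
  expected_fee: u64,
  tx_fee: u64,
) {
  // A no-change Plan burns any fee reduction rather than returning it to us,
  // so crediting the reduction here would improperly increase operating costs.
  if plan.change.is_some() && (tx_fee < expected_fee) {
    *operating_costs += expected_fee - tx_fee;
  }
}
```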
Luke Parker
7b2dec63ce Don't scan outputs which are dust, track dust change as operating costs
Fixes #299.
2023-10-19 08:02:10 -04:00
Luke Parker
d833254b84 Other clippy fixes 2023-10-19 08:01:15 -04:00
Luke Parker
a1b2bdf0a2 clippy fixes 2023-10-19 06:30:58 -04:00
Luke Parker
43841f95fc cargo fmt 2023-10-19 06:24:52 -04:00
akildemir
fdfce9e207 Coins pallet (#399)
* initial implementation

* add function to get a balance of an account

* add support for multiple coins

* rename pallet to "coins-pallet"

* replace balances, assets and tokens pallet with coins pallet in runtime

* add total supply info

* update client side for new Coins pallet

* handle fees

* bug fixes

* Update FeeAccount test

* Fmt

* fix pr comments

* remove extraneous Imbalance type

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-19 06:22:21 -04:00
Luke Parker
3255c0ace5 Track and amortize operating costs to ensure solvency
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct from what was originally designed, yet much simpler.

Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.

Also updates documentation.
2023-10-19 03:13:44 -04:00
Luke Parker
057c3b7cf1 libp2p 0.52.4
Adds several packages into tree, deprecates an API we use. This commit does
update accordingly.

While this may not be preferable, it is inevitable.
2023-10-19 00:27:21 -04:00
Luke Parker
a29c410caf cargo update parking_lot_core, adding redox_syscall 2023-10-19 00:04:13 -04:00
Luke Parker
a6f40978cb cargo update regex, adding regex-syntax 2023-10-19 00:02:55 -04:00
Luke Parker
bdf6afc144 cargo update time, adding dep powerfmt 2023-10-19 00:02:27 -04:00
Luke Parker
a362b0a451 Non-intrusive cargo update
Updates yanked wasm-opt, updates tracing due to use-after-free, updates
everything else which doesn't grow the tree.
2023-10-18 21:45:49 -04:00
Luke Parker
041ed46171 Correct panic possible when jumping to a round with Precommit(None) 2023-10-18 16:46:14 -04:00
Luke Parker
9accddb2d7 cargo update libp2p-identity
Finally removes curve25519-dalek 3.2.
2023-10-16 15:07:39 -04:00
Luke Parker
e06e4edc8e Modernize protocol documentation 2023-10-16 05:11:31 -04:00
Luke Parker
e4d59eeeca Remove hashbrown from validator-sets pallet 2023-10-16 01:47:15 -04:00
Luke Parker
1f5b1bb514 Update to lazy_static 1.5.0
Requires a git dependency due to
https://github.com/rust-lang-nursery/lazy-static.rs/issues/201.

Justified due to bumping the internally used spin to 0.9 (containing a bug fix for
0.9 in general (https://github.com/rust-lang-nursery/lazy-static.rs/pull/199)),
surpassing 0.7 (offering performance
(https://github.com/rust-lang-nursery/lazy-static.rs/pull/182)).
2023-10-16 01:39:15 -04:00
Luke Parker
1a3b6005c6 Various cargo updates 2023-10-15 23:57:45 -04:00
Luke Parker
7dc1a24bce Move DkgConfirmer to its own file, document 2023-10-15 01:39:56 -04:00
Luke Parker
3483f7fa73 Call fatal_slash where easy and appropriate 2023-10-15 00:32:51 -04:00
Luke Parker
a300a1029a Load/save first_preprocess with RecognizedIdType
Enables their IDs to have conflicts across each other.
2023-10-14 21:58:10 -04:00
Luke Parker
7409d0b3cf Rename add_active_tributary for clarity 2023-10-14 21:53:38 -04:00
Luke Parker
19e90b28b0 Have Tributary's add_transaction return a proper error
Modifies main.rs to properly handle the returned error.
2023-10-14 21:50:11 -04:00
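A sketch of the error's shape, with illustrative variants (not the Tributary's actual error type):

```rust
#[derive(Debug)]
enum AddTransactionError {
  // Failed signature/content validation.
  Invalid,
  // Valid, yet already present in the mempool or on-chain.
  AlreadyPresent,
}

fn add_transaction(tx: &[u8]) -> Result<(), AddTransactionError> {
  if tx.is_empty() {
    return Err(AddTransactionError::Invalid);
  }
  // ... validation and mempool insertion would go here ...
  Ok(())
}

fn main() {
  // main.rs can now distinguish benign duplicates from actual faults, instead
  // of collapsing everything into a bool.
  match add_transaction(&[0]) {
    Ok(()) => {}
    Err(AddTransactionError::AlreadyPresent) => {} // benign
    Err(e) => panic!("couldn't add transaction: {e:?}"),
  }
}
```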
Luke Parker
584943d1e9 Modify SubstrateBlockAck as needed
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.

Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
2023-10-14 20:37:54 -04:00
Luke Parker
62e1d63f47 Abort the P2P meta task when dropped
This should cause full cleanup of all Tributary async tasks, since the machine
already cleans itself up on drop.
2023-10-14 20:08:51 -04:00
Luke Parker
e4adaa8947 Further tweaks re: retiry 2023-10-14 19:55:14 -04:00
Luke Parker
3b3fdd104b Most of coordinator Tributary retiry
Adds Event::SetRetired to validator-sets.

Emit TributaryRetired.

Replaces is_active_set, which made multiple network requests, with
is_retired_tributary, a DB read.

Performs most of the removals necessary upon TributaryRetired.

Still needs to clean up the actual Tributary/Tendermint tasks.
2023-10-14 16:47:25 -04:00
Luke Parker
5897efd7c7 Clean out create_new_tributary
It made sense when the task was in main.rs. Now that it isn't, it's a pointless
indirection.
2023-10-14 16:09:24 -04:00
Luke Parker
863a7842ca Have every node respond to Heartbeat so they don't download the messages over the net 2023-10-14 15:27:40 -04:00
Luke Parker
f414735be5 Redo new_tributary from being over ActiveTributary to TributaryEvent
TributaryEvent also allows broadcasting a retiry event.
2023-10-14 15:27:39 -04:00
Luke Parker
5c5c097da9 Tweaks for processor to work with the new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
7d4e8b59db Update dockertests to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
e3e9939eaf Tidy Serai use in coordinator to new API 2023-10-14 15:26:36 -04:00
Luke Parker
530fba51dd Update coordinator to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
cb61c9052a Reorganize serai-client
Instead of functions taking a block hash, has a scope to a block hash before
functions can be called.

Separates functions by pallets.
2023-10-14 15:26:36 -04:00
Luke Parker
96cc5d0157 Remove a TODO re: an unhandled race condition 2023-10-14 00:41:07 -04:00
Luke Parker
7275a95907 Break handle_processor_messages out to handle_processor_message, move a helper fn to substrate 2023-10-13 23:36:07 -04:00
Luke Parker
80e5ca9328 Move heartbeat_tributaries and handle_p2p to p2p.rs 2023-10-13 22:40:11 -04:00
Luke Parker
67951c4971 Localize scan_substrate as substrate::scan_task 2023-10-13 22:31:54 -04:00
Luke Parker
4143fe9f47 Move scan_tributaries, shrinking coordinator's main.rs 2023-10-13 22:30:13 -04:00
Luke Parker
a73b19e2b8 Tweak coordinator test timing 2023-10-13 21:46:26 -04:00
Luke Parker
97c328e5fb Check tributaries are active before declaring them relevant 2023-10-13 21:46:17 -04:00
Luke Parker
96c397caa0 Add content-based deduplication to the tests' shimmed P2P
The tests have recently had their timing stilted, causing failures. The tests
are... fine. They're fragile, as is obvious, yet they're logical. The simplest
fix is to unstilt their timing rather than to make them non-fragile.

The recent change, which presumably caused said stilting, was the
rebroadcasting added. This de-duplication prevents most of the impact of
rebroadcasting. While there's still the async task, and the lock acquisition on
attempt to rebroadcast, this hopefully is enough.
2023-10-13 19:47:58 -04:00
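A minimal sketch of the dedup, with a hypothetical shim type (the tests' actual P2P shim differs):

```rust
use std::collections::HashSet;
use std::sync::Mutex;

// The test P2P shim tracks every message body it has seen, dropping repeats
// so rebroadcasting doesn't multiply work.
struct ShimmedP2p {
  seen: Mutex<HashSet<Vec<u8>>>,
}

impl ShimmedP2p {
  // Returns true only the first time a given message body is observed.
  fn first_sighting(&self, msg: &[u8]) -> bool {
    // HashSet::insert returns false if the value was already present.
    self.seen.lock().unwrap().insert(msg.to_vec())
  }
}

fn main() {
  let p2p = ShimmedP2p { seen: Mutex::new(HashSet::new()) };
  assert!(p2p.first_sighting(b"hello"));
  assert!(!p2p.first_sighting(b"hello"));
}
```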
akildemir
d5c6ed1a03 Improve provided handling (#381)
* fix typos

* remove tributary sleeping

* handle not locally provided txs

* use topic number instead of waiting list

* Clean-up, fixes

1) Uses a single TXN in provided
2) Doesn't continue on non-local provided inside verify_block, skipping further
   execution of checks
3) Upon local provision of already on-chain TX, compares

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-13 19:45:47 -04:00
Luke Parker
f6e8bc3352 Alternate handover batch TOCTOU fix (#397)
* Revert "Correct the prior documented TOCTOU"

This reverts commit d50fe87801.

* Correct the prior documented TOCTOU

d50fe87801 edited the challenge for the Batch to
fix it. This won't produce Batch n+1 until Batch n is successfully published
and verified. It's an alternative strategy able to be reviewed, with a much
smaller impact to scope.
2023-10-13 12:14:59 -04:00
Luke Parker
7d0d1dc382 Replace solc-select with svm-rs in CI and docs
svm-rs is already in tree as a library, so we may as well include it as a bin
instead of also pulling in solc-select.
2023-10-13 08:56:25 -04:00
Luke Parker
d50fe87801 Correct the prior documented TOCTOU
Now, if a malicious validator set publishes a malicious `Batch` at the last
moment, it'll cause all future `Batch`s signed by the next validator set to
require a bool being set (yet they never will set it).

This will prevent the handover.

The only overhead is having two distinct `batch_message` calls on-chain.
2023-10-13 04:41:01 -04:00
Luke Parker
e6aa9df428 Document TOCTOU allowing malicious validator set to trigger a handover to an honest set 2023-10-13 04:14:36 -04:00
Luke Parker
02edfd2935 Verify all Batchs published by the prior set
The new set publishing a `Batch` completes the handover protocol. The new set
should only publish a `Batch` once it believes the old set has completed all of
its on-external-chain activity, marking it honest and finite.

With the handover comes the acceptance of liability, hence the requirement for
all of the on-Serai-chain activity also needing verification. While most
activity would be verified in-real-time (upon ::Batch messages), the new set
will now explicitly verify the complete set of `Batch`s before beginning its
preprocess for its own `Batch` (the one accepting the handover).
2023-10-13 04:12:21 -04:00
Luke Parker
9aeece5bf6 Give one weight per key share to validators in Tributary 2023-10-13 02:29:22 -04:00
Luke Parker
bb84f7cf1d Correct ValidatorSets genesis 2023-10-13 01:42:26 -04:00
Luke Parker
bb25baf3bc Add logic to amortize excess key shares, correcting is_bft 2023-10-13 01:04:41 -04:00
Luke Parker
013a0cddfc MAX_VALIDATORS_PER_SET -> MAX_KEY_SHARES_PER_SET 2023-10-13 00:50:07 -04:00
Luke Parker
ed7300b406 Explicitly provide a pre_dispatch which calls validate_unsigned
pre_dispatch is guaranteed by documentation to be called and persisted.
validate_unsigned is not, though the provided pre_dispatch does by default call
validate_unsigned. By explicitly providing our own pre_dispatch, we accomplish
the bounds we require and expect, only being invalidated on Substrate
redefining their API.

We should still test this, yet since we call retire_session in
validate_unsigned, any test of rotation will test that it's properly called.
2023-10-13 00:31:23 -04:00
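A sketch of the relevant surface, following Substrate's `ValidateUnsigned` signatures (simplified, not the pallet's actual implementation; assumes an `sp-runtime` dependency):

```rust
use sp_runtime::transaction_validity::{
  TransactionSource, TransactionValidity, TransactionValidityError, ValidTransaction,
};

// Stand-in for the pallet's call enum.
enum Call {
  SetKeys,
}

struct Pallet;
impl Pallet {
  fn validate_unsigned(_source: TransactionSource, _call: &Call) -> TransactionValidity {
    // ... the pallet's actual checks (including retire_session) go here ...
    Ok(ValidTransaction::default())
  }

  // Explicitly provided, rather than relying on the trait's default method.
  // pre_dispatch is documented to be called and persisted; validate_unsigned
  // on its own is not.
  fn pre_dispatch(call: &Call) -> Result<(), TransactionValidityError> {
    Self::validate_unsigned(TransactionSource::InBlock, call).map(|_| ())
  }
}
```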
Luke Parker
88b5efda99 cargo fmt 2023-10-13 00:12:10 -04:00
Luke Parker
0712e6f107 Localize stake into networks
Sets a stake requirement of 100k for Serai and Monero, as Serai doesn't have
stake requirements and Monero isn't expected to see as much
volume/institutional support as Bitcoin/Ethereum.
2023-10-13 00:04:30 -04:00
Luke Parker
6a4c57e86f Define an array of all NetworkIds in serai_primitives 2023-10-12 23:59:21 -04:00
Luke Parker
b7746aa71d Don't allow (de)allocations which remove fault tolerance 2023-10-12 23:47:00 -04:00
Luke Parker
8dd41ee798 Allow immediate deallocation if the decrease doesn't cross a key-share threshold 2023-10-12 23:06:20 -04:00
Luke Parker
9a1d10f4ea Error if deallocation would remove fault tolerance 2023-10-12 23:05:29 -04:00
Luke Parker
6587590986 Grab up to 150 key shares of validators, not 150 validators 2023-10-12 22:44:10 -04:00
Luke Parker
b0fcdd3367 Regularly rebroadcast consensus messages to ensure presence even if dropped from the P2P layer
Attempts to fix #342, #382.
2023-10-12 22:14:42 -04:00
Luke Parker
15edea1389 Use an inner task to spawn Tributarys to minimize latency 2023-10-12 21:55:25 -04:00
Luke Parker
1d9e2efc33 Don't unwrap result of call which makes network requests 2023-10-12 18:49:49 -04:00
Luke Parker
f25f5cd368 Add a sleep statement to Batch publication errors to prevent log flooding/node hammering 2023-10-12 18:39:46 -04:00
Luke Parker
f847ac7077 Update to if-watch 3.1.0
Has a delta of -4 packages in tree.

Offers a potential to no longer have two sets of windows in-tree once packages
using 0.48 update to 0.51.
2023-10-12 18:37:32 -04:00
Luke Parker
29fcf6be4d Support immediate deallocations for non-active validators 2023-10-12 00:51:18 -04:00
Luke Parker
108e2b57d9 Add claim_deallocation to the staking pallet 2023-10-12 00:26:35 -04:00
Luke Parker
3da5577950 Only allow deallocations after the next set after the validator's inclusion starts, plus a one-session cooldown period
Part of #394.
2023-10-11 23:42:15 -04:00
Luke Parker
f692047b8b Rename validators to select_validators 2023-10-11 17:23:09 -04:00
Luke Parker
2401266374 Replace mutate with get + set
I'm legitimately unsure why mutate doesn't work. Reading the impls, it should...
2023-10-11 02:11:44 -04:00
Luke Parker
ed90d1752a Minimal CI Attempts (#393)
* Add quotes around globs

* Don't remove current kernel

* Remove nvm due to nvme, go -> golang
2023-10-11 02:11:34 -04:00
Luke Parker
3261fde853 Remove homebrew from packages to remove 2023-10-11 01:19:04 -04:00
Luke Parker
7492adc473 Attempt to further minimize size of GH CI 2023-10-11 01:17:50 -04:00
Luke Parker
04f9a1fa31 Correct handling of InSet's keys 2023-10-11 01:05:48 -04:00
Luke Parker
13cbc99149 Properly define the on-chain handover protocol
The new key publishing `Batch`s is more than sufficient.

Also uses the correct key to verify the published `Batch`s authenticity.
2023-10-10 23:55:59 -04:00
Luke Parker
1a0b4198ba Correct the check for if we still need to set keys
The prior check had an edge case where once keys were pruned, it'd believe the
keys needed to be set ad infinitum.
2023-10-10 22:53:15 -04:00
Luke Parker
22371a6585 Also reinstall python3-pip 2023-10-10 22:34:19 -04:00
Luke Parker
b2d6a85ac0 Still remove sqlite3 to cause the majority of uninstalls 2023-10-10 22:31:07 -04:00
Luke Parker
44ca5e6520 Manually reinstall python3 after removing most packages 2023-10-10 22:29:25 -04:00
Luke Parker
83b7146e1a Specify shell when removing unused packages 2023-10-10 22:21:03 -04:00
Luke Parker
f193b896c1 Update to monero 0.18.3.1 2023-10-10 21:34:22 -04:00
Luke Parker
985795e99d Remove unused packages as part of build dependencies
The reproducible runtime test failed due to running out of space. If we have
multiple tests failing due to running out of space, and all of our tests have
these unused packages, it makes sense to just always uninstall them.

Also extends the time limit of reproducible-runtime, as 2h has been hit a few
times before.
2023-10-10 21:09:45 -04:00
Luke Parker
9bf8c92325 Correct the coordinator tests
They assumed processor 0 had keys `i = 1`. Under the new validator-set code,
the first key is the one with the highest amount. In case of a tie, the key (or,
as of the last commit, a Blake hash) decides order.

This commit kludges in a mapping from processor index to assigned key index, no
longer assuming its value.
2023-10-10 17:12:47 -04:00
Luke Parker
ab5af57dae Fix a pair of bugs in SortedAllocations and add further documentation
We took with `amount`, not `prior`, allowing multiple presences in
SortedAllocations.

While spam is limited by the amount of nibbles in the amount, the key provides
a much larger space to abuse. Inserting a cryptographic hash prevents its use
for abuse.
2023-10-10 16:50:48 -04:00
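A sketch of the fix's shape, assuming a DB iterated lexicographically by key (hypothetical helpers; the pallet's actual layout differs):

```rust
// Hypothetical sort-key construction for a DB whose iteration order is
// lexicographic over keys.
fn sorted_allocation_key(prior_amount: u64, validator_key: &[u8; 32]) -> Vec<u8> {
  let mut key = Vec::with_capacity(8 + 32);
  // Keying removal by the *prior* amount ensures an update replaces the old
  // entry instead of leaving multiple presences in SortedAllocations.
  key.extend(prior_amount.to_be_bytes());
  // A cryptographic hash of the validator key, not the raw key, denies
  // attackers fine-grained control over this portion of the key-space.
  key.extend(hash(validator_key));
  key
}

// Stand-in only; the real code uses an actual cryptographic hash (a Blake
// hash, per the commit above).
fn hash(data: &[u8; 32]) -> [u8; 32] {
  let mut out = [0; 32];
  for (i, b) in data.iter().enumerate() {
    out[i] = b.wrapping_mul(31).wrapping_add(i as u8);
  }
  out
}
```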
akildemir
98190b7b83 Staking pallet (#373)
* initial staking pallet

* add staking pallet to runtime

* support session rotation for serai

* optimizations & cleaning

* fix deny

* add serai network to initial networks

* a few tweaks & comments

* fix some pr comments

* Rewrite validator-sets with logarithmic algorithms

Uses the fact the underlying DB is sorted to achieve sorting of potential
validators by stake.

Removes release of deallocated stake for now.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-10 06:53:24 -04:00
akildemir
2f45bba2d4 fix tendermint invalid commit (#392)
* fix conflicting commit msg signing vs verifying

* fmt
2023-10-10 06:32:04 -04:00
Luke Parker
30d0bad175 Add extra assert to coordinator 2023-10-09 23:38:39 -04:00
Steven Chang
b8abc1e3cc README.md: Add links to Reddit, Telegram and Website 2023-10-07 13:32:52 -04:00
Luke Parker
b2ed2e961c Correct rust version used in CI/orchestration 2023-10-05 18:24:21 -04:00
Luke Parker
9cdca1d3d6 Use the newly stabilized div_ceil
Sets a msrv of 1.73.0.
2023-10-05 14:28:03 -04:00
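For reference, `div_ceil` (stabilized for the unsigned integer types in Rust 1.73.0) replaces the manual ceiling-division idiom:

```rust
fn main() {
  // Ceiling division without hand-writing (a + b - 1) / b.
  assert_eq!(7u64.div_ceil(2), 4);
  assert_eq!(8u64.div_ceil(2), 4);
  // The manual idiom can overflow near the type's maximum; div_ceil can't.
  assert_eq!(u64::MAX.div_ceil(2), (u64::MAX / 2) + 1);
}
```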
Luke Parker
4ee65ed243 Update nightly
Supersedes #387.
2023-10-03 01:34:15 -04:00
Luke Parker
aa59f53ead Correct the coordinator tests
They weren't updated with the past couple of commits.
2023-09-29 04:35:02 -04:00
Luke Parker
bd5491dfd5 Simplify Coordinator/Processors::send by accepting impl Into *Message 2023-09-29 04:19:59 -04:00
Luke Parker
0eff3d9453 Add Batch messages from processor, verify Batchs published on-chain
Renames Update to SignedBatch.

Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or introspect the TX.
2023-09-29 03:51:01 -04:00
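A sketch of the equality-by-digest idea, using std's hasher purely as a stand-in for a real cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest; comparing digests avoids retaining full Batches in node
// state (or introspecting the published TX).
fn instructions_digest(in_instructions: &[Vec<u8>]) -> u64 {
  let mut hasher = DefaultHasher::new();
  in_instructions.hash(&mut hasher);
  hasher.finish()
}

fn main() {
  let expected = instructions_digest(&[vec![1, 2, 3]]);
  let published = instructions_digest(&[vec![1, 2, 3]]);
  // A mismatch would mean the Batch published on-chain isn't the one signed.
  assert_eq!(expected, published);
}
```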
Luke Parker
0be567ff69 Remove a misplaced copy of a README which has been around for who knows how long 2023-09-29 00:33:14 -04:00
Luke Parker
83b3a5c31c Document how receiving a Processor message does indeed make its Tributary relevant 2023-09-28 20:09:17 -04:00
Luke Parker
4d1212ec65 Update the processor's tests re: Batch SignId key
key used to be empty. As part of implementing support for multisig rotation
into the coordinator, it was easiest to set the key to the substrate key. While
the coordinator and full stack tests were updated, the processor tests weren't.
This does that.
2023-09-27 23:32:29 -04:00
Luke Parker
7e27315207 Attempt to reduce full-stack CI disk usage 2023-09-27 21:00:32 -04:00
Luke Parker
aa1faefe33 Correct Message Queue log statements now that queues are per from-to pairs 2023-09-27 21:00:07 -04:00
Luke Parker
7d738a3677 Start moving Coordinator to a multi-Tributary model
Prior, we only supported a single Tributary per network, and spawned a task to
handled Processor messages per Tributary. Now, we handle Processor messages per
network, yet we still only supported a single Tributary in that handling
function.

Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.

We do need work to check if we should have a Tributary, or if we're not
participating. We also need to check if a Tributary has been retired, meaning
we shouldn't handle any transactions related to them, and to clean up retired
Tributaries.
2023-09-27 20:49:02 -04:00
Luke Parker
4a32f22418 Use a proper transcript for Tributary scanner topics 2023-09-27 13:33:25 -04:00
Luke Parker
01a4b9e694 Remove unused_variables 2023-09-27 13:00:04 -04:00
Luke Parker
3b01d3039b Remove unused clippy lints from coordinator 2023-09-27 12:42:25 -04:00
Luke Parker
40b7bc59d0 Use dedicated Queues for each from-to pair
Prevents one Processor's message from halting the entire pipeline.
2023-09-27 12:20:57 -04:00
Luke Parker
269db1c4be Remove the "expected" next ID
It's an unnecessary extra layer better handled locally.
2023-09-27 11:13:55 -04:00
Luke Parker
90318d7214 Remove unnecessary TODO 2023-09-27 00:50:57 -04:00
Luke Parker
64d370ac11 Make publish_signed_transaction safe for out of order publications
This is a possibility under the new deterministic nonce scheme.

While there is a concern of us never creating a transaction with a nonce,
blocking everything, we should always create transactions. We'll always publish
preprocesses, and while we'll only publish shares if everyone else does, we
only allocate for shares once everyone else does.
2023-09-27 00:44:31 -04:00
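A sketch of the out-of-order handling, with hypothetical types (buffer by nonce, publish only once sequential):

```rust
use std::collections::HashMap;

struct Publisher {
  next_nonce: u32,
  // nonce -> serialized transaction, awaiting its turn.
  waiting: HashMap<u32, Vec<u8>>,
}

impl Publisher {
  fn publish_signed_transaction(&mut self, nonce: u32, tx: Vec<u8>) {
    self.waiting.insert(nonce, tx);
    // Drain every transaction which is now sequential.
    while let Some(tx) = self.waiting.remove(&self.next_nonce) {
      broadcast(&tx); // hypothetical broadcast helper
      self.next_nonce += 1;
    }
  }
}

fn broadcast(_tx: &[u8]) {}

fn main() {
  let mut p = Publisher { next_nonce: 0, waiting: HashMap::new() };
  // Nonce 1 arrives first; it's buffered until nonce 0 is published.
  p.publish_signed_transaction(1, vec![0xbb]);
  assert_eq!(p.next_nonce, 0);
  p.publish_signed_transaction(0, vec![0xaa]);
  assert_eq!(p.next_nonce, 2);
}
```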
Luke Parker
db8dc1e864 Spawn a task for Heartbeat responses, preventing it from holding up P2P handling 2023-09-27 00:10:37 -04:00
Luke Parker
086458d041 Txn for handling a processor message
handle_processor_messages function added to remove a very large block of nested
code.

MainDb cleaned to never be instantiated.
2023-09-27 00:00:31 -04:00
Luke Parker
2e0f8138e2 Update the coordinator to not handle a processor message multiple times 2023-09-26 23:28:05 -04:00
Luke Parker
32a9a33226 Adjust sync test timeout to resolve infrequent failure
This isn't an unacceptable timeout. It matches a prior timeout. I'm unsure why
it's now needed to be extended though. My best guess is the test runtime is
single-threaded and there's now new overhead in the task management (or perhaps
higher latency now that messages per-tributary are serialized).
2023-09-26 17:28:41 -04:00
Luke Parker
2508633de9 Add a next block notification system to Tributary
Also adds a loop missing from the prior commit.
2023-09-25 23:20:51 -04:00
Luke Parker
7312428a44 P2P task per Tributary, not per message 2023-09-25 22:58:40 -04:00
Luke Parker
e1801b57c9 Dedicated tasks per-Processor in coordinator
This isn't meaningful yet, as we still have serialized reading messages from
Processors, yet is a step closer.
2023-09-25 22:38:29 -04:00
Luke Parker
60491a091f Improve handling of tasks in coordinator, one per Tributary scanner 2023-09-25 20:33:14 -04:00
Luke Parker
9f3840d1cf Localize Tributary HashMaps, offering flexibility and removing contention 2023-09-25 19:28:53 -04:00
Luke Parker
7120bddc6f Move where we trigger DKGs for safety reasons 2023-09-25 18:27:16 -04:00
Luke Parker
77f7794452 Remove lazy_static for proper use of channels 2023-09-25 18:23:52 -04:00
Luke Parker
62a1a45135 Move where we create the readers to prevent only creating readers for present-at-boot tributaries
Also renames publish_transaction to publish_signed_transaction.
2023-09-25 18:07:29 -04:00
Luke Parker
0440e60645 Move heartbeat_tributaries from Tributary to TributaryReader
Reduces contention.
2023-09-25 17:15:38 -04:00
Luke Parker
4babf898d7 Implement deterministic nonces for Tributary transactions 2023-09-25 15:42:39 -04:00
Luke Parker
ca69f97fef Add support for multiple multisigs to the processor (#377)
* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolate object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).

The docs will be updated next, to reflect this.

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for block `n` and `n+1`, may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.

While it would still be beneficial to have, if multiple `Batch`s could occur
within a block (with the complexity here not being worth adding that ban as a
policy), multiple `Batch`s were blocked for DoS reasons.

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address,
causing the payments to be dropped, and the TX to become an effective
aggregation TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forward or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not do +2 blocks yet plus `time`, such as +10 minutes, making
this agnostic of the underlying network's block scheduling. This is a
complexity not worth it.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to prior commit.

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.

* Correctly detect and set retirement block

Modifies the retirement block from first block meeting requirements to block
CONFIRMATIONS after.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way triggering it.
2023-09-25 09:48:15 -04:00
Luke Parker
fe19e8246e cargo update
Updates past the yanked rustls-webpki, reduces tree size by one.
2023-09-24 08:31:13 -04:00
Luke Parker
c62d9b448f Use a Vec for the Monero generators, preventing its massive stack usage
The amount of stack usage did cause issues on m1 computers.
2023-09-20 04:31:16 -04:00
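A sketch of the change's shape, with stand-in types and an illustrative count:

```rust
// Illustrative count; the point is scale, not the exact figure.
const GENERATORS: usize = 1024;

// A [Point; GENERATORS] table would be tens of KiB inline, living on the
// stack whenever constructed or moved. Vec keeps only (ptr, len, capacity)
// on the stack, with the table itself heap-allocated.
struct Generators {
  g_bold: Vec<[u8; 32]>,
  h_bold: Vec<[u8; 32]>,
}

fn generators() -> Generators {
  // Built once and shared thereafter, so the heap indirection costs little.
  Generators {
    g_bold: vec![[0; 32]; GENERATORS],
    h_bold: vec![[0; 32]; GENERATORS],
  }
}

fn main() {
  assert_eq!(generators().g_bold.len(), GENERATORS);
}
```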
Luke Parker
98ab6acbd5 cargo update, removing 5 items from tree 2023-09-20 04:30:46 -04:00
Luke Parker
092f17932a Document requirement on rootless Docker 2023-09-19 12:59:04 -04:00
Luke Parker
e455332e01 Resolve #369 2023-09-19 12:55:30 -04:00
Luke Parker
a9468bf355 Pin CI from stable to 1.72.1
Enables better detection of regressions in Rust, a few of which 1.72.1 fixes.
2023-09-19 11:43:21 -04:00
Luke Parker
3d464c4736 apt update before install in workflow 2023-09-15 14:54:55 -04:00
Luke Parker
142552f024 Correct workflows with missing toolchain annotations 2023-09-15 14:36:47 -04:00
Luke Parker
e3a7ee4927 Pin to exact GH actions, preventing ACE in CI
Also updates actions.
2023-09-15 14:30:18 -04:00
Luke Parker
9eaaa7d2e8 Document H1's mismatch between the FROST preprint and IETF draft 2023-09-15 14:16:15 -04:00
Luke Parker
8adef705c3 Update wasmtime due to CVE-2023-41880 2023-09-15 14:06:39 -04:00
Luke Parker
3fd6d45b3e Use base58-monero 2, removing a git dependency 2023-09-15 13:59:29 -04:00
Luke Parker
d263413e36 Fixes for schnorrkel/dalek updates 2023-09-12 10:02:20 -04:00
Luke Parker
6f8a5d0ede Sane char_le_bits 2023-09-12 09:37:48 -04:00
Luke Parker
24bdd7ed9b Bump dalek-ff-group version
Prior commit fixed random, which could generate points outside of the prime
subgroup.
2023-09-12 09:00:42 -04:00
Luke Parker
aa724c06bc Start relying on curve25519-dalek's group feature
Removes git dependency for schnorrkel as well, now that schnorrkel has updated.
2023-09-12 08:56:30 -04:00
Luke Parker
1e6655408e cargo update
Bites the bullet on ethers 2.0.9 (now 2.0.10).
2023-09-12 07:47:03 -04:00
Luke Parker
9ab43407e1 Ignore NewSet events for Serai in the coordinator
The coordinator has nothing to do in this case.
2023-09-08 09:55:19 -04:00
Luke Parker
7ac0de3a8d Correct binding properties of Bitcoin eventuality
Eventualities need to be binding not just to a plan, yet to the execution of
the plan (the outputs). Bitcoin's Eventuality definition short-cutted this
under an honest multisig assumption, causing the following issue:

If multisig n+1 is verifying multisig n's actions, as detailed in
multi-multisig's document on multisig rotation, it'll check no outstanding
eventualities exist. If we solely bind to the plan, a malicious multisig n
could steal outbound payments yet cause the plan to be marked as successfully
completed.

By modifying the eventuality to also include the expected outputs, this is no
longer possible. Binding to the expected input is preserved in order to remain
binding to the plan (allowing two plans with the same output-set to co-exist).
2023-09-08 05:21:18 -04:00
Luke Parker
06a6cd29b0 Set nodelay on coordinator's P2P sockets 2023-09-06 22:57:33 -04:00
Luke Parker
2472ec7ba8 Don't attempt parsing truncated InInstructions 2023-09-02 17:18:04 -04:00
Luke Parker
69c3fad7ce cargo fmt 2023-09-02 16:32:42 -04:00
Luke Parker
bd9a05feef Document UTXO solvency modeling 2023-09-02 16:11:01 -04:00
Luke Parker
7d8e08d5b4 Use scale instead of bincode throughout processor-messages/processor DB
scale is canonical, bincode is not.
2023-09-02 07:54:09 -04:00
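The distinction matters because a canonical codec has exactly one encoding per value, so encodings can be hashed or compared byte-for-byte. A minimal example with illustrative fields, assuming a `parity-scale-codec` dependency with the `derive` feature:

```rust
use parity_scale_codec::{Decode, Encode};

#[derive(Debug, PartialEq, Encode, Decode)]
struct InInstructionExample {
  // Illustrative fields only.
  origin: [u8; 32],
  amount: u64,
}

fn main() {
  let original = InInstructionExample { origin: [0; 32], amount: 100 };
  let bytes = original.encode();
  let decoded = InInstructionExample::decode(&mut bytes.as_slice()).unwrap();
  assert_eq!(decoded, original);
  // SCALE is canonical: re-encoding a decoded value yields identical bytes,
  // so equality of encodings is equality of values.
  assert_eq!(decoded.encode(), bytes);
}
```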
Luke Parker
f7e49e1f90 Update Rust nightly
Supersedes #368.

Adds exceptions for unwrap_or_default due to preference against Default's
ambiguity.
2023-09-02 01:24:09 -04:00
Luke Parker
cd4c3a6c88 Correct publication of Completed Tributary TXs 2023-09-02 00:50:54 -04:00
Luke Parker
fddc605c65 Define a proper Topic (Zone + ID)
Removes the inefficiency of recognized_topic for attempt returning None if the
topic isn't recognized.
2023-09-01 01:21:15 -04:00
Luke Parker
2ad6b38be9 Prefix root keys in coordinator with "coordinator" to prevent conflicts with tributary 2023-09-01 01:00:24 -04:00
Luke Parker
fda90e23c9 Reduce and clarify data flow in Tributary scanner 2023-09-01 00:59:10 -04:00
Luke Parker
3f3f6b2d0c Properly route attempt around DkgConfirmer 2023-09-01 00:16:43 -04:00
Luke Parker
fa8ff62b09 Remove sender_i from DkgShares
It was a piece of duplicated data used to achieve context-less
(de)serialization. This new Vec code is a bit trickier to first read, yet
overall clean and removes a potential fault.

Saves 2 bytes from DkgShares messages.
2023-09-01 00:03:56 -04:00
Luke Parker
5113ab9ec4 Move SignCompleted to Unsigned to cause de-duplication amongst honest validators 2023-08-31 23:39:36 -04:00
Luke Parker
9b7cb688ed Have Batch contain Block and batch ID, ensuring eclipsed validators don't publish invalid shares
See prior commit message for more info.

With the plan for the batch sign ID to be just 5 bytes (potentially 4), this
does incur a +5 bytes cost compared to the ExternalBlock system *even in the
standard case*. The simplicity remains preferred at this time.
2023-08-31 23:04:39 -04:00
Luke Parker
9a5f8fc5dd Replace ExternalBlock with Batch
The initial TODO was simply to use one ExternalBlock per all batches in the
block. This would require publishing ExternalBlock after the last batch,
requiring knowing the last batch. While we could add such a pipeline, it'd
require:

1) Initial preprocesses using a distinct message from BatchPreprocess
2) An additional message sent after all BatchPreprocess are sent

Unfortunately, both would require tweaks to the SubstrateSigner which aren't
worth the complexity compared to the solution here, at least, not at this time.

While this will cause a Tributary signing a block whose total batch data
exceeds 25 kB to use multiple transactions, which could be optimized out by
'better' local data pipelining, that's an extreme edge case. Given the temporal
nature of each Tributary, it's also an acceptable edge.

This no longer achieves synchrony over external blocks accordingly. While
signed batches have synchrony, as they embed their block hash, batches being
signed don't have cryptographic synchrony on their contents. This means
validators who are eclipsed may produce invalid shares, as they sign a
different batch. The fix will be introduced in a follow-up commit.
2023-08-31 23:00:25 -04:00
Luke Parker
2dc35193c9 Handle batch n+1 being signed before batch n is 2023-08-31 22:09:34 -04:00
Luke Parker
9bf24480f4 Spawn an async test per P2P message to try and resolve latency issues 2023-08-31 02:35:50 -04:00
Luke Parker
3af9dc5d6f Tweak Heartbeat configuration so LibP2P can be expected to deliver messages within latency window 2023-08-31 01:33:52 -04:00
Luke Parker
148bc380fe Add assert on edge case requiring a validator with 34% and a broken invariant 2023-08-31 01:08:40 -04:00
Luke Parker
8dad62f300 Fix panic when post-verifying Precommits in log
Notes an edge case which enables invalid commit production.
2023-08-31 01:02:57 -04:00
Luke Parker
1e79de87e8 Remove contention between LibP2p spawned task and consumers via channels 2023-08-30 23:31:09 -04:00
Luke Parker
2f57a69cb6 Define BLOCK_PROCESSING_TIME, LATENCY_TIME in ms
Updates Tributary values to allow 999ms for block processing (from 2s) and
1667ms for latency (up from 1s).

The intent is to resolve #365. I don't know if this will, but it increases the
chances of success and these values should be fine in prod since Tributary is a
post-execution chain (making block processing time minimal).

Does embed the dagger of N::block_time() panicking if the block time in ms
doesn't cleanly divide by 1000.
2023-08-30 22:58:42 -04:00
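A sketch of the noted dagger, with a hypothetical combination of the constants (the real timeout math lives in the Tendermint crate):

```rust
// Millisecond-denominated constants per the commit above.
const BLOCK_PROCESSING_TIME: u32 = 999;
const LATENCY_TIME: u32 = 1667;

// Hypothetical derivation; any N::block_time() returning whole seconds
// carries the same dagger regardless of how the constants combine.
fn block_time() -> u32 {
  let ms = BLOCK_PROCESSING_TIME + (3 * LATENCY_TIME); // 999 + 5001 = 6000
  // Panics if the block time in ms doesn't cleanly divide by 1000.
  assert!(ms % 1000 == 0, "block time in ms doesn't divide into seconds");
  ms / 1000
}

fn main() {
  assert_eq!(block_time(), 6);
}
```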
Luke Parker
493a222421 Use a timeout in case the JSON-RPC notifications have unexpected behavior 2023-08-30 17:57:33 -04:00
Luke Parker
d5a19eca8c Add a notification system for finalizations to serai-client, use in coordinator 2023-08-30 17:25:04 -04:00
Luke Parker
e9fca37181 Intent based de-duplication in MessageQueue 2023-08-29 17:05:01 -04:00
Luke Parker
83c25eff03 Remove no longer necessary async from monero SignatableTransaction::sign 2023-08-29 16:20:21 -04:00
Luke Parker
285422f71a Add a full-stack mint and burn test for Bitcoin and Monero
Fixes where ram_scanned is updated in processor. The prior version, while safe,
would redo massive amounts of work during periods of inactivity. It also hit an
undocumented invariant where get_eventuality_completions assumes new blocks,
yet redone work wouldn't have new blocks.

Modifies Monero's generate_blocks to return the hashes of the generated blocks.
2023-08-28 21:17:22 -04:00
Luke Parker
1838c37ecf Full stack test framework 2023-08-27 18:37:12 -04:00
Luke Parker
a3649b2062 Move where we trigger KeyGen to avoid a race condition
We only expect processor messages when we have the relevant Tributary. We
queued Tributary creation, yet then kicked off processor messages. We need to
wait until the Tributary is actually created to kick off processor messages.
2023-08-27 18:23:45 -04:00
Luke Parker
1bd14163a0 Log all output to console when running in CI
Attempts to debug https://github.com/serai-dex/serai/actions/runs/5992819682/job/16252622433,
which seems to be a legitimate test failure.
2023-08-27 17:51:36 -04:00
Luke Parker
89a6ee9290 Silence warning when building in release 2023-08-27 15:39:09 -04:00
Luke Parker
a66994aade Use FCMP implementation of BP+ in monero-serai (#344)
* Add in an implementation of BP+ based off the paper, intended for clarity and review

This was done as part of my work on FCMPs from Monero, and is copied from https://github.com/kayabaNerve/full-chain-membership-proofs

* Remove crate structure of BP+

* Remove arithmetic circuit code

* Remove AC/VC generators code

* Remove generator transcript

Monero uses non-transcripted static generators.

* Further trimming of generators

* Remove the single range proof

It's unused by Monero and accordingly unhelpful.

* Work on getting BP+ to compile in its new env

* Correct BP+ folder name

* Further tweaks to get closer to compiling

* Remove the ScalarMatrix file

It's only used for AC proofs

* Compiles, with tests passing

* Lock BP+ to Ed25519 instead of the generic Ciphersuite

* Resolve most warnings in BP+

* Make existing bulletproofs test easier to read

* Further strip generators

* Swap G/H as Monero did

* Replace RangeCommitment with Commitment

* Hard-code BP+ h to Ed25519's generator

* Use pub(crate) for BP+, not pub

* Replace initial_transcript with hash_plus

* Rename hash_plus to initial_transcript

* Finish integrating the FCMP BP+ impl

* Move BP+ folder

* Correct no-std support

* Rename "long_n" to eta

* Add note on non-prime order dfg points
2023-08-27 15:33:17 -04:00
Luke Parker
34ffd2fa76 Run coordinator e2e tests one at a time due to usage of mdns causing cross-talk 2023-08-27 05:40:36 -04:00
Luke Parker
775353f8cd Add testing for Completed to coordinator e2e tests
Two commits ago stubbed in support, sufficient for a honest-validator
environment.
2023-08-27 05:34:04 -04:00
Luke Parker
c9b2490ab9 Tweak tributary_test to handle a one-block variance
Prior to the previous commit, whatever async scheduling occurred caused them to
all have the same tip. Now, some are one block ahead of others. This adds
tolerance for that, as it's an acceptable variance, so long as it's solely one
block.
2023-08-27 05:28:36 -04:00
Luke Parker
2db53d5434 Use &self for handle_message and sync_block in Tributary
They used &mut self to prevent execution at the same time. This uses a lock
over the channel to achieve the same security, without requiring a lock over
the entire tributary.

This fixes post-provided Provided transactions. sync_block waited for the TX to
be provided, yet it never would as sync_block held a mutable reference over the
entire Tributary, preventing any other read/write operations of any scope.

A timeout increased (bc2f23f72b) due to this bug
not being identified has been decreased back, thankfully.

Also shims in basic support for Completed, which was the WIP before this bug
was identified.
2023-08-27 05:07:11 -04:00
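A minimal sketch of the pattern, with hypothetical message plumbing: the Mutex guards only the channel, so `&self` methods may run concurrently with other reads/writes of the Tributary.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::sync::Mutex;

struct Tributary {
  // Locking just the channel serializes message processing without mutably
  // borrowing (and thus freezing) the entire Tributary, as &mut self did.
  messages: Mutex<Receiver<Vec<u8>>>,
  sender: Sender<Vec<u8>>,
}

impl Tributary {
  fn handle_message(&self, msg: Vec<u8>) {
    self.sender.send(msg).unwrap();
    // Only one caller drains the queue at a time, yet other &self operations
    // (such as providing transactions) remain possible meanwhile.
    let queue = self.messages.lock().unwrap();
    while let Ok(msg) = queue.try_recv() {
      let _ = msg; // ... actual processing ...
    }
  }
}

fn main() {
  let (sender, receiver) = channel();
  let tributary = Tributary { messages: Mutex::new(receiver), sender };
  tributary.handle_message(vec![1, 2, 3]);
}
```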
Luke Parker
6268bbd7c8 Replace "connected" with "external" in Processor doc 2023-08-27 01:57:17 -04:00
Luke Parker
bc2f23f72b Further increase GH timeout in response to https://github.com/serai-dex/serai/actions/runs/5988202109/job/16243154650
This should be egregious unless the GitHub CI is so non-performant it's breaking
Tendermint consensus's synchrony expectations, which likely points to our own
code being unviable.

This solely serves as an immediate fix to the problem, not a justification of
the unevaluated performance.
2023-08-27 01:26:42 -04:00
Luke Parker
72337b17f5 Test the Sign flow across the Coordinators 2023-08-26 21:45:53 -04:00
Luke Parker
36b193992f Correct handling of known signers in batch test
Putting a signer first doesn't work because signers can only publish once a
supermajority syncs. Now, the code uses an excluded signer (instead of an
included signer) to determine the signing set.

Further simplifications are available. Also adds accurate documentation on
latency/sleep reasoning.
2023-08-26 21:38:58 -04:00
Luke Parker
37b6de9c0c Add SRI balance/transfer functions to serai_client 2023-08-26 21:37:54 -04:00
Luke Parker
22f3c9e58f Stop trying to publish a Batch if another node does 2023-08-26 21:37:21 -04:00
Luke Parker
9adefa4c2c Add code to handle a race condition around first_preprocess 2023-08-26 21:35:43 -04:00
Luke Parker
c245bcdc9b Extend Batch test with SubstrateBlock/arb Batch signing 2023-08-25 21:37:36 -04:00
Luke Parker
f249e20028 Route KeyPair so Tributary can construct SignId as needed 2023-08-25 18:37:22 -04:00
Luke Parker
3745f8b6af Increase GitHub CI timeouts for the coordinator tests 2023-08-25 14:09:44 -04:00
Luke Parker
ba46aa76f0 Update libp2p
Adds 17 new crates, which I'm extremely unhappy about. Unfortunately, it's
needed to resolve a security issue (RUSTSEC-2023-0052) and is inevitable.

Closes #355.
2023-08-25 00:01:58 -04:00
Luke Parker
8c1d8a2658 Only emit Preprocesses/Shares when participating 2023-08-24 23:50:19 -04:00
Luke Parker
2702384c70 Coordinator Batch signing test 2023-08-24 23:48:50 -04:00
Luke Parker
d3093c92dc Uncontroversial cargo update 2023-08-24 22:06:08 -04:00
Luke Parker
32df302cc4 Move recognized_id from a channel to an async lambda
Fixes a race condition. Also fixes recognizing batch IDs.
2023-08-24 21:55:59 -04:00
Luke Parker
ea8e26eca3 Use an empty key for Batch's SignId 2023-08-24 20:39:34 -04:00
Luke Parker
bccdabb53d Use a single Substrate signer, per intentions in #227
Removes key from Update as well, since it's no longer variable.
2023-08-24 20:30:50 -04:00
Luke Parker
b91bd44476 Support multiple batches per block by the coordinator
Also corrects an assumption block hash == batch ID.
2023-08-24 19:13:18 -04:00
Luke Parker
dc2656a538 Don't bind to the entire batch, solely the network and ID
This avoids needing to know the Batch in advance, avoiding a race condition.
2023-08-24 18:52:33 -04:00
Luke Parker
67109c648c Use an actual, cryptographically-binding ID for batches in SignId
The intent system expected one.
2023-08-24 18:44:09 -04:00
Luke Parker
61418b4e9f Update Update and substrate_signers to [u8; 32] from Vec<u8>
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed length, fitting within
the Copy boundary, there's no reason for it to not use an array.
2023-08-24 13:24:56 -04:00
Luke Parker
506ded205a Misc cargo update
Removes 5 crates (presumably duplicated versions).
2023-08-23 13:24:21 -04:00
Luke Parker
9801f9f753 Update to latest Serai substrate (removes statement store) 2023-08-23 13:21:17 -04:00
Luke Parker
8a4ef3ddae jsonrpsee 0.16.3
Removes 3 crates from tree. Now RUSTSEC-2023-0053 is only held up by a lack of
libp2p-websocket release (https://github.com/libp2p/rust-libp2p/pull/4378).
2023-08-23 13:18:51 -04:00
Luke Parker
bfb5401336 Handle v1 Monero TXs spending v2 outputs
Also tightens read/fixes a few potential panics.
2023-08-22 23:26:59 -04:00
Luke Parker
f2872a2e07 Silence RUSTSEC tracked by https://github.com/serai-dex/serai/355
Since it isn't resolvable immediately, it's better to track it than effectively
neuter the deny job (as it would always error).
2023-08-22 22:15:13 -04:00
Luke Parker
c739f00d0b cargo update
1) Updates to time 0.3.27, which allows modern serde_derive's (post binary
fiasco)
2) Updates rustls-webpki to a version not affected by RUSTSEC-2023-0053
3) Updates wasmtime to 12
(d5923df083)
2023-08-22 22:06:06 -04:00
Luke Parker
310a09b5a4 clippy fix 2023-08-22 01:00:18 -04:00
Luke Parker
c65bb70741 Remove SlashVote per #349 2023-08-21 23:48:33 -04:00
Luke Parker
718c44d50e cargo update
One update replaces one package with another.

The Substrate pulls get ed25519-zebra (still in tree) to dalek 4.0. While noise
still pulls in 3.2, for now, this does let us drop one crate.
2023-08-21 22:58:45 -04:00
Luke Parker
1f45c2c6b5 cargo fmt 2023-08-21 10:40:10 -04:00
Luke Parker
76a30fd572 Support no-std builds of bitcoin-serai
Arguably not meaningful, as it adds the scanner yet not the RPC, and no signing
code since modular-frost doesn't support no-std yet. It's a step in the right
direction though.
2023-08-21 08:56:37 -04:00
Luke Parker
a52c86ad81 Don't label Monero nodes invalid for returning invalid keys in outputs
Only recently (I believe the most recent HF) were output keys checked to be
valid. This means returned keys may be invalid points, despite being the
legitimate keys for the specified outputs.

Does still label the node as invalid if it doesn't return 32 bytes,
hex-encoded.
2023-08-21 03:07:20 -04:00
Luke Parker
108504d6e2 Increase MAX_ADDRESS_LEN
In response to
https://gist.github.com/tevador/50160d160d24cfc6c52ae02eb3d17024?permalink_comment_id=4665372#gistcomment-4665372
2023-08-21 02:56:10 -04:00
Luke Parker
27cd2ee2bb cargo fmt 2023-08-21 02:38:27 -04:00
Luke Parker
dc88b29b92 Add keep-alive timeout to coordinator
The Heartbeat was meant to serve for this, yet no Heartbeats are fired when we
don't have active tributaries.

libp2p does offer an explicit KeepAlive protocol, yet it's not recommended in
prod. While this likely has the same pitfalls as LibP2p's KeepAlive protocol,
it's at least tailored to our timing.
2023-08-21 02:36:03 -04:00
Luke Parker
45ea805620 Use an optimistic (and twice as long in the worst case) sleep for KeyGen
Possible due to Substrate having an RPC.
2023-08-21 02:31:22 -04:00
Luke Parker
1e68cff6dc Bump bitcoin-serai to 0.3.0 for publication 2023-08-21 01:26:54 -04:00
Luke Parker
906d3b9a7c Merge pull request #348 from serai-dex/current-crypto-crates
Current crypto crates
2023-08-21 01:24:16 -04:00
akildemir
e319762c69 use half-aggregation for tm messages (#346)
* dalek 4.0

* cargo update

Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.

* use half-aggregation for tm messages

* fmt

* fix pr comments

* cargo update

Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given the political agenda they
seem to be trying to accomplish with force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation

* add Rs() api to  SchnorrAggregate

* Correct serai-processor-tests to dalek 4

* fmt + deny

* Slash malevolent validators (#294)

* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap an unsigned TX with a signed TX whose nonce is 0
(leading to a faulty block).

The extra protection added here sorts signed, then concats.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant further robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Add DSTs to Tributary TX sig_hash functions

Prevents conflicts with other systems/other parts of the Tributary. See the
sketch after this log entry.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 01:22:00 -04:00
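The DST addition in the merged commits above amounts to domain-separating the signature hash. A minimal sketch, assuming blake3 purely for illustration (the actual Tributary hash function and tag values may differ):

  fn sig_hash(dst: &'static [u8], genesis: [u8; 32], tx: &[u8]) -> [u8; 32] {
    let mut hasher = blake3::Hasher::new();
    hasher.update(dst);       // unique tag per TX kind, e.g. b"TributaryVote"
    hasher.update(&genesis);  // binds the hash to this specific Tributary
    hasher.update(tx);
    *hasher.finalize().as_bytes()
  }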
Luke Parker
f161135119 Add coins/bitcoin audit by Cypher Stack 2023-08-21 01:20:09 -04:00
Luke Parker
d2a0ff13f2 Merge branch 'bitcoin-audit' into develop 2023-08-21 01:16:50 -04:00
Luke Parker
498aa45619 Ban only versions of serde with binary blobs
Does downgrade time from 0.3.26 to 0.3.25 due to time banning > 1.0.171.
Hopefully that's also relaxed soon...
2023-08-21 00:57:22 -04:00
akildemir
39ce819876 Slash malevolent validators (#294)
* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap an unsigned TX with a signed TX whose nonce is 0
(leading to a faulty block).

The extra protection added here sorts signed, then concats; see the sketch
after this log entry.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant further robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 00:28:23 -04:00
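The ordering protection from "Various small tweaks" above, as a sketch; the transaction type is a hypothetical stand-in:

  struct Tx { nonce: u32 } // stand-in for signed transactions

  // Sort only the signed set, then concatenate. Sorting unsigned || signed as
  // one slice risked an unstable sort swapping an unsigned TX (given nonce 0)
  // with a signed TX whose nonce is 0, yielding a faulty block.
  fn block_order(unsigned: Vec<Tx>, mut signed: Vec<Tx>) -> Vec<Tx> {
    signed.sort_by_key(|tx| tx.nonce); // Rust's sort_by_key is also stable
    let mut txs = unsigned;            // already properly ordered
    txs.extend(signed);
    txs
  }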
Luke Parker
8973eb8ac4 fmt + deny 2023-08-20 00:14:53 -04:00
Luke Parker
34397b31b1 Correct serai-processor-tests to dalek 4 2023-08-19 16:34:27 -04:00
Luke Parker
96583da3b9 cargo update
Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given the political agenda they
seem to be trying to accomplish with force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation
2023-08-19 01:50:38 -04:00
Luke Parker
34c6974311 Merge branch 'dalek-4.0' into develop 2023-08-17 02:00:36 -04:00
Luke Parker
5a4db3efad cargo update
Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.
2023-08-17 01:25:50 -04:00
Luke Parker
28399b0310 cargo update 2023-08-15 05:39:23 -04:00
Luke Parker
426e89d6fb Extend sleeps of coordinator CI tests
The CI does now get past the first check to the second one, hence the addition
of similar sleeps to the second and subsequent checks.
2023-08-15 05:31:06 -04:00
Luke Parker
4850376664 Try to fix coordinator CI by sleeping longer when in CI 2023-08-14 16:02:14 -04:00
Luke Parker
f366d65d4b Add RUST_BACKTRACE=1 to .github
Should help resolve the coordinator tests, which are failing in the current CI.
Presumably a timeout which is too low for the CI's slower runners.
2023-08-14 11:59:26 -04:00
akildemir
e680eabb62 Improve batch handling (#316)
* restrict batch size to ~25kb

* add batch size check to node

* rate limit batches to 1 per serai block

* add support for multiple batches per block

* fix review comments

* Misc fixes

Doesn't yet update tests/processor until data flow is inspected.

* Move the block from SignId to ProcessorMessage::BatchPreprocesses

* Misc clean up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-14 11:57:38 -04:00
Luke Parker
a3441a6871 Don't have publish return the 'hash' 2023-08-14 08:18:19 -04:00
Luke Parker
9895ec0f41 Add note to processor 2023-08-14 06:54:34 -04:00
Luke Parker
666bb3e96b E2E test coordinator KeyGen 2023-08-14 06:54:17 -04:00
Luke Parker
acc19e2817 Stop attempting to call set_keys if another validator does
This prevents this function from hanging ad infinitum.
2023-08-14 06:53:23 -04:00
Luke Parker
5e02f936e4 Perform MuSig signing of generated keys 2023-08-14 06:08:55 -04:00
Luke Parker
6b41c91dc2 Correct TendermintMachine sleep on construction
For logging purposes, I added code to handle negative time till start. I forgot
to only sleep on positive time till start.

Should fix the recent CI failure.
2023-08-13 05:57:14 -04:00
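The shape of the fix, as a sketch using std's SystemTime (whose duration_since errors on a negative delta) and tokio:

  use std::time::{Duration, SystemTime, UNIX_EPOCH};

  async fn sleep_until_start(start_time_unix: u64) {
    let start = UNIX_EPOCH + Duration::from_secs(start_time_unix);
    // duration_since returns Err if start is in the past, so a negative time
    // till start (still useful to log) never causes a sleep.
    if let Ok(delay) = start.duration_since(SystemTime::now()) {
      tokio::time::sleep(delay).await;
    }
  }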
Luke Parker
8992504e69 Correct Dockerfile with typo 2023-08-13 05:11:49 -04:00
Luke Parker
e2901cab06 Revert round-advance on TendermintMachine::new if local clock is ahead of block start
It was improperly implemented, as it assumed rounds had a constant time
interval, which they do not. It also is against the spec and was meant to
absolve us of issues with poor performance when post-starting blockchains. The
new, and much more proper, workaround for the latter is a 120-second delay
between the Substrate time and the Tributary start time.
2023-08-13 04:35:46 -04:00
Luke Parker
13a8b0afc1 Add panic-handlers which exit on any panic
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
2023-08-13 04:30:49 -04:00
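One standard way to do this with std's panic hook; a sketch, not necessarily the exact code added:

  use std::{panic, process};

  fn exit_on_panic() {
    let default_hook = panic::take_hook();
    panic::set_hook(Box::new(move |info| {
      // Print the usual panic message, then take the whole program down,
      // instead of tokio merely killing the panicking task.
      default_hook(info);
      process::exit(1);
    }));
  }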
Luke Parker
7e71450dc4 Bug fixes and log statements
Also shims next nonce code with a fine-for-now piece of code which is unviable
in production, yet should survive testnet.
2023-08-13 04:03:59 -04:00
Luke Parker
049fefb5fd Move apt calls in Docker to enable caching 2023-08-12 22:51:12 -04:00
Luke Parker
6a89cfcd08 Update to further-pruned substrate 2023-08-09 05:00:44 -04:00
Luke Parker
4571a8ad85 cargo update 2023-08-08 20:24:23 -04:00
Luke Parker
96db784b10 Increase the reproducible-runtime timeout as it was getting hit in CI 2023-08-08 18:36:31 -04:00
Luke Parker
dd523b22c2 Correct transcript minimum version requirements 2023-08-08 18:32:13 -04:00
Luke Parker
fa406c507f Update crypto/ package versions
On a branch while bitcoin-serai wraps up its audit.
2023-08-08 18:19:01 -04:00
Luke Parker
ad51c123e3 Fix Tendermint bug from prior commit 2023-08-08 16:33:09 -04:00
Luke Parker
f6f945e747 Add a LibP2P instantiation to coordinator
It's largely unoptimized, and not yet exclusive to validators, yet has basic
sanity (using message content for ID instead of sender + index).

Fixes bugs as found. Notably, we used a time in milliseconds where the
Tributary expected seconds.

Also has Tributary::new jump to the presumed round number. This reduces slashes
when starting new chains (whose times will be before the current time) and was
the only way I was able to observe successful confirmations given current
surrounding infrastructure.
2023-08-08 15:12:47 -04:00
Luke Parker
0dd8aed134 Expand cluster-sm/local testnet to 4 validators for BFT where f=1 2023-08-06 13:42:16 -04:00
Luke Parker
cee788eac3 Test the Coordinator emits KeyGen
Mainly just a test that the full stack is properly set up and we've hit basic
functioning for further testing.
2023-08-06 12:38:44 -04:00
Luke Parker
bebe2fae0e cargo update due to substrate changes, again 2023-08-06 10:11:44 -04:00
Luke Parker
adb48cd737 Use 1.71.1 for reproducible runtime
1.71.1 fixes a CVE which wasn't at risk here yet still should be resolved.
2023-08-06 10:11:01 -04:00
Luke Parker
363a88c4ec Again update after latest series of substrate prunes 2023-08-06 08:04:56 -04:00
Luke Parker
5ba1dd2524 cargo update 2023-08-06 02:36:16 -04:00
Luke Parker
784c7b47a4 Add Immunefi to README 2023-08-04 12:28:15 -04:00
Luke Parker
376b36974f Stub binaries' code when features binaries is not set
Allows running `cargo build` in monero-serai and message-queue without
erroring, since it'd automatically try to build the binaries which require
additional features.

While we could make those features not optional, it'd increase time to build
and disk space required, which is why the features exist for monero-serai and
message-queue in the first place (since both are frequently used as libs).
2023-08-02 14:43:49 -04:00
Luke Parker
38ad1d4bc4 Add msrv definitions to common and crypto
This will effectively add msrv protections to the entire project as almost
everything grabs from these.

Doesn't add msrv to coins as coins/bitcoin is still frozen.

Doesn't add msrv to services since cargo msrv doesn't play nice with anything
importing the runtime.
2023-08-02 14:17:57 -04:00
Luke Parker
aab8a417db Have the Coordinator scan the Substrate genesis block
Also adds a workflow for running tests/coordinator.
2023-08-02 12:18:50 -04:00
Luke Parker
d5c787fea2 Add initial coordinator e2e tests 2023-08-01 19:00:48 -04:00
Luke Parker
e3a70ef0dc tests/processor clippy 2023-08-01 05:33:08 -04:00
Luke Parker
044b299cda cargo +nightly fmt (again) 2023-08-01 02:51:58 -04:00
Luke Parker
53d86e2a29 Latest clippy 2023-08-01 02:49:31 -04:00
Luke Parker
c338b92067 Update to latest nightly Rust 2023-08-01 00:48:11 -04:00
Luke Parker
3c38a0ec11 cargo +nightly fmt 2023-08-01 00:47:36 -04:00
Luke Parker
88f88b574c Sleep for up to a minute when creating a Coordinator if the network RPC has yet to boot
We've had a pair of CI failures due to calling add_block and being unable to
form an RPC connection. This attempts to fix this
2023-07-31 00:13:58 -04:00
Luke Parker
9f143a9742 Replace "coin" with "network"
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.

Also corrects dated documentation for how a key pair is confirmed under the
validator-sets pallet.
2023-07-30 16:11:30 -04:00
Luke Parker
2815046b21 Save the scheduler to disk
This is a horrible impl which does a full serialization of everything on every
change. It's just the minimal changes to resolve this TODO and enable testnet
deployment.
2023-07-30 14:16:33 -04:00
Luke Parker
d8033504c8 Restore ports over expose so docker compose can be used for tests
Reduces security when considering a production deployment.
2023-07-30 13:37:10 -04:00
Luke Parker
48ad5fe20b Add icon as a svg 2023-07-30 12:51:17 -04:00
Luke Parker
4c801df4f2 Don't run apps in Docker as root 2023-07-30 08:04:36 -04:00
Luke Parker
9b79c4dc0c Test Eventuality completion via CoordinatorMessage::Completed 2023-07-30 07:00:54 -04:00
Luke Parker
e010d66c5d Only emit SignTransaction once
Due to the ordered message-queue, there's no benefit to multiple emissions as
there's no risk a completion will be missed. If it has yet to be read, sending
another which will only be read after isn't helpful.

Simplifies code a decent bit.
2023-07-30 06:44:55 -04:00
Luke Parker
3d91fd88a3 Drastically increase placeholder fee for Monero
Needed for Docker tests.
2023-07-29 20:29:45 -04:00
Luke Parker
f988c43f8d Extend send_test with TX signing
Monero fails with fee_too_low, which this commit is meant to document.
2023-07-29 08:34:08 -04:00
Luke Parker
857e3ea72b cargo update 2023-07-29 07:11:53 -04:00
Luke Parker
8b14bb54bb Correct the fee amortization algorithm for an edge case
This is technically over-aggressive, as a dropped output will reduce the fee,
yet this edge case is so minor (a few fractions of a cent) that making the flow
not over-aggressive is by no means worth it.

Fixes the crash causable by the WIP send_test.
2023-07-29 07:05:55 -04:00
Luke Parker
f78332453b Start work on a send_test
Stops work where it does due to the processor panicking for Monero, yet not
Bitcoin, under what's present.

Cleans up processor tests to consolidate shared code.
2023-07-29 04:26:24 -04:00
Luke Parker
22da7aedde Implement a pretty Debug for various objects 2023-07-29 04:12:10 -04:00
Luke Parker
2641b83b3e Use resolver 2 2023-07-27 23:42:24 -04:00
Luke Parker
4980e6b704 cargo update 2023-07-27 23:38:04 -04:00
Luke Parker
c091b86919 Correct reproducible-runtime deny definition 2023-07-27 22:25:05 -04:00
Luke Parker
a8c7bb96c8 Add a crate to test the runtime can be reproducibly built 2023-07-27 21:42:26 -04:00
Luke Parker
09a95c9bd2 Rename deploy to orchestration
Also updates README to note prior unnoted folders.
2023-07-27 03:19:35 -04:00
Luke Parker
101da0a641 Use a BatchVerifier in reserialize_chain 2023-07-27 03:05:39 -04:00
Luke Parker
6d5851a9ee Use lz4 instead of zstd for the DB
zstd was recommended for the base layer only, due to its CPU requirements. That
was a misreading on my behalf.

lz4 gets ~5% better compression than snappy with ~30% faster performance. zstd
does ~25% better than lz4 yet at ~30% of the performance.
2023-07-26 14:05:10 -04:00
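With the rust-rocksdb bindings, the switch is roughly as follows (a sketch; only the compression option is the point here):

  use rocksdb::{DB, DBCompressionType, Options};

  fn open_db(path: &str) -> DB {
    let mut opts = Options::default();
    opts.create_if_missing(true);
    // lz4: ~5% better compression than snappy with ~30% faster performance.
    // zstd compresses ~25% better still, yet at ~30% of the performance.
    opts.set_compression_type(DBCompressionType::Lz4);
    DB::open(&opts, path).expect("couldn't open DB")
  }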
Luke Parker
64c309f8db Test Batches with Instructions 2023-07-26 14:02:17 -04:00
Luke Parker
e00aa3031c Have Shorthand::Raw contain RefundableInInstruction, not an encoded RII 2023-07-26 14:00:30 -04:00
Luke Parker
f8afb040dc Remove ApplicationCall
We can simply inline `Dex` into the InInstruction enum.
2023-07-26 12:45:51 -04:00
Luke Parker
7823ece4fe Test multiple batches, re-attempts, randomized selected signers 2023-07-26 05:55:47 -04:00
Luke Parker
b205391b28 Add libssl-dev to message-queue build packages 2023-07-26 04:33:36 -04:00
Luke Parker
32a937ddb9 Move the Monero Dockerfile to alpine 2023-07-26 04:22:41 -04:00
Luke Parker
bdbeedc723 Add pkg-config to Dockerfiles 2023-07-26 03:57:13 -04:00
Steven Chang
f306618e84 docs/protocol/Staking.md: delete 2023-07-26 03:46:55 -04:00
Luke Parker
89865b549c Update Serai node Dockerfile 2023-07-26 03:45:30 -04:00
Luke Parker
39eae2795f Update Dockerfiles to bookworm, successfully
Removes use of the Parity CI image. Always uses slim variants.
2023-07-26 03:06:01 -04:00
Luke Parker
0eb56406a4 Further dependency minimization for build times 2023-07-26 03:03:44 -04:00
Luke Parker
afb385fba4 Use spin's Once for OnceLock 2023-07-26 02:59:24 -04:00
Luke Parker
821f5d8de4 Restore create_if_missing to RocksDB code 2023-07-25 23:00:10 -04:00
Luke Parker
3862731a12 Minimize features pulled in to try and reduce build times 2023-07-25 22:29:39 -04:00
Luke Parker
42eb674d1a Print docker build 2023-07-25 22:18:20 -04:00
Luke Parker
1af5f1bcdc Revert from bookworm to bullseye
Parity's CI uses bullseye and bullseye has an incompatible, dynamically linked
openssl.
2023-07-25 22:15:33 -04:00
Luke Parker
32435d8a4c Consolidate RocksDB code
Moves explicitly to zstd. RocksDB recommends zstd, or at least lz4 over snappy,
and this minimizes which dependencies we pull in.
2023-07-25 21:43:27 -04:00
Luke Parker
49ce792b91 Move from bullseye-slim to bookworm-slim
Also moves from bullseye in the processor to bullseye-slim. This requires
adding back the apt install, yet the tests/docker cache should handle it.

Minimizes processor image surface, hopefully also shrinks the CI down a bit.
2023-07-25 21:10:28 -04:00
Luke Parker
4949793c3f Clear docker cache after building in CI
We're at the CI storage limits, so hopefully this helps.
2023-07-25 21:09:40 -04:00
Luke Parker
61d46dccd4 Rename scan_test to batch_test 2023-07-25 18:10:05 -04:00
Luke Parker
88a1fce15c Test the processor's batch signing
Updates message-queue to try recv every second, not every 5.
2023-07-25 18:09:23 -04:00
Luke Parker
a2493cfafc Sub-CoordinatorMessage -> CoordinatorMessage via From/Into 2023-07-25 17:33:05 -04:00
Luke Parker
e3de64d5ff Check the processors picked up the received input 2023-07-24 22:11:58 -04:00
Luke Parker
ecd0457d5b clippy fixes 2023-07-24 21:49:51 -04:00
Luke Parker
7990ee689a Send to a processor from a test
Mainly here to build out the infra. Does not automate checking
recipience/batch creation yet.
2023-07-24 20:06:05 -04:00
Luke Parker
f05e909d0e Fix which key is used to index substrate_signers on ScannerEvent::Block
First notably bug found by docker tests.
2023-07-24 19:38:31 -04:00
Luke Parker
5e565fa3ef Correct when the Processor starts using the first key
It waited for CONFIRMATIONS + 1 confirmations, instead of CONFIRMATIONS
confirmations.

Also adds a lib interface to access the coin traits and its constants.
2023-07-24 15:36:35 -04:00
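The off-by-one, sketched with illustrative names (not the processor's actual API):

  // A block at height h has (tip - h + 1) confirmations, as the block itself
  // counts as one. The bug effectively demanded one additional block.
  fn first_key_active(tip: u64, key_block: u64, confirmations: u64) -> bool {
    (tip - key_block + 1) >= confirmations
  }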
Luke Parker
6df1b46313 Don't use dbg for printing stdout/stderr
They are byte buffers, not strings. A pretty print has been added accordingly.
2023-07-24 15:35:43 -04:00
Luke Parker
24dba66bad cargo update 2023-07-24 04:54:13 -04:00
Luke Parker
fd585d496c Resolve #321 2023-07-24 04:53:59 -04:00
Luke Parker
5703591eb2 Extend criteria to run Docker tests
The unit tests should be sufficient for these cases, making this extraneous, yet
better to be complete than at risk.
2023-07-24 02:56:58 -04:00
Luke Parker
9ac3b203c8 Fix panic causable by remote node 2023-07-24 02:53:54 -04:00
Luke Parker
23e1c9769c dalek 4.0 2023-07-23 14:32:14 -04:00
Luke Parker
8e6e05ae2d Set better logging defaults for Docker tests 2023-07-23 10:13:11 -04:00
Luke Parker
713660c79c Make key_gen a gadget, add ConfirmKeyPair 2023-07-22 05:10:40 -04:00
Luke Parker
cb8c8031b0 Correct retrieval of LastTagTime when the Docker image doesn't already exist 2023-07-22 04:37:45 -04:00
Luke Parker
d07447fe97 Implement an (almost) full Key Gen test for processor's Docker tests
It doesn't confirm the key pair yet.

Adds the infra needed to test processors against each other.
2023-07-22 04:06:44 -04:00
Luke Parker
c26beae0f9 Only rebuild Docker images when their source has been modified 2023-07-22 02:40:14 -04:00
Luke Parker
818215b570 Correct Dockerfile caching 2023-07-22 01:12:39 -04:00
Luke Parker
79943c3a6c MessageQueue::new 2023-07-22 01:12:15 -04:00
Luke Parker
076a8e4d62 Add the requirement for a debug Serai node to running tests documentation 2023-07-21 15:11:22 -04:00
Luke Parker
ffd1457927 Correct message-queue Dockerfile
It worked with my cache yet not without cache.
2023-07-21 15:08:37 -04:00
Luke Parker
523a055b74 Add processor Docker tests
Adds tests/docker for code common to Docker-based tests.
2023-07-21 14:08:42 -04:00
Luke Parker
624fb2781d Update how RPCs are handled
The processor now takes three vars and joins them itself. message-queue uses a
single argument, with defaults, as it's a service we control.
2023-07-21 14:01:42 -04:00
Luke Parker
641077a089 Use non-slim variants to remove needing to apt additional packages
Prevents needing to rebuild all the time via allowing cache hits.
2023-07-21 13:58:54 -04:00
Luke Parker
298d1fd3ba Slim coins and processor Dockerfiles 2023-07-21 04:27:14 -04:00
Luke Parker
92c3403698 Slim down the message-queue Dockerfile
While I tried Alpine, RocksDB won't build unless it's statically linked. Then
async-trait (and proc-macros in general) won't compile when statically linked.
2023-07-21 04:07:32 -04:00
Luke Parker
37af8b51b3 Fallback to pgrep if pidof is unavailable 2023-07-21 03:37:48 -04:00
Luke Parker
900298b94b CI tweaks 2023-07-20 19:34:10 -04:00
Luke Parker
9effd5ccdc Add a Docker-based test for the message-queue service 2023-07-20 18:53:11 -04:00
Luke Parker
ceeb57470f Print when ConnectionErrors occur in reserialize_chain 2023-07-20 18:53:11 -04:00
Steven Chang
69454fa9bb .rustfmt.toml: add edition 2023-07-20 15:28:03 -04:00
Luke Parker
5121ca7519 Handle the minimum relay fee 2023-07-20 01:20:28 -04:00
Luke Parker
1eb3b364f4 Correct dust constant 2023-07-20 00:29:31 -04:00
Luke Parker
f66fe3c1cb 3.10 Remove use of Network::Bitcoin
All uses were safe due to addresses being converted to script_pubkeys which
don't embed their network. The only risk of there being an issue is if a
future address spec did embed the net ID into the script_pubkey and that was
moved to.

This resolves the audit note and does offer that tightening.
2023-07-20 00:27:56 -04:00
Luke Parker
6f9d02fdf8 3.11 Better document API expectations 2023-07-19 23:51:21 -04:00
Luke Parker
28613400b8 message-queue and processor Dockerfiles 2023-07-19 20:13:16 -04:00
Luke Parker
b6579d5a2a Update Dockerfiles
Cleans files, updates Bitcoin and Monero versions, fixes Monero immediately
exiting.
2023-07-19 19:22:49 -04:00
Luke Parker
c2f32e7882 Update to FROST v14 2023-07-19 15:48:34 -04:00
Justin Berman
228e36a12d monero-serai: fee calculation parity with Monero's wallet2 (#301)
* monero-serai: fee calculation parity with Monero's wallet2

* Minor lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-19 15:06:05 -04:00
Luke Parker
38bcedbf11 Cargo.lock for prior commit 2023-07-19 00:58:46 -04:00
Luke Parker
98f9fc2c2f Pin to serde 1.0.167 due to https://github.com/monero-rs/monero-rs/issues/162 2023-07-19 00:53:03 -04:00
Luke Parker
d5cfb3fb25 Corrections from renaming Index to Nonce 2023-07-18 23:52:37 -04:00
Luke Parker
b1dbe1f50d hashbrown 0.14 in validator-sets 2023-07-18 23:25:15 -04:00
Luke Parker
1b57d655ed Update to subxt 0.29 2023-07-18 23:01:51 -04:00
Luke Parker
64402914ba Update substrate 2023-07-18 22:30:55 -04:00
Luke Parker
a7c9c1ef55 Integrate coordinator with MessageQueue and RocksDB
Also resolves a couple TODOs.
2023-07-18 01:53:51 -04:00
Luke Parker
a05961974a Lint a couple TODOs in the processor 2023-07-17 18:11:21 -04:00
Luke Parker
acc9495429 Use MessageQueue instead of MemCoordinator in processor
Also has it use RocksDB.
2023-07-17 18:02:29 -04:00
Luke Parker
344ac9cbfc Add ack signatures
Also modifies message signatures to be binding to from, not just from's key.
2023-07-17 17:40:34 -04:00
Luke Parker
6ccac2d0ab Add a message-queue connection to processor
Still needs love, yet should get us closer to starting testing.
2023-07-17 15:49:17 -04:00
Luke Parker
56f7037084 Correct get_o_indexes to work for 0-output TXs 2023-07-17 12:57:17 -04:00
Luke Parker
a0f8214d48 Resolve clippy 2023-07-17 10:53:47 -04:00
Luke Parker
5f93140ba5 Have reserialize_chain automatically retry on ConnectionError
Fixes expectations of formatting by expect as well.
2023-07-17 03:14:49 -04:00
Luke Parker
712f11d879 Correct the message-queue's handling of var 2023-07-17 02:01:31 -04:00
Luke Parker
808a633e4d Split CI to reduce tests run
common/ is now only run when common is edited. crypto/ when common/ or crypto/.
coins/ when common/ or crypto/ or coins/. The rest of the tests are run
whenever any package is edited (as they're all inter-connected).
2023-07-17 01:06:56 -04:00
Luke Parker
0a367bfbda Add common crate to access env variables
In the future, we should use a proper secret store (not just env variables).
This lets us update one block of code and not n in the future.
2023-07-17 00:53:05 -04:00
Luke Parker
845c2842b5 message-queue RocksDB + fix listening 2023-07-17 00:22:55 -04:00
Luke Parker
8543487db2 Document message-queue RPC methods 2023-07-16 20:53:58 -04:00
Luke Parker
a9072e6b1b Remove signature from get_next_message
Due to signature replaying, it's very annoying to provide meaningful data
access privacy. None of these messages should be private/have sensitive data
anyways though.
2023-07-16 20:38:13 -04:00
Luke Parker
b0c28a1cf0 Move message_queue over to deduplication via intents
Due to each service having multiple distinct clocks, we can't expect a stable
ordering except the ordering an intact message-queue provides. The messages
emitted should be consistent however, solely with unknown order, which is why
we can craft intents based on their contents (already implemented by
processor-messages).
2023-07-16 20:34:35 -04:00
akildemir
23b9d57305 add polyseed support (#257)
* add polyseed support

* fix pr comments

* fix tests

* Embed the mempool into the Blockchain

* Plan scheduled payments whenever outputs are received

The scheduler prior waited for the next series of payments to be added.

* Replace Tendermint step with sync_block

Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.

* Finish binding Tendermint into Tributary and define a Tributary master object

* Add correction the last commit missed

* Add DoS limits to tributary and require provided transactions be ordered

* Fix the scheduler from dropping UTXOs when there weren't any payments

* Documentation and cargo update

* Add a dedicated db crate with a basic DB trait

It's needed by the processor and tributary (coordinator).

* Add a DB to Tributary

Adds support for reloading most of the blockchain.

* Reloaded provided transactions from the disk

Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.

* must_use annotations on DbTxn

* Support reloading the mempool from disk

* Add a NewSet event to validator-sets

Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.

* Add basic getters to tributary

* cargo update

* Update to the latest subxt

Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.

* Make unsigned private due to unsafe calling potential

* Start defining the coordinator

* Merge AckBlock with Burns

Offers greater efficiency while reducing concerns re: atomicity.

* Correct processor flow to have the coordinator decide signing set/re-attempts

The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.

* cargo +nightly fmt

* cargo update

Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.

* Support extracting timestamps from blocks

* Make progress on handling NewSet events

Further bones out the coordinator.

* Resolve #245

* Have InInstructions track the latest block for a network in storage

* Fill out code for the rest of the Substrate events

* Clean up the Substrate block processing code

* Rename transaction file to tributary, add function for genesis

* Add a processor API to the coordinator

* Add extensive commentary on mutable to the processor's main file

Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.

* Move ConfirmKeyPair from key_gen to substrate

Clarifies the emitter and accordingly why its mutations are justified.

* Remove BatchSigned

SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.

* Add note to processor_messages

* Use a single txn for an entire coordinator message

Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.

* cargo update to remove usage of yanked crate

* Clarify safety of Scanner::block_number and KeyGen::keys

* Tweak ConfirmKeyPair to alleviate database requirements of coordinator

* Use an enum for Coin/NetworkId

It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.

* Code a method to determine the activation block before any block has consensus

[0; 32] is a magic value for no block having been set yet, due to this being
the first key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.

* automate whitespace & trimming test cases

* Save keys by their tweaked group_key

Keys are referred to by their tweaked versions. If a tweak was needed, keys
would fail to confirm.

* Use crypto-bigint's reduction in ed448

Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.

* Move the entirety of ed448 to Residue, offering a further 2-4x speedup

* Resolve #68

Notably speeds up monero-serai's build and CLSAG performance.

* Make MainDB into SubstrateDB

* Initial Tributary handling

* Add additional checks to key_gen/sign

There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.

* Apply DKG TX handling code to all sign TXs

The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.

* Split FinalizedBlock into ExternalBlock and SeraiBlock

Also re-arranges their orders.

* Add support for multiple orderings in Provided

Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.

* Slash on unrecognized ID

* ExternalBlock handler

* Add a SubstrateBlockAck message to the processor

When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.

* Route the SubstrateBlock message, which is the last Tributary transaction type

* Add recent bloat checks added to signer to substrate_signer as well

* Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators

transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.

* Add a test to the coordinator for running a Tributary

Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.

* Add a test for Tributary

Further fleshes out the Tributary testing code.

* Test handling of DKG commitments transactions

* Add Transaction::sign.

While I don't love the introduction of empty_signed, it's practically fine.

* Tributary test wait_for_tx_inclusion function

* Additionally test DKGShares

* Handle adding new Tributaries

Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.

* Reload Tributaries

add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.

* Start handling P2P messages

This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.

* Clarify Arc RwLocks and sleeps in coordinator

* Send a heartbeat message when a Tributary falls behind

* cargo fmt

* cargo update

* Move json word lists to rs

Allows building the seed code without serde_json.

* Break coordinator main into multiple functions

Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.

* Remove reliance on a blockchain read lock from block/commit

* Implement Tributary syncing

Also adds a forwards-lookup to the Tributary blockchain.

* Don't return from sync_block until the Tendermint machine returns if it's valid or not

We had a race condition where we'd be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.

* Test handle_p2p and Tributary syncing

Includes bug fixes.

* Tweak tests workflow

* Add a TributaryReader which doesn't require a borrow to operate

Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary in
the database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.

* Document panic in FROST

* Document a pair of panics requiring 256 GB of RAM/4 GB of a context

* Add a UID function to messages

When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.

* Initial code to handle messages from processors

* Document the processor/tributary/coordinator/serai flow

* Have Coordinator MainDb take a mutable borrow

* Update to substrate polkadot-v0.9.42

* Correct error message in ff-group-tests

* Update to May's nightly

Doesn't use the PR due to the needed changes.

* Support arbitrary RPC providers in monero-serai

Sets a clean path for no-std premised RPCs (buffers to an external RPC impl) /
Tor-based RPCs / client-side load balancing / ...

* Correct processor's handling of the new Monero RPC code

* Correct Serai Dockerfile

* Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged

Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.

* Rename uid to intent

* Use U448 for Ed448 instead of U512

* Spawn a new async task for each block message

This probably should be done with n long-lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.

* Move the coordinator to a n-processor design

* Ensure Tributary commits are minimal

* Properly get genesis for a Processor message

* Create a vote transaction upon GeneratedKeyPair

* Remove TODO about code de-duplication

It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.

* Have coordinator publish batches to Substrate

* Implement MuSig key aggregation into DKG

Isn't spec compliant due to the lack of a spec to be compliant with.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.

* Correct 2/3rds definitions throughout the codebase

The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67. See the sketch after
this log entry.

* cargo update

Resolves a yanked crate and removes some duplicated dependencies.

* Add a dedicated function to get a MuSig key

* Do the minimal amount of work for dkg to compile under no-std

The Substrate runtime requires access to the MuSig key aggregation function.

\#279 related.

* Use a MuSig signature to publish validator set key pairs to Serai

The processor/coordinator flow still has to be rewritten.

* Correct various no_std definitions

* Add a context to MuSig key aggregation

* Use proper messages for ValidatorSets/InInstructions pallet

Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.

* Remove signed Substrate TXs from Coordinator

* Only scan v2 Monero TXs

* Fix for prior commit

* Ensure canonical points in the cross-group DLEq proof

* Fix incorrect sig_hash generation

sig_hash was used as a challenge. Challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.

* cargo update

Resolves an openssl advisory and nets ~-8 crates.

* Build no-std tests with RISC-V 32 IMAC

Turns out wasm still has std, making it suboptimal to use here.

* Pin setup-protoc to v2.0.0

* Update to substrate polkadot-v0.9.43

* fix tributary sync test

* Slight terminology correction in sync test

Also correct a mistake from merging the most recent polkadot version.

* Update nightly

* Replace lazy_static with OnceLock inside monero-serai

lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.

* no-std support for monero-serai (#311)

* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt

* Represent RCT amounts with None, not 0.

Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.

* Add a message queue

This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written as never fail.

Primarily needs testing and a proper ID.

* cargo update

Resolves https://github.com/serai-dex/serai/security/dependabot/29

* Correct deny.toml with inclusion of message-queue

* Update nightly

* std-shims: fix `Read` for &[u8]

* Use serai- prefixes on Serai-specific packages

Fixes deny.toml, also runs a minor cargo update shrinking the tree.

* Update monero-tests workflow to new name for the processor

* Correct depends for processor-messages

* Disable Rust caching

We hit the cache limit after just one or two builds, making it infeasible.

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

* Monero: support for legacy transactions (#308)

* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing.
It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Fix the known issue with the DSA

I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).

* Add a bin to download a chain, over RPC, reserializing and hashing every item

Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.

* Correct how Monero integration tests are run

* Support multiple RPCs in the reserialize_chain bin

* Don't call get_height every block

* Modify get_transactions to split requests as to not hit the restricted RPC limits

* Meaningful changes from aggressive-clippy

I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.

* Extend reserialize_chain with CLSAG/BP(+) verification

* Remove spammy println from reserialize_chain

* Update reserialize_chain for v1 and migration TXs

Also always marks 0-amount inputs as RCT due to impossibility of non-RCT
0-amount outputs.

* Only deserialize RctSignatures where there's at least one input

This is only enforced by the Monero protocol due to a single check that the
mixRing isn't empty in get_pre_mlsag_hash. The value in ensuring there's at least one
input is to ensure the safety of our rct_type functions, which determines the
RctType based off structural analysis (specifically, input data if
MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.

* docs/Getting Started.md: cargo build --release --all-features

* Fix the known instance of #295

* Bind RocksDB into serai-db

* Split up tests in CI to avoid node storage limits

* Corrections to prior commit

* Again

I called git commit --amend without calling git add . again :(

* Update the flow for completed signing processes

Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.

* Clean Polyseed code

* Final tweaks

* Correct no-std builds for Polyseed

* Again correct no-std

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Co-authored-by: GitHub Actions <unknown>
Co-authored-by: Boog900 <54e72d8a-345f-4599-bd90-c6b9bc7d0ec5@aleeas.com>
Co-authored-by: Boog900 <108027008+Boog900@users.noreply.github.com>
Co-authored-by: Steven Chang <stevenchang5000@gmail.com>
2023-07-16 07:25:17 -04:00
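The threshold arithmetic from the "Correct 2/3rds definitions" item above, as a sketch (whether this matches the committed code exactly is an assumption; it is one formulation which stays strictly above 2n/3):

  // Broken: n = 20 -> (20 / 3) * 2 + 1 = 13, only 65% of 20.
  fn broken_threshold(n: u64) -> u64 { (n / 3) * 2 + 1 }

  // Multiplying before dividing keeps the result strictly above 2n/3:
  // n = 20 -> (20 * 2) / 3 + 1 = 14, which is 70%.
  fn threshold(n: u64) -> u64 { ((n * 2) / 3) + 1 }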
Luke Parker
807ec30762 Update the flow for completed signing processes
Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.
2023-07-14 14:05:12 -04:00
Luke Parker
5424886d63 Again
I called git commit --amend without calling git add . again :(
2023-07-14 13:13:38 -04:00
Luke Parker
2bebe0755d Corrections to prior commit 2023-07-14 13:11:01 -04:00
Luke Parker
f0ce6e6388 Split up tests in CI to avoid node storage limits 2023-07-14 13:02:58 -04:00
Luke Parker
62504b2622 Bind RocksDB into serai-db 2023-07-13 19:09:11 -04:00
Luke Parker
c9bb284570 Fix the known instance of #295 2023-07-13 14:02:57 -04:00
Steven Chang
6a4bccba74 docs/Getting Started.md: cargo build --release --all-features 2023-07-13 13:04:51 -04:00
Luke Parker
df67b7d94c 3.13 Better document the offset mapping 2023-07-10 15:02:34 -04:00
Luke Parker
677b9b681f 3.9/3.10. 3.9: Remove cast which fails on a several-GB malicious TX
3.10 has its impossibility documented. A malicious RPC cannot affect this code.
2023-07-10 14:44:18 -04:00
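The pattern behind 3.9, sketched (illustrative, not the audited call site): an `as` cast silently truncates once a malicious TX pushes a length past the target type's range, while a checked conversion surfaces it as an error:

  // Buggy pattern: a several-GB TX overflows the cast and truncates silently.
  // let len = tx_bytes.len() as u32;
  // Fixed pattern: out-of-range lengths become an explicit error.
  let len = u32::try_from(tx_bytes.len()).map_err(|_| "TX length exceeded u32")?;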
Luke Parker
fa1b569b78 3.8 Document termination of unbounded loop 2023-07-10 14:34:32 -04:00
Luke Parker
d75115ce13 3.7 Replace unwraps with expects
Doesn't replace unwraps on integer conversions.
2023-07-10 14:02:59 -04:00
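Sketched generically (not a specific call site), the difference is solely that the panic message documents the invariant:

  // Before: panics with a generic "called `Option::unwrap()` on a `None` value".
  // let key = keys.get(&participant).unwrap();
  // After: panics naming the assumption which was violated.
  let key = keys.get(&participant).expect("participant in the set didn't have keys");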
Luke Parker
3d00d405a3 3.5 (handled with 3.1) 2023-07-10 13:35:47 -04:00
Luke Parker
3480fc5e16 3.4 2023-07-10 13:33:08 -04:00
Luke Parker
7fa5d291b8 Implement a more robust validity check on connection creation 2023-07-09 15:49:35 -04:00
Luke Parker
b54548b13a Only deserialize RctSignatures where there's at least one input
This is only enforced by the Monero protocol due to a single check that the
mixRing isn't empty in get_pre_mlsag_hash. The value in ensuring there's at
least one input is to ensure the safety of our rct_type functions, which
determine the RctType based off structural analysis (specifically, input data
if MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.
2023-07-09 00:44:23 -04:00
Luke Parker
c878d38c60 3.1 2023-07-08 21:48:18 -04:00
Luke Parker
5d9067b84d Update reserialize_chain for v1 and migration TXs
Also always marks 0-amount inputs as RCT due to the impossibility of non-RCT
0-amount outputs.
2023-07-08 21:09:22 -04:00
Luke Parker
13f48a406e Remove spammy println from reserialize_chain 2023-07-08 20:30:45 -04:00
Luke Parker
35fcd11096 Extend reserialize_chain with CLSAG/BP(+) verification 2023-07-08 20:29:55 -04:00
Luke Parker
93b1656f86 Meaningful changes from aggressive-clippy
I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.
2023-07-08 11:29:07 -04:00
1047 changed files with 111086 additions and 32950 deletions


@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: 24.0.1
+    default: "27.0"
 runs:
   using: "composite"
   steps:
     - name: Bitcoin Daemon Cache
       id: cache-bitcoind
-      uses: actions/cache@v3
+      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
       with:
         path: bitcoin.tar.gz
         key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
@@ -37,11 +37,4 @@ runs:
     - name: Bitcoin Regtest Daemon
       shell: bash
-      run: |
-        RPC_USER=serai
-        RPC_PASS=seraidex
-        bitcoind -txindex -regtest \
-          -rpcuser=$RPC_USER -rpcpassword=$RPC_PASS \
-          -rpcbind=127.0.0.1 -rpcbind=$(hostname) -rpcallowip=0.0.0.0/0 \
-          -daemon
+      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon


@@ -1,43 +1,49 @@
 name: build-dependencies
 description: Installs build dependencies for Serai
-inputs:
-  github-token:
-    description: "GitHub token to install Protobuf with"
-    require: true
-    default:
-  rust-toolchain:
-    description: "Rust toolchain to install"
-    required: false
-    default: stable
-  rust-components:
-    description: "Rust components to install"
-    required: false
-    default:
 runs:
   using: "composite"
   steps:
-    - name: Install Protobuf
-      uses: arduino/setup-protoc@v2.0.0
-      with:
-        repo-token: ${{ inputs.github-token }}
+    - name: Remove unused packages
+      shell: bash
+      run: |
+        sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
+        sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
+        sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
+        sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
+        sudo apt autoremove -y
+        sudo apt clean
+        docker system prune -a --volumes
+      if: runner.os == 'Linux'
+    - name: Remove unused packages
+      shell: bash
+      run: |
+        (gem uninstall -aIx) || (exit 0)
+        brew uninstall --force "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
+        brew uninstall --force "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
+        brew uninstall --force "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
+        brew uninstall --force "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
+        brew cleanup
+      if: runner.os == 'macOS'
+    - name: Install dependencies
+      shell: bash
+      run: |
+        if [ "$RUNNER_OS" == "Linux" ]; then
+          sudo apt install -y ca-certificates protobuf-compiler
+        elif [ "$RUNNER_OS" == "Windows" ]; then
+          choco install protoc
+        elif [ "$RUNNER_OS" == "macOS" ]; then
+          brew install protobuf
+        fi
     - name: Install solc
       shell: bash
       run: |
-        pip3 install solc-select==0.2.1
-        solc-select install 0.8.16
-        solc-select use 0.8.16
+        cargo install svm-rs
+        svm install 0.8.26
+        svm use 0.8.26
-    - name: Install Rust
-      uses: dtolnay/rust-toolchain@master
-      with:
-        toolchain: ${{ inputs.rust-toolchain }}
-        components: ${{ inputs.rust-components }}
-        targets: wasm32-unknown-unknown, riscv32imac-unknown-none-elf
     # - name: Cache Rust
-    #   uses: Swatinem/rust-cache@v2
+    #   uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43


@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.2.0
+    default: v0.18.3.4
 runs:
   using: "composite"
   steps:
     - name: Monero Wallet RPC Cache
       id: cache-monero-wallet-rpc
-      uses: actions/cache@v3
+      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
       with:
         path: monero-wallet-rpc
         key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
@@ -41,4 +41,9 @@ runs:
     - name: Monero Wallet RPC
       shell: bash
-      run: ./monero-wallet-rpc --disable-rpc-login --rpc-bind-port 6061 --allow-mismatched-daemon-version --wallet-dir ./ --detach
+      run: |
+        ./monero-wallet-rpc --allow-mismatched-daemon-version \
+          --daemon-address 0.0.0.0:18081 --daemon-login serai:seraidex \
+          --disable-rpc-login --rpc-bind-port 18082 \
+          --wallet-dir ./ \
+          --detach


@@ -5,16 +5,16 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.2.0
+    default: v0.18.3.4
 runs:
   using: "composite"
   steps:
     - name: Monero Daemon Cache
       id: cache-monerod
-      uses: actions/cache@v3
+      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
       with:
-        path: monerod
+        path: /usr/bin/monerod
         key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
     - name: Download the Monero Daemon
@@ -37,8 +37,10 @@ runs:
         wget https://downloads.getmonero.org/cli/$FILE
         tar -xvf $FILE
-        mv monero-x86_64-linux-gnu-${{ inputs.version }}/monerod monerod
+        sudo mv monero-x86_64-linux-gnu-${{ inputs.version }}/monerod /usr/bin/monerod
+        sudo chmod 777 /usr/bin/monerod
+        sudo chmod +x /usr/bin/monerod
     - name: Monero Regtest Daemon
       shell: bash
-      run: ./monerod --regtest --offline --fixed-difficulty=1 --detach
+      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/monero/run.sh --detach


@@ -2,33 +2,27 @@ name: test-dependencies
 description: Installs test dependencies for Serai
 inputs:
-  github-token:
-    description: "GitHub token to install Protobuf with"
-    require: true
-    default:
   monero-version:
     description: "Monero version to download and run as a regtest node"
     required: false
-    default: v0.18.2.0
+    default: v0.18.3.4
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
     required: false
-    default: 24.0.1
+    default: "27.1"
 runs:
   using: "composite"
   steps:
     - name: Install Build Dependencies
       uses: ./.github/actions/build-dependencies
-      with:
-        github-token: ${{ inputs.github-token }}
     - name: Install Foundry
-      uses: foundry-rs/foundry-toolchain@v1
+      uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
       with:
-        version: nightly
+        version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
+        cache: false
     - name: Run a Monero Regtest Node
       uses: ./.github/actions/monero


@@ -1 +1 @@
-nightly-2023-07-01
+nightly-2024-07-01

.github/workflows/common-tests.yml (new file)

@@ -0,0 +1,34 @@
name: common/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
pull_request:
paths:
- "common/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p std-shims \
-p zalloc \
-p patchable-async-sleep \
-p serai-db \
-p serai-env \
-p serai-task \
-p simple-request

.github/workflows/coordinator-tests.yml (new file)

@@ -0,0 +1,40 @@
name: Coordinator Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "networks/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/coordinator/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "networks/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/coordinator/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run coordinator Docker tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-coordinator-tests

.github/workflows/crypto-tests.yml (new file)

@@ -0,0 +1,44 @@
name: crypto/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
workflow_dispatch:
jobs:
test-crypto:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p flexible-transcript \
-p ff-group-tests \
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p generalized-bulletproofs \
-p generalized-bulletproofs-circuit-abstraction \
-p ec-divisors \
-p generalized-bulletproofs-ec-gadgets \
-p dkg \
-p modular-frost \
-p frost-schnorrkel


@@ -9,17 +9,14 @@ jobs:
     name: Run cargo deny
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Advisory Cache
-        uses: actions/cache@v3
+        uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
         with:
           path: ~/.cargo/advisory-db
           key: rust-advisory-db
-      - name: Install cargo
-        uses: dtolnay/rust-toolchain@stable
       - name: Install cargo deny
         run: cargo install --locked cargo-deny

.github/workflows/full-stack-tests.yml (new file)

@@ -0,0 +1,22 @@
name: Full Stack Tests
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Full Stack Docker tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-full-stack-tests

.github/workflows/lint.yml (new file)

@@ -0,0 +1,114 @@
name: Lint
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
# Also verify the lockfile isn't dirty
# This happens when someone edits a Cargo.toml yet doesn't do anything
# which causes the lockfile to be updated
# The above clippy run will cause it to be updated, so checking there's
# no differences present now performs the desired check
- name: Verify lockfile
shell: bash
run: git diff | wc -l | LC_ALL="en_US.utf8" grep -x -e "^[ ]*0"
deny:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
- name: Run cargo deny
run: cargo deny -L error --all-features check
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -c rustfmt
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
- name: Install foundry
uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
with:
version: nightly-41d4e5437107f6f42c7711123890147bc736a609
cache: false
- name: Run forge fmt
run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
machete:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use
run: |
cargo install cargo-machete
cargo machete
slither:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Slither
run: |
python3 -m pip install solc-select
solc-select install 0.8.26
solc-select use 0.8.26
python3 -m pip install slither-analyzer
slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
slither processor/ethereum/deployer/contracts/Deployer.sol
slither processor/ethereum/erc20/contracts/IERC20.sol
cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/
cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
cd processor/ethereum/router/contracts
slither Router.sol


@@ -0,0 +1,36 @@
name: Message Queue Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/message-queue/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/message-queue/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run message-queue Docker tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-message-queue-tests

.github/workflows/mini-tests.yml (new file)

@@ -0,0 +1,26 @@
name: mini/ Tests
on:
push:
branches:
- develop
paths:
- "mini/**"
pull_request:
paths:
- "mini/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p mini-serai


@@ -5,28 +5,43 @@ on:
   push:
     branches:
       - develop
     paths:
-      - "coins/monero/**"
+      - "networks/monero/**"
       - "processor/**"
   pull_request:
     paths:
-      - "coins/monero/**"
+      - "networks/monero/**"
       - "processor/**"
+  workflow_dispatch:
 jobs:
   # Only run these once since they will be consistent regardless of any node
   unit-tests:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Test Dependencies
         uses: ./.github/actions/test-dependencies
-        with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
       - name: Run Unit Tests Without Features
-        run: cargo test --package monero-serai --lib
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-seed --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package polyseed --lib
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --lib
   # Doesn't run unit tests with features as the tests workflow will
@@ -35,25 +50,28 @@ jobs:
     # Test against all supported protocol versions
     strategy:
       matrix:
-        version: [v0.17.3.2, v0.18.2.0]
+        version: [v0.17.3.2, v0.18.3.4]
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Test Dependencies
         uses: ./.github/actions/test-dependencies
         with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
           monero-version: ${{ matrix.version }}
       - name: Run Integration Tests Without Features
-        # Runs with the binaries feature so the binaries build
-        # https://github.com/rust-lang/cargo/issues/8396
-        run: cargo test --package monero-serai --features binaries --test '*'
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --test '*'
       - name: Run Integration Tests
         # Don't run if the the tests workflow also will
-        if: ${{ matrix.version != 'v0.18.2.0' }}
+        if: ${{ matrix.version != 'v0.18.3.4' }}
         run: |
-          cargo test --package monero-serai --all-features --test '*'
-          cargo test --package serai-processor --all-features monero
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --all-features --test '*'


@@ -9,7 +9,7 @@ jobs:
     name: Update nightly
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
        with:
          submodules: "recursive"
@@ -28,7 +28,7 @@ jobs:
          git push -u origin $(date +"nightly-%Y-%m")
      - name: Pull Request
-        uses: actions/github-script@v6
+        uses: actions/github-script@d7906e4ad0b1822421a7e6a35d5ca353c962f410
        with:
          script: |
            const { repo, owner } = context.repo;

.github/workflows/msrv.yml (new file)

@@ -0,0 +1,259 @@
name: Weekly MSRV Check
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
msrv-common:
name: Run cargo msrv on common
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on common
run: |
cargo msrv verify --manifest-path common/zalloc/Cargo.toml
cargo msrv verify --manifest-path common/std-shims/Cargo.toml
cargo msrv verify --manifest-path common/env/Cargo.toml
cargo msrv verify --manifest-path common/db/Cargo.toml
cargo msrv verify --manifest-path common/task/Cargo.toml
cargo msrv verify --manifest-path common/request/Cargo.toml
cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml
msrv-crypto:
name: Run cargo msrv on crypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on crypto
run: |
cargo msrv verify --manifest-path crypto/transcript/Cargo.toml
cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml
cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml
cargo msrv verify --manifest-path crypto/ed448/Cargo.toml
cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml
cargo msrv verify --manifest-path crypto/dleq/Cargo.toml
cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml
cargo msrv verify --manifest-path crypto/dkg/Cargo.toml
cargo msrv verify --manifest-path crypto/frost/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml
msrv-networks:
name: Run cargo msrv on networks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on networks
run: |
cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db
cargo msrv verify --manifest-path networks/monero/io/Cargo.toml
cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml
cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/bulletproofs/Cargo.toml
cargo msrv verify --manifest-path networks/monero/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml
cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml
msrv-message-queue:
name: Run cargo msrv on message-queue
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db
msrv-processor:
name: Run cargo msrv on processor
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on processor
run: |
cargo msrv verify --manifest-path processor/view-keys/Cargo.toml
cargo msrv verify --manifest-path processor/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/messages/Cargo.toml
cargo msrv verify --manifest-path processor/scanner/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml
cargo msrv verify --manifest-path processor/key-gen/Cargo.toml
cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml
cargo msrv verify --manifest-path processor/signers/Cargo.toml
cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db
cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/Cargo.toml
cargo msrv verify --manifest-path processor/monero/Cargo.toml
msrv-coordinator:
name: Run cargo msrv on coordinator
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on coordinator
run: |
cargo msrv verify --manifest-path coordinator/tributary-sdk/tendermint/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary-sdk/Cargo.toml
cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml
cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/Cargo.toml
msrv-substrate:
name: Run cargo msrv on substrate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on substrate
run: |
cargo msrv verify --manifest-path substrate/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/abi/Cargo.toml
cargo msrv verify --manifest-path substrate/client/Cargo.toml
cargo msrv verify --manifest-path substrate/runtime/Cargo.toml
cargo msrv verify --manifest-path substrate/node/Cargo.toml
msrv-orchestration:
name: Run cargo msrv on orchestration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path orchestration/Cargo.toml
msrv-mini:
name: Run cargo msrv on mini
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on mini
run: |
cargo msrv verify --manifest-path mini/Cargo.toml

.github/workflows/networks-tests.yml (new file)

@@ -0,0 +1,52 @@
name: networks/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "networks/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "networks/**"
workflow_dispatch:
jobs:
test-networks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \
-p build-solidity-contracts \
-p ethereum-schnorr-contract \
-p alloy-simple-request-transport \
-p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-seed \
-p polyseed \
-p monero-wallet-util \
-p monero-serai-verify-chain


@@ -4,18 +4,32 @@ on:
   push:
     branches:
       - develop
+    paths:
+      - "common/**"
+      - "crypto/**"
+      - "networks/**"
+      - "tests/no-std/**"
   pull_request:
+    paths:
+      - "common/**"
+      - "crypto/**"
+      - "networks/**"
+      - "tests/no-std/**"
+  workflow_dispatch:
 jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
-        with:
-          github-token: ${{ inputs.github-token }}
+      - name: Install RISC-V Toolchain
+        run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
       - name: Verify no-std builds
-        run: cd tests/no-std && cargo build --target riscv32imac-unknown-none-elf
+        run: CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf -p serai-no-std-tests

.github/workflows/pages.yml (new file)

@@ -0,0 +1,90 @@
# MIT License
#
# Copyright (c) 2022 just-the-docs
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll site to Pages
on:
push:
branches:
- "develop"
paths:
- "docs/**"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow one concurrent deployment
concurrency:
group: "pages"
cancel-in-progress: true
jobs:
# Build job
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Ruby
uses: ruby/setup-ruby@v1
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
- name: Build with Jekyll
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: "docs/_site/"
# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2

.github/workflows/processor-tests.yml (new file)

@@ -0,0 +1,40 @@
name: Processor Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/processor/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/processor/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run processor Docker tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-processor-tests


@@ -0,0 +1,36 @@
name: Reproducible Runtime
on:
push:
branches:
- develop
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
pull_request:
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Reproducible Runtime tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests


@@ -4,82 +4,109 @@ on:
   push:
     branches:
       - develop
+    paths:
+      - "common/**"
+      - "crypto/**"
+      - "networks/**"
+      - "message-queue/**"
+      - "processor/**"
+      - "coordinator/**"
+      - "substrate/**"
   pull_request:
+    paths:
+      - "common/**"
+      - "crypto/**"
+      - "networks/**"
+      - "message-queue/**"
+      - "processor/**"
+      - "coordinator/**"
+      - "substrate/**"
   workflow_dispatch:
 jobs:
-  clippy:
+  test-infra:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - name: Get nightly version to use
-        id: nightly
-        run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
-        with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-          rust-toolchain: ${{ steps.nightly.outputs.version }}
-          rust-components: clippy
-      - name: Run Clippy
-        # Allow dbg_macro when run locally, yet not when pushed
-        run: cargo clippy --all-features --all-targets -- -D clippy::dbg_macro $(grep "\S" ../../clippy-config | grep -v "#")
-  deny:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Advisory Cache
-        uses: actions/cache@v3
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
-      - name: Install cargo
-        uses: dtolnay/rust-toolchain@stable
-      - name: Install cargo deny
-        run: cargo install --locked cargo-deny
-      - name: Run cargo deny
-        run: cargo deny -L error --all-features check
-  test:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: actions/checkout@v3
-      - name: Test Dependencies
-        uses: ./.github/actions/test-dependencies
-        with:
-          github-token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Build node
-        run: |
-          cd substrate/node
-          cargo build
       - name: Run Tests
-        run: GITHUB_CI=true cargo test --all-features
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
+            -p serai-message-queue \
+            -p serai-processor-messages \
+            -p serai-processor-key-gen \
+            -p serai-processor-view-keys \
+            -p serai-processor-frost-attempt-manager \
+            -p serai-processor-primitives \
+            -p serai-processor-scanner \
+            -p serai-processor-scheduler-primitives \
+            -p serai-processor-utxo-scheduler-primitives \
+            -p serai-processor-utxo-scheduler \
+            -p serai-processor-transaction-chaining-scheduler \
+            -p serai-processor-smart-contract-scheduler \
+            -p serai-processor-signers \
+            -p serai-processor-bin \
+            -p serai-bitcoin-processor \
+            -p serai-processor-ethereum-primitives \
+            -p serai-processor-ethereum-test-primitives \
+            -p serai-processor-ethereum-deployer \
+            -p serai-processor-ethereum-router \
+            -p serai-processor-ethereum-erc20 \
+            -p serai-ethereum-processor \
+            -p serai-monero-processor \
+            -p tendermint-machine \
+            -p tributary-sdk \
+            -p serai-cosign \
+            -p serai-coordinator-substrate \
+            -p serai-coordinator-tributary \
+            -p serai-coordinator-p2p \
+            -p serai-coordinator-libp2p-p2p \
+            -p serai-coordinator \
+            -p serai-orchestrator \
+            -p serai-docker-tests
-  fmt:
+  test-substrate:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v3
-      - name: Get nightly version to use
-        id: nightly
-        run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
-      - name: Install rustfmt
-        uses: dtolnay/rust-toolchain@master
-        with:
-          toolchain: ${{ steps.nightly.outputs.version }}
-          components: rustfmt
-      - name: Run rustfmt
-        run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - name: Build Dependencies
+        uses: ./.github/actions/build-dependencies
+      - name: Run Tests
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
+            -p serai-primitives \
+            -p serai-coins-primitives \
+            -p serai-coins-pallet \
+            -p serai-dex-pallet \
+            -p serai-validator-sets-primitives \
+            -p serai-validator-sets-pallet \
+            -p serai-genesis-liquidity-primitives \
+            -p serai-genesis-liquidity-pallet \
+            -p serai-emissions-primitives \
+            -p serai-emissions-pallet \
+            -p serai-economic-security-pallet \
+            -p serai-in-instructions-primitives \
+            -p serai-in-instructions-pallet \
+            -p serai-signals-primitives \
+            -p serai-signals-pallet \
+            -p serai-abi \
+            -p serai-runtime \
+            -p serai-node
+  test-serai-client:
+    runs-on: ubuntu-latest
+    steps:
+      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - name: Build Dependencies
+        uses: ./.github/actions/build-dependencies
+      - name: Run Tests
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore

@@ -1,2 +1,7 @@
 target
+Dockerfile
+Dockerfile.fast-epoch
+!orchestration/runtime/Dockerfile
+.test-logs
 .vscode


@@ -1,3 +1,4 @@
+edition = "2021"
 tab_spaces = 2
 max_width = 100

Cargo.lock (generated; diff suppressed because it is too large)


@@ -1,8 +1,27 @@
 [workspace]
+resolver = "2"
 members = [
+  # Version patches
+  "patches/parking_lot_core",
+  "patches/parking_lot",
+  "patches/zstd",
+  "patches/rocksdb",
+  # std patches
+  "patches/matches",
+  "patches/is-terminal",
+  # Rewrites/redirects
+  "patches/option-ext",
+  "patches/directories-next",
   "common/std-shims",
   "common/zalloc",
+  "common/patchable-async-sleep",
   "common/db",
+  "common/env",
+  "common/task",
+  "common/request",
   "crypto/transcript",
@@ -12,61 +31,234 @@ members = [
   "crypto/ciphersuite",
   "crypto/multiexp",
   "crypto/schnorr",
   "crypto/dleq",
+  "crypto/evrf/secq256k1",
+  "crypto/evrf/embedwards25519",
+  "crypto/evrf/generalized-bulletproofs",
+  "crypto/evrf/circuit-abstraction",
+  "crypto/evrf/divisors",
+  "crypto/evrf/ec-gadgets",
   "crypto/dkg",
   "crypto/frost",
   "crypto/schnorrkel",
-  "coins/ethereum",
-  "coins/monero/generators",
-  "coins/monero",
+  "networks/bitcoin",
+  "networks/ethereum/build-contracts",
+  "networks/ethereum/schnorr",
+  "networks/ethereum/alloy-simple-request-transport",
+  "networks/ethereum/relayer",
+  "networks/monero/io",
+  "networks/monero/generators",
+  "networks/monero/primitives",
+  "networks/monero/ringct/mlsag",
+  "networks/monero/ringct/clsag",
+  "networks/monero/ringct/borromean",
+  "networks/monero/ringct/bulletproofs",
+  "networks/monero",
+  "networks/monero/rpc",
+  "networks/monero/rpc/simple-request",
+  "networks/monero/wallet/address",
+  "networks/monero/wallet",
+  "networks/monero/wallet/seed",
+  "networks/monero/wallet/polyseed",
+  "networks/monero/wallet/util",
+  "networks/monero/verify-chain",
   "message-queue",
   "processor/messages",
-  "processor",
-  "coordinator/tributary/tendermint",
+  "processor/key-gen",
+  "processor/view-keys",
+  "processor/frost-attempt-manager",
+  "processor/primitives",
+  "processor/scanner",
+  "processor/scheduler/primitives",
+  "processor/scheduler/utxo/primitives",
+  "processor/scheduler/utxo/standard",
+  "processor/scheduler/utxo/transaction-chaining",
+  "processor/scheduler/smart-contract",
+  "processor/signers",
+  "processor/bin",
+  "processor/bitcoin",
+  "processor/ethereum/primitives",
+  "processor/ethereum/test-primitives",
+  "processor/ethereum/deployer",
+  "processor/ethereum/router",
+  "processor/ethereum/erc20",
+  "processor/ethereum",
+  "processor/monero",
+  "coordinator/tributary-sdk/tendermint",
+  "coordinator/tributary-sdk",
+  "coordinator/cosign",
+  "coordinator/substrate",
   "coordinator/tributary",
+  "coordinator/p2p",
+  "coordinator/p2p/libp2p",
   "coordinator",
   "substrate/primitives",
-  "substrate/tokens/primitives",
-  "substrate/tokens/pallet",
+  "substrate/coins/primitives",
+  "substrate/coins/pallet",
+  "substrate/dex/pallet",
+  "substrate/validator-sets/primitives",
+  "substrate/validator-sets/pallet",
+  "substrate/genesis-liquidity/primitives",
+  "substrate/genesis-liquidity/pallet",
+  "substrate/emissions/primitives",
+  "substrate/emissions/pallet",
+  "substrate/economic-security/pallet",
   "substrate/in-instructions/primitives",
   "substrate/in-instructions/pallet",
-  "substrate/validator-sets/primitives",
-  "substrate/validator-sets/pallet",
+  "substrate/signals/primitives",
+  "substrate/signals/pallet",
+  "substrate/abi",
   "substrate/runtime",
   "substrate/node",
   "substrate/client",
+  "orchestration",
+  "mini",
   "tests/no-std",
+  "tests/docker",
+  "tests/message-queue",
+  "tests/processor",
+  "tests/coordinator",
+  "tests/full-stack",
+  "tests/reproducible-runtime",
 ]
 # Always compile Monero (and a variety of dependencies) with optimizations due
 # to the extensive operations required for Bulletproofs
 [profile.dev.package]
 subtle = { opt-level = 3 }
-curve25519-dalek = { opt-level = 3 }
 ff = { opt-level = 3 }
 group = { opt-level = 3 }
 crypto-bigint = { opt-level = 3 }
+secp256k1 = { opt-level = 3 }
+curve25519-dalek = { opt-level = 3 }
 dalek-ff-group = { opt-level = 3 }
 minimal-ed448 = { opt-level = 3 }
 multiexp = { opt-level = 3 }
-monero-serai = { opt-level = 3 }
+secq256k1 = { opt-level = 3 }
+embedwards25519 = { opt-level = 3 }
+generalized-bulletproofs = { opt-level = 3 }
+generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
+ec-divisors = { opt-level = 3 }
+generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
+dkg = { opt-level = 3 }
+monero-generators = { opt-level = 3 }
+monero-borromean = { opt-level = 3 }
+monero-bulletproofs = { opt-level = 3 }
+monero-mlsag = { opt-level = 3 }
+monero-clsag = { opt-level = 3 }
 [profile.release]
 panic = "unwind"
+[patch.crates-io]
+# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
+lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
+parking_lot_core = { path = "patches/parking_lot_core" }
+parking_lot = { path = "patches/parking_lot" }
+# wasmtime pulls in an old version for this
+zstd = { path = "patches/zstd" }
+# Needed for WAL compression
+rocksdb = { path = "patches/rocksdb" }
+# is-terminal now has an std-based solution with an equivalent API
+is-terminal = { path = "patches/is-terminal" }
+# So does matches
+matches = { path = "patches/matches" }
+# directories-next was created because directories was unmaintained
+# directories-next is now unmaintained while directories is maintained
+# The directories author pulls in ridiculously pointless crates and prefers
+# copyleft licenses
+# The following two patches resolve everything
+option-ext = { path = "patches/option-ext" }
+directories-next = { path = "patches/directories-next" }
+# The official pasta_curves repo doesn't support Zeroize
+pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
+[workspace.lints.clippy]
+unwrap_or_default = "allow"
+map_unwrap_or = "allow"
+borrow_as_ptr = "deny"
+cast_lossless = "deny"
+cast_possible_truncation = "deny"
+cast_possible_wrap = "deny"
+cast_precision_loss = "deny"
+cast_ptr_alignment = "deny"
+cast_sign_loss = "deny"
+checked_conversions = "deny"
+cloned_instead_of_copied = "deny"
+enum_glob_use = "deny"
+expl_impl_clone_on_copy = "deny"
+explicit_into_iter_loop = "deny"
+explicit_iter_loop = "deny"
+flat_map_option = "deny"
+float_cmp = "deny"
+fn_params_excessive_bools = "deny"
+ignored_unit_patterns = "deny"
+implicit_clone = "deny"
+inefficient_to_string = "deny"
+invalid_upcast_comparisons = "deny"
+large_stack_arrays = "deny"
+linkedlist = "deny"
+macro_use_imports = "deny"
+manual_instant_elapsed = "deny"
+manual_let_else = "deny"
+manual_ok_or = "deny"
+manual_string_new = "deny"
+match_bool = "deny"
+match_same_arms = "deny"
+missing_fields_in_debug = "deny"
+needless_continue = "deny"
+needless_pass_by_value = "deny"
+ptr_cast_constness = "deny"
+range_minus_one = "deny"
+range_plus_one = "deny"
+redundant_closure_for_method_calls = "deny"
+redundant_else = "deny"
+string_add_assign = "deny"
+string_slice = "deny"
+unchecked_duration_subtraction = "deny"
+uninlined_format_args = "deny"
+unnecessary_box_returns = "deny"
+unnecessary_join = "deny"
+unnecessary_wraps = "deny"
+unnested_or_patterns = "deny"
+unused_async = "deny"
+unused_self = "deny"
+zero_sized_map_values = "deny"


@@ -5,26 +5,32 @@ Bitcoin, Ethereum, DAI, and Monero, offering a liquidity-pool-based trading
 experience. Funds are stored in an economically secured threshold-multisig
 wallet.
-[Getting Started](docs/Getting%20Started.md)
+[Getting Started](spec/Getting%20Started.md)
 ### Layout
 - `audits`: Audits for various parts of Serai.
-- `docs`: Documentation on the Serai protocol.
+- `spec`: The specification of the Serai protocol, both internally and as
+  networked.
+- `docs`: User-facing documentation on the Serai protocol.
 - `common`: Crates containing utilities common to a variety of areas under
   Serai, none neatly fitting under another category.
 - `crypto`: A series of composable cryptographic libraries built around the
-  `ff`/`group` APIs achieving a variety of tasks. These range from generic
+  `ff`/`group` APIs, achieving a variety of tasks. These range from generic
   infrastructure, to our IETF-compliant FROST implementation, to a DLEq proof as
   needed for Bitcoin-Monero atomic swaps.
-- `coins`: Various coin libraries intended for usage in Serai yet also by the
+- `networks`: Various libraries intended for usage in Serai yet also by the
   wider community. This means they will always support the functionality Serai
   needs, yet won't disadvantage other use cases when possible.
+- `message-queue`: An ordered message server so services can talk to each other,
+  even when the other is offline.
 - `processor`: A generic chain processor to process data for Serai and process
   events from Serai, executing transactions as expected and needed.
@@ -33,12 +39,28 @@ wallet.
 - `substrate`: Substrate crates used to instantiate the Serai network.
-- `deploy`: Scripts to deploy a Serai node/test environment.
+- `orchestration`: Dockerfiles and scripts to deploy a Serai node/test
+  environment.
+- `tests`: Tests for various crates. Generally, `crate/src/tests` is used, or
+  `crate/tests`, yet any tests requiring crates' binaries are placed here.
+### Security
+Serai hosts a bug bounty program via
+[Immunefi](https://immunefi.com/bounty/serai/). For in-scope critical
+vulnerabilities, we will reward whitehats with up to $30,000.
+Anything not in-scope should still be submitted through Immunefi, with rewards
+issued at the discretion of the Immunefi program managers.
 ### Links
+- [Website](https://serai.exchange/): https://serai.exchange/
+- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
 - [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
 - [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
 - [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
-- [Matrix](https://matrix.to/#/#serai:matrix.org):
-  https://matrix.to/#/#serai:matrix.org
+- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
+- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/
+- [Telegram](https://t.me/SeraiDEX): https://t.me/SeraiDEX


@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2023 Cypher Stack
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal


@@ -0,0 +1,7 @@
# Cypher Stack /networks/bitcoin Audit, August 2023
This audit was over the `/networks/bitcoin` folder (at the time located at
`/coins/bitcoin`). It is encompassing up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.


@@ -1,51 +0,0 @@
# No warnings allowed
-D warnings
# nursery
-D clippy::nursery
# Erratic and unhelpful
-A clippy::missing_const_for_fn
# Too many false/irrelevant positives
-A clippy::redundant_pub_crate
# Flags on any debug_assert using an RNG
-A clippy::debug_assert_with_mut_call
# Stylistic preference
-A clippy::option_if_let_else
# pedantic
-D clippy::unnecessary_wraps
-D clippy::unused_async
-D clippy::unused_self
# restrictions
# Safety
-D clippy::as_conversions
-D clippy::disallowed_script_idents
-D clippy::wildcard_enum_match_arm
# Clarity
-D clippy::assertions_on_result_states
-D clippy::deref_by_slicing
-D clippy::empty_structs_with_brackets
-D clippy::get_unwrap
-D clippy::rest_pat_in_fully_bound_structs
-D clippy::semicolon_inside_block
-D clippy::tests_outside_test_module
# Quality
-D clippy::format_push_string
-D clippy::string_to_string
# These potentially should be enabled in the future
# -D clippy::missing_errors_doc
# -D clippy::missing_panics_doc
# -D clippy::doc_markdown
# TODO: Enable this
# -D clippy::cargo
# Not in nightly yet
# -D clippy::redundant_type_annotations
# -D clippy::big_endian_bytes
# -D clippy::host_endian_bytes


@@ -1,37 +0,0 @@
[package]
name = "bitcoin-serai"
version = "0.2.0"
description = "A Bitcoin library for FROST-signing transactions"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
[dependencies]
lazy_static = "1"
thiserror = "1"
zeroize = "^1.5"
rand_core = "0.6"
sha2 = "0.10"
secp256k1 = { version = "0.27", features = ["global-context"] }
bitcoin = { version = "0.30", features = ["serde"] }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic", "bits"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", features = ["recommended"] }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["secp256k1"] }
hex = "0.4"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.11", features = ["json"] }
[dev-dependencies]
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["tests"] }
tokio = { version = "1", features = ["full"] }
[features]
hazmat = []


@@ -1,160 +0,0 @@
use core::fmt::Debug;
use std::io;
use lazy_static::lazy_static;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use sha2::{Digest, Sha256};
use transcript::Transcript;
use secp256k1::schnorr::Signature;
use k256::{
elliptic_curve::{
ops::Reduce,
sec1::{Tag, ToEncodedPoint},
},
U256, Scalar, ProjectivePoint,
};
use frost::{
curve::{Ciphersuite, Secp256k1},
Participant, ThresholdKeys, ThresholdView, FrostError,
algorithm::{Hram as HramTrait, Algorithm, Schnorr as FrostSchnorr},
};
use bitcoin::key::XOnlyPublicKey;
/// Get the x coordinate of a non-infinity, even point. Panics on invalid input.
pub fn x(key: &ProjectivePoint) -> [u8; 32] {
let encoded = key.to_encoded_point(true);
assert_eq!(encoded.tag(), Tag::CompressedEvenY, "x coordinate of odd key");
(*encoded.x().expect("point at infinity")).into()
}
/// Convert a non-infinite even point to a XOnlyPublicKey. Panics on invalid input.
pub fn x_only(key: &ProjectivePoint) -> XOnlyPublicKey {
XOnlyPublicKey::from_slice(&x(key)).unwrap()
}
/// Make a point even by adding the generator until it is even. Returns the even point and the
/// amount of additions required.
pub fn make_even(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
let mut c = 0;
while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
key += ProjectivePoint::GENERATOR;
c += 1;
}
(key, c)
}
/// A BIP-340 compatible HRAm for use with the modular-frost Schnorr Algorithm.
///
/// If passed an odd nonce, it will have the generator added until it is even.
#[derive(Clone, Copy, Debug)]
pub struct Hram {}
lazy_static! {
static ref TAG_HASH: [u8; 32] = Sha256::digest(b"BIP0340/challenge").into();
}
#[allow(non_snake_case)]
impl HramTrait<Secp256k1> for Hram {
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
// Convert the nonce to be even
let (R, _) = make_even(*R);
let mut data = Sha256::new();
data.update(*TAG_HASH);
data.update(*TAG_HASH);
data.update(x(&R));
data.update(x(A));
data.update(m);
Scalar::reduce(U256::from_be_slice(&data.finalize()))
}
}
/// BIP-340 Schnorr signature algorithm.
///
/// This must be used with a ThresholdKeys whose group key is even. If it is odd, this will panic.
#[derive(Clone)]
pub struct Schnorr<T: Sync + Clone + Debug + Transcript>(FrostSchnorr<Secp256k1, T, Hram>);
impl<T: Sync + Clone + Debug + Transcript> Schnorr<T> {
/// Construct a Schnorr algorithm continuing the specified transcript.
pub fn new(transcript: T) -> Schnorr<T> {
Schnorr(FrostSchnorr::new(transcript))
}
}
impl<T: Sync + Clone + Debug + Transcript> Algorithm<Secp256k1> for Schnorr<T> {
type Transcript = T;
type Addendum = ();
type Signature = Signature;
fn transcript(&mut self) -> &mut Self::Transcript {
self.0.transcript()
}
fn nonces(&self) -> Vec<Vec<ProjectivePoint>> {
self.0.nonces()
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Secp256k1>,
) {
self.0.preprocess_addendum(rng, keys)
}
fn read_addendum<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Addendum> {
self.0.read_addendum(reader)
}
fn process_addendum(
&mut self,
view: &ThresholdView<Secp256k1>,
i: Participant,
addendum: (),
) -> Result<(), FrostError> {
self.0.process_addendum(view, i, addendum)
}
fn sign_share(
&mut self,
params: &ThresholdView<Secp256k1>,
nonce_sums: &[Vec<<Secp256k1 as Ciphersuite>::G>],
nonces: Vec<Zeroizing<<Secp256k1 as Ciphersuite>::F>>,
msg: &[u8],
) -> <Secp256k1 as Ciphersuite>::F {
self.0.sign_share(params, nonce_sums, nonces, msg)
}
#[must_use]
fn verify(
&self,
group_key: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
sum: Scalar,
) -> Option<Self::Signature> {
self.0.verify(group_key, nonces, sum).map(|mut sig| {
// Make the R of the final signature even
let offset;
(sig.R, offset) = make_even(sig.R);
// s = r + cx. Since we added to the r, add to s
sig.s += Scalar::from(offset);
// Convert to a secp256k1 signature
Signature::from_slice(&sig.serialize()[1 ..]).unwrap()
})
}
fn verify_share(
&self,
verification_share: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
share: Scalar,
) -> Result<Vec<(Scalar, ProjectivePoint)>, ()> {
self.0.verify_share(verification_share, nonces, share)
}
}
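
A quick way to sanity-check the evenness handling above (a sketch, not part of the crate; it assumes only `make_even` from this file and the `k256` API):

```rust
use k256::{
  elliptic_curve::sec1::{Tag, ToEncodedPoint},
  ProjectivePoint, Scalar,
};

fn main() {
  // Any point works; whether or not 7G needs correction, the invariant holds
  let key = ProjectivePoint::GENERATOR * Scalar::from(7u64);
  let (even, offset) = make_even(key);
  // The result is always an even point...
  assert_eq!(even.to_encoded_point(true).tag(), Tag::CompressedEvenY);
  // ... equal to the input plus `offset` generators, which is why
  // `Schnorr::verify` adds `Scalar::from(offset)` to `s` after making `R` even
  assert_eq!(even, key + (ProjectivePoint::GENERATOR * Scalar::from(offset)));
}
```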


@@ -1,145 +0,0 @@
use core::fmt::Debug;
use thiserror::Error;
use serde::{Deserialize, de::DeserializeOwned};
use serde_json::json;
use bitcoin::{
hashes::{Hash, hex::FromHex},
consensus::encode,
Txid, Transaction, BlockHash, Block,
};
#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
pub struct Error {
code: isize,
message: String,
}
#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
enum RpcResponse<T> {
Ok { result: T },
Err { error: Error },
}
/// A minimal asynchronous Bitcoin RPC client.
#[derive(Clone, Debug)]
pub struct Rpc(String);
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("couldn't connect to node")]
ConnectionError,
#[error("request had an error: {0:?}")]
RequestError(Error),
#[error("node sent an invalid response")]
InvalidResponse,
}
impl Rpc {
pub async fn new(url: String) -> Result<Rpc, RpcError> {
let rpc = Rpc(url);
// Make an RPC request to verify the node is reachable and sane
rpc.get_latest_block_number().await?;
Ok(rpc)
}
/// Perform an arbitrary RPC call.
pub async fn rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: serde_json::Value,
) -> Result<Response, RpcError> {
let client = reqwest::Client::new();
let res = client
.post(&self.0)
.json(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
.send()
.await
.map_err(|_| RpcError::ConnectionError)?
.text()
.await
.map_err(|_| RpcError::ConnectionError)?;
let res: RpcResponse<Response> =
serde_json::from_str(&res).map_err(|_| RpcError::InvalidResponse)?;
match res {
RpcResponse::Ok { result } => Ok(result),
RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
}
}
/// Get the latest block's number.
///
/// The genesis block's 'number' is zero. They increment from there.
pub async fn get_latest_block_number(&self) -> Result<usize, RpcError> {
// getblockcount doesn't return the amount of blocks on the current chain, yet the "height"
// of the current chain. The "height" of the current chain is defined as the "height" of the
// tip block of the current chain. The "height" of a block is defined as the amount of blocks
// present when the block was created. Accordingly, the genesis block has height 0, and
// getblockcount will return 0 when it's the only block, despite there being one block.
self.rpc_call("getblockcount", json!([])).await
}
/// Get the hash of a block by the block's number.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
let mut hash = *self
.rpc_call::<BlockHash>("getblockhash", json!([number]))
.await?
.as_raw_hash()
.as_byte_array();
// bitcoin stores the inner bytes in reverse order.
hash.reverse();
Ok(hash)
}
/// Get a block's number by its hash.
pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct Number {
height: usize,
}
Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
}
/// Get a block by its hash.
pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex).map_err(|_| RpcError::InvalidResponse)?;
let block: Block = encode::deserialize(&bytes).map_err(|_| RpcError::InvalidResponse)?;
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
if hash != &block_hash {
Err(RpcError::InvalidResponse)?;
}
Ok(block)
}
/// Publish a transaction.
pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
let txid = self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await?;
if txid != tx.txid() {
Err(RpcError::InvalidResponse)?;
}
Ok(txid)
}
/// Get a transaction by its hash.
pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex).map_err(|_| RpcError::InvalidResponse)?;
let tx: Transaction = encode::deserialize(&bytes).map_err(|_| RpcError::InvalidResponse)?;
let mut tx_hash = *tx.txid().as_raw_hash().as_byte_array();
tx_hash.reverse();
if hash != &tx_hash {
Err(RpcError::InvalidResponse)?;
}
Ok(tx)
}
}
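
A minimal usage sketch for this client (the URL is a placeholder assumption; any reachable Bitcoin node's RPC endpoint works):

```rust
#[tokio::main]
async fn main() -> Result<(), RpcError> {
  // new() already performs a sanity-check RPC call
  let rpc = Rpc::new("http://127.0.0.1:8332".to_string()).await?;
  let number = rpc.get_latest_block_number().await?;
  let hash = rpc.get_block_hash(number).await?;
  let block = rpc.get_block(&hash).await?;
  println!("block {number} has {} transactions", block.txdata.len());
  Ok(())
}
```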


@@ -1,160 +0,0 @@
use std::{
io::{self, Read, Write},
collections::HashMap,
};
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
Scalar, ProjectivePoint,
};
use frost::{
curve::{Ciphersuite, Secp256k1},
ThresholdKeys,
};
use bitcoin::{
consensus::encode::{Decodable, serialize},
key::TweakedPublicKey,
OutPoint, ScriptBuf, TxOut, Transaction, Block, Network, Address,
};
use crate::crypto::{x_only, make_even};
mod send;
pub use send::*;
/// Tweak keys to ensure they're usable with Bitcoin.
pub fn tweak_keys(keys: &ThresholdKeys<Secp256k1>) -> ThresholdKeys<Secp256k1> {
let (_, offset) = make_even(keys.group_key());
keys.offset(Scalar::from(offset))
}
/// Return the Taproot address for a public key.
pub fn address(network: Network, key: ProjectivePoint) -> Option<Address> {
if key.to_encoded_point(true).tag() != Tag::CompressedEvenY {
return None;
}
Some(Address::p2tr_tweaked(TweakedPublicKey::dangerous_assume_tweaked(x_only(&key)), network))
}
/// A spendable output.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct ReceivedOutput {
// The scalar offset to obtain the key usable to spend this output.
offset: Scalar,
// The output to spend.
output: TxOut,
// The TX ID and vout of the output to spend.
outpoint: OutPoint,
}
impl ReceivedOutput {
/// The offset for this output.
pub fn offset(&self) -> Scalar {
self.offset
}
/// The outpoint for this output.
pub fn outpoint(&self) -> &OutPoint {
&self.outpoint
}
/// The value of this output.
pub fn value(&self) -> u64 {
self.output.value
}
/// Read a ReceivedOutput from a generic satisfying Read.
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
offset: Secp256k1::read_F(r)?,
output: TxOut::consensus_decode(r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid TxOut"))?,
outpoint: OutPoint::consensus_decode(r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid OutPoint"))?,
})
}
/// Write a ReceivedOutput to a generic satisfying Write.
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.offset.to_bytes())?;
w.write_all(&serialize(&self.output))?;
w.write_all(&serialize(&self.outpoint))
}
/// Serialize a ReceivedOutput to a Vec<u8>.
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
}
/// A transaction scanner capable of being used with HDKD schemes.
#[derive(Clone, Debug)]
pub struct Scanner {
key: ProjectivePoint,
scripts: HashMap<ScriptBuf, Scalar>,
}
impl Scanner {
/// Construct a Scanner for a key.
///
/// Returns None if this key can't be scanned for.
pub fn new(key: ProjectivePoint) -> Option<Scanner> {
let mut scripts = HashMap::new();
// Uses Network::Bitcoin since network is irrelevant here
scripts.insert(address(Network::Bitcoin, key)?.script_pubkey(), Scalar::ZERO);
Some(Scanner { key, scripts })
}
/// Register an offset to scan for.
///
/// Due to Bitcoin's requirement that points are even, not every offset may be used.
/// If an offset isn't usable, it will be incremented until it is. If this offset is already
/// present, None is returned. Else, Some(offset) will be, with the used offset.
pub fn register_offset(&mut self, mut offset: Scalar) -> Option<Scalar> {
loop {
match address(Network::Bitcoin, self.key + (ProjectivePoint::GENERATOR * offset)) {
Some(address) => {
let script = address.script_pubkey();
if self.scripts.contains_key(&script) {
None?;
}
self.scripts.insert(script, offset);
return Some(offset);
}
None => offset += Scalar::ONE,
}
}
}
/// Scan a transaction.
pub fn scan_transaction(&self, tx: &Transaction) -> Vec<ReceivedOutput> {
let mut res = vec![];
for (vout, output) in tx.output.iter().enumerate() {
if let Some(offset) = self.scripts.get(&output.script_pubkey) {
res.push(ReceivedOutput {
offset: *offset,
output: output.clone(),
outpoint: OutPoint::new(tx.txid(), u32::try_from(vout).unwrap()),
});
}
}
res
}
/// Scan a block.
///
/// This will also scan the coinbase transaction which is bound by maturity. If received outputs
/// must be immediately spendable, a post-processing pass is needed to remove those outputs.
/// Alternatively, scan_transaction can be called on `block.txdata[1 ..]`.
pub fn scan_block(&self, block: &Block) -> Vec<ReceivedOutput> {
let mut res = vec![];
for tx in &block.txdata {
res.extend(self.scan_transaction(tx));
}
res
}
}
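
Tying the above together, a hedged sketch of a scan flow (it assumes `keys: ThresholdKeys<Secp256k1>` from a completed key generation and a `block` fetched via the RPC client):

```rust
// Tweak so the group key is even, as Taproot requires
let keys = tweak_keys(&keys);
let mut scanner = Scanner::new(keys.group_key()).expect("tweaked key wasn't scannable");
// Register an additional offset to scan for; the returned offset is the one actually used
let _registered = scanner.register_offset(Scalar::ONE).expect("offset already registered");
for output in scanner.scan_block(&block) {
  // offset() is ZERO for the base key, or the (possibly incremented) registered offset
  println!("received {} sats at {}", output.value(), output.outpoint());
}
```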


@@ -1,3 +0,0 @@
# solidity build outputs
cache
artifacts


@@ -1,37 +0,0 @@
[package]
name = "ethereum-serai"
version = "0.1.0"
description = "An Ethereum library supporting Schnorr signing and on-chain verification"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
edition = "2021"
publish = false
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
thiserror = "1"
rand_core = "0.6"
serde_json = "1"
serde = "1"
sha2 = "0.10"
sha3 = "0.10"
group = "0.13"
k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic", "bits", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
eyre = "0.6"
ethers = { version = "2", default-features = false, features = ["abigen", "ethers-solc"] }
[build-dependencies]
ethers-solc = "2"
[dev-dependencies]
tokio = { version = "1", features = ["macros"] }


@@ -1,9 +0,0 @@
# Ethereum
This package contains Ethereum-related functionality, specifically deploying and
interacting with Serai contracts.
### Dependencies
- solc
- [Foundry](https://github.com/foundry-rs/foundry)


@@ -1,16 +0,0 @@
use ethers_solc::{Project, ProjectPathsConfig};
fn main() {
println!("cargo:rerun-if-changed=contracts");
println!("cargo:rerun-if-changed=artifacts");
// configure the project with all its paths, solc, cache etc.
let project = Project::builder()
.paths(ProjectPathsConfig::hardhat(env!("CARGO_MANIFEST_DIR")).unwrap())
.build()
.unwrap();
project.compile().unwrap();
// Tell Cargo that if a source file changes, to rerun this build script.
project.rerun_if_sources_changed();
}


@@ -1,36 +0,0 @@
//SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
// see https://github.com/noot/schnorr-verify for implementation details
contract Schnorr {
// secp256k1 group order
uint256 constant public Q =
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
// parity := public key y-coord parity (27 or 28)
// px := public key x-coord
// message := 32-byte message
// s := schnorr signature
// e := schnorr signature challenge
function verify(
uint8 parity,
bytes32 px,
bytes32 message,
bytes32 s,
bytes32 e
) public view returns (bool) {
// ecrecover = (m, v, r, s);
bytes32 sp = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
bytes32 ep = bytes32(Q - mulmod(uint256(e), uint256(px), Q));
require(sp != 0);
// the ecrecover precompile implementation checks that the `r` and `s`
// inputs are non-zero (in this case, `px` and `ep`), thus we don't need to
// check if they're zero.
address R = ecrecover(sp, parity, px, ep);
require(R != address(0), "ecrecover failed");
return e == keccak256(
abi.encodePacked(R, uint8(parity), px, block.chainid, message)
);
}
}
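
For context on why this verifies a Schnorr signature (our reading of the noot/schnorr-verify trick cited above, not text from the contract): `ecrecover(m, v, r, s)` returns the Ethereum address of $r^{-1}(sR' - mG)$, where $R'$ is the point with x-coordinate $r$ and parity $v$. Substituting $m = -s \cdot px$, $r = px$, $s = -e \cdot px$, and $R' = A$ (the public key) yields

$$\mathrm{addr}\!\left(px^{-1}\,((-e \cdot px)A - (-s \cdot px)G)\right) = \mathrm{addr}(sG - eA)$$

which, for a valid signature satisfying $sG = R + eA$, is the address of the nonce $R$. The final `keccak256` comparison then re-derives the challenge from that address.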


@@ -1,52 +0,0 @@
use crate::crypto::ProcessedSignature;
use ethers::{contract::ContractFactory, prelude::*, solc::artifacts::contract::ContractBytecode};
use eyre::{eyre, Result};
use std::fs::File;
use std::sync::Arc;
use thiserror::Error;
#[derive(Error, Debug)]
pub enum EthereumError {
#[error("failed to verify Schnorr signature")]
VerificationError,
}
abigen!(
Schnorr,
"./artifacts/Schnorr.sol/Schnorr.json",
event_derives(serde::Deserialize, serde::Serialize),
);
pub async fn deploy_schnorr_verifier_contract(
client: Arc<SignerMiddleware<Provider<Http>, LocalWallet>>,
) -> Result<Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>> {
let path = "./artifacts/Schnorr.sol/Schnorr.json";
let artifact: ContractBytecode = serde_json::from_reader(File::open(path).unwrap()).unwrap();
let abi = artifact.abi.unwrap();
let bin = artifact.bytecode.unwrap().object;
let factory = ContractFactory::new(abi, bin.into_bytes().unwrap(), client.clone());
let contract = factory.deploy(())?.send().await?;
let contract = Schnorr::new(contract.address(), client);
Ok(contract)
}
pub async fn call_verify(
contract: &Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>,
params: &ProcessedSignature,
) -> Result<()> {
if contract
.verify(
params.parity + 27,
params.px.to_bytes().into(),
params.message,
params.s.to_bytes().into(),
params.e.to_bytes().into(),
)
.call()
.await?
{
Ok(())
} else {
Err(eyre!(EthereumError::VerificationError))
}
}


@@ -1,107 +0,0 @@
use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint, sec1::ToEncodedPoint,
},
AffinePoint, ProjectivePoint, Scalar, U256,
};
use frost::{algorithm::Hram, curve::Secp256k1};
pub fn keccak256(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).try_into().unwrap()
}
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub fn address(point: &ProjectivePoint) -> [u8; 20] {
let encoded_point = point.to_encoded_point(false);
keccak256(&encoded_point.as_ref()[1 .. 65])[12 .. 32].try_into().unwrap()
}
pub fn ecrecover(message: Scalar, v: u8, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
if r.is_zero().into() || s.is_zero().into() {
return None;
}
#[allow(non_snake_case)]
let R = AffinePoint::decompress(&r.to_bytes(), v.into());
#[allow(non_snake_case)]
if let Some(R) = Option::<AffinePoint>::from(R) {
#[allow(non_snake_case)]
let R = ProjectivePoint::from(R);
let r = r.invert().unwrap();
let u1 = ProjectivePoint::GENERATOR * (-message * r);
let u2 = R * (s * r);
let key: ProjectivePoint = u1 + u2;
if !bool::from(key.is_identity()) {
return Some(address(&key));
}
}
None
}
#[derive(Clone, Default)]
pub struct EthereumHram {}
impl Hram<Secp256k1> for EthereumHram {
#[allow(non_snake_case)]
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
let a_encoded_point = A.to_encoded_point(true);
let mut a_encoded = a_encoded_point.as_ref().to_owned();
a_encoded[0] += 25; // Ethereum uses 27/28 for point parity
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.append(&mut m.to_vec());
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
pub struct ProcessedSignature {
pub s: Scalar,
pub px: Scalar,
pub parity: u8,
pub message: [u8; 32],
pub e: Scalar,
}
#[allow(non_snake_case)]
pub fn preprocess_signature_for_ecrecover(
m: [u8; 32],
R: &ProjectivePoint,
s: Scalar,
A: &ProjectivePoint,
chain_id: U256,
) -> (Scalar, Scalar) {
let processed_sig = process_signature_for_contract(m, R, s, A, chain_id);
let sr = processed_sig.s.mul(&processed_sig.px).negate();
let er = processed_sig.e.mul(&processed_sig.px).negate();
(sr, er)
}
#[allow(non_snake_case)]
pub fn process_signature_for_contract(
m: [u8; 32],
R: &ProjectivePoint,
s: Scalar,
A: &ProjectivePoint,
chain_id: U256,
) -> ProcessedSignature {
let encoded_pk = A.to_encoded_point(true);
let px = &encoded_pk.as_ref()[1 .. 33];
let px_scalar = Scalar::reduce(U256::from_be_slice(px));
let e = EthereumHram::hram(R, A, &[chain_id.to_be_byte_array().as_slice(), &m].concat());
ProcessedSignature {
s,
px: px_scalar,
parity: &encoded_pk.as_ref()[0] - 2,
#[allow(non_snake_case)]
message: m,
e,
}
}


@@ -1,2 +0,0 @@
pub mod contract;
pub mod crypto;


@@ -1,71 +0,0 @@
use std::{convert::TryFrom, sync::Arc, time::Duration};
use rand_core::OsRng;
use ::k256::{elliptic_curve::bigint::ArrayEncoding, U256};
use ethers::{
prelude::*,
utils::{keccak256, Anvil, AnvilInstance},
};
use frost::{
curve::Secp256k1,
Participant,
algorithm::IetfSchnorr,
tests::{key_gen, algorithm_machines, sign},
};
use ethereum_serai::{
crypto,
contract::{Schnorr, call_verify, deploy_schnorr_verifier_contract},
};
async fn deploy_test_contract(
) -> (u32, AnvilInstance, Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>) {
let anvil = Anvil::new().spawn();
let wallet: LocalWallet = anvil.keys()[0].clone().into();
let provider =
Provider::<Http>::try_from(anvil.endpoint()).unwrap().interval(Duration::from_millis(10u64));
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let client = Arc::new(SignerMiddleware::new_with_provider_chain(provider, wallet).await.unwrap());
(chain_id, anvil, deploy_schnorr_verifier_contract(client).await.unwrap())
}
#[tokio::test]
async fn test_deploy_contract() {
deploy_test_contract().await;
}
#[tokio::test]
async fn test_ecrecover_hack() {
let (chain_id, _anvil, contract) = deploy_test_contract().await;
let chain_id = U256::from(chain_id);
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, crypto::EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),
keys.clone(),
algorithm_machines(&mut OsRng, algo, &keys),
full_message,
);
let mut processed_sig =
crypto::process_signature_for_contract(hashed_message, &sig.R, sig.s, &group_key, chain_id);
call_verify(&contract, &processed_sig).await.unwrap();
// test invalid signature fails
processed_sig.message[0] = 0;
assert!(call_verify(&contract, &processed_sig).await.is_err());
}


@@ -1,92 +0,0 @@
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint},
ProjectivePoint, Scalar, U256,
};
use frost::{curve::Secp256k1, Participant};
use ethereum_serai::crypto::*;
#[test]
fn test_ecrecover() {
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use k256::ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey};
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
const MESSAGE: &[u8] = b"Hello, World!";
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
assert_eq!(
ecrecover(hash_to_scalar(MESSAGE), recovery_id.unwrap().is_y_odd().into(), *sig.r(), *sig.s())
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
#[test]
fn test_signing() {
use frost::{
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let _group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig = sign(
&mut OsRng,
algo,
keys.clone(),
algorithm_machines(&mut OsRng, IetfSchnorr::<Secp256k1, EthereumHram>::ietf(), &keys),
MESSAGE,
);
}
#[test]
fn test_ecrecover_hack() {
use frost::{
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&Participant::new(1).unwrap()].group_key();
let group_key_encoded = group_key.to_encoded_point(true);
let group_key_compressed = group_key_encoded.as_ref();
let group_key_x = Scalar::reduce(U256::from_be_slice(&group_key_compressed[1 .. 33]));
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let chain_id = U256::ONE;
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),
keys.clone(),
algorithm_machines(&mut OsRng, algo, &keys),
full_message,
);
let (sr, er) =
preprocess_signature_for_ecrecover(hashed_message, &sig.R, sig.s, &group_key, chain_id);
let q = ecrecover(sr, group_key_compressed[0] - 2, group_key_x, er).unwrap();
assert_eq!(q, address(&sig.R));
}


@@ -1,2 +0,0 @@
mod contract;
mod crypto;


@@ -1,107 +0,0 @@
[package]
name = "monero-serai"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
std-shims = { path = "../../common/std-shims", version = "0.1", default-features = false }
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
crc = { version = "3", default-features = false }
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "^3.2", default-features = false }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.3", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.3", default-features = false, features = ["batch"] }
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.3", features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["ed25519"], optional = true }
monero-generators = { path = "generators", version = "0.3", default-features = false }
futures = { version = "0.3", default-features = false, features = ["alloc"], optional = true }
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "1", git = "https://github.com/monero-rs/base58-monero", rev = "5045e8d2b817b3b6c1190661f504e879bc769c29", default-features = false, features = ["check"] }
# Used for the provided RPC
digest_auth = { version = "0.3", optional = true }
reqwest = { version = "0.11", features = ["json"], optional = true }
# Used for the binaries
tokio = { version = "1", features = ["full"], optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.3", default-features = false }
monero-generators = { path = "generators", version = "0.3", default-features = false }
[dev-dependencies]
tokio = { version = "1", features = ["full"] }
monero-rpc = "0.3"
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["tests"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand_chacha/std",
"rand/std",
"rand_distr/std",
"sha3/std",
"curve25519-dalek/std",
"multiexp/std",
"monero-generators/std",
"futures/std",
"hex/std",
"serde/std",
"serde_json/std",
]
http_rpc = ["digest_auth", "reqwest"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio"]
experimental = []
default = ["std", "http_rpc"]


@@ -1,49 +0,0 @@
# monero-serai
A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet will not deprive functionality from other users.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There aren't active development efforts around
them, however.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable if a targeted attack occurs to detect if the
receiving wallet is monero-serai or wallet2. It also should be generally safe
for usage with remote nodes.
It won't hide from remote nodes that it's monero-serai, however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope to
monero-serai.


@@ -1,67 +0,0 @@
use std::{
io::Write,
env,
path::Path,
fs::{File, remove_file},
};
use dalek_ff_group::EdwardsPoint;
use monero_generators::bulletproofs_generators;
fn serialize(generators_string: &mut String, points: &[EdwardsPoint]) {
for generator in points {
generators_string.extend(
format!(
"
dalek_ff_group::EdwardsPoint(
curve25519_dalek::edwards::CompressedEdwardsY({:?}).decompress().unwrap()
),
",
generator.compress().to_bytes()
)
.chars(),
);
}
}
fn generators(prefix: &'static str, path: &str) {
let generators = bulletproofs_generators(prefix.as_bytes());
#[allow(non_snake_case)]
let mut G_str = String::new();
serialize(&mut G_str, &generators.G);
#[allow(non_snake_case)]
let mut H_str = String::new();
serialize(&mut H_str, &generators.H);
let path = Path::new(&env::var("OUT_DIR").unwrap()).join(path);
let _ = remove_file(&path);
File::create(&path)
.unwrap()
.write_all(
format!(
"
pub static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: [
{G_str}
],
H: [
{H_str}
],
}})
}}
",
)
.as_bytes(),
)
.unwrap();
}
fn main() {
println!("cargo:rerun-if-changed=build.rs");
generators("bulletproof", "generators.rs");
generators("bulletproof_plus", "generators_plus.rs");
}


@@ -1,28 +0,0 @@
[package]
name = "monero-generators"
version = "0.3.0"
description = "Monero's hash_to_point and generators"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero/generators"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
std-shims = { path = "../../../common/std-shims", version = "0.1", default-features = false }
subtle = { version = "^2.4", default-features = false }
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "3", default-features = false }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.3" }
[features]
std = ["std-shims/std"]
default = ["std"]


@@ -1,7 +0,0 @@
# Monero Generators
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no_std when the `alloc` feature is enabled.


@@ -1,80 +0,0 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::sync::OnceLock;
use sha3::{Digest, Keccak256};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint, CompressedEdwardsY};
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
use varint::write_varint;
mod hash_to_point;
pub use hash_to_point::hash_to_point;
fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
CompressedEdwardsY(hash(&EdwardsPoint::generator().to_bytes()))
.decompress()
.unwrap()
.mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 1 ..= 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
const MAX_M: usize = 16;
const N: usize = 64;
const MAX_MN: usize = MAX_M * N;
/// Container struct for Bulletproofs(+) generators.
#[allow(non_snake_case)]
pub struct Generators {
pub G: [EdwardsPoint; MAX_MN],
pub H: [EdwardsPoint; MAX_MN],
}
/// Generate generators as needed for Bulletproofs(+), as Monero does.
pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
let mut res =
Generators { G: [EdwardsPoint::identity(); MAX_MN], H: [EdwardsPoint::identity(); MAX_MN] };
for i in 0 .. MAX_MN {
let i = 2 * i;
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();
write_varint(&i.try_into().unwrap(), &mut even).unwrap();
write_varint(&(i + 1).try_into().unwrap(), &mut odd).unwrap();
res.H[i / 2] = EdwardsPoint(hash_to_point(hash(&even)));
res.G[i / 2] = EdwardsPoint(hash_to_point(hash(&odd)));
}
res
}


@@ -1,18 +0,0 @@
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
#[allow(clippy::trivially_copy_pass_by_ref)] // &u64 is needed for API consistency
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
w.write_all(&[b])?;
varint != 0
} {}
Ok(())
}
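
For reference, a round-trip sketch (the decoder here is illustrative, not part of this crate): each byte carries seven bits, least significant group first, with the high bit flagging continuation.

```rust
fn read_varint_sketch<R: std::io::Read>(r: &mut R) -> std::io::Result<u64> {
  let (mut res, mut shift) = (0u64, 0);
  loop {
    let mut b = [0; 1];
    r.read_exact(&mut b)?;
    res |= u64::from(b[0] & !VARINT_CONTINUATION_MASK) << shift;
    if (b[0] & VARINT_CONTINUATION_MASK) == 0 {
      break;
    }
    shift += 7;
  }
  Ok(res)
}

#[test]
fn varint_round_trip() {
  // 300 = 0b1_0010_1100 encodes as [0xAC, 0x02]: the low seven bits (0x2C) with
  // the continuation bit set, then the remaining bits (0x02)
  let mut buf = vec![];
  write_varint(&300, &mut buf).unwrap();
  assert_eq!(buf, [0xAC, 0x02]);
  assert_eq!(read_varint_sketch(&mut buf.as_slice()).unwrap(), 300);
}
```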


@@ -1,155 +0,0 @@
use std::sync::Arc;
use serde::Deserialize;
use serde_json::json;
use monero_serai::{
transaction::Transaction,
block::Block,
rpc::{Rpc, HttpRpc},
};
use tokio::task::JoinHandle;
async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = rpc.get_block_hash(block_i).await.unwrap_or_else(|e| panic!("couldn't get block {block_i}'s hash: {e:?}"));
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = rpc
.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) })))
.await
.unwrap_or_else(|e| panic!("couldn't get block {block_i} via its hash: {e:?}"));
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice()).unwrap_or_else(|e| panic!("couldn't deserialize block {block_i}: {e:?}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
.expect("couldn't call get_transactions");
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs.into_iter()) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
}
}
println!("Deserialized, hashed, and reserialized {block_i} with {} TXs", txs_len);
}
#[tokio::main]
async fn main() {
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| {
HttpRpc::new(url.clone())
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone());
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone())));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}


@@ -1,116 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
pub major_version: u64,
pub minor_version: u64,
pub timestamp: u64,
pub previous: [u8; 32],
pub nonce: u32,
}
impl BlockHeader {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.major_version, w)?;
write_varint(&self.minor_version, w)?;
write_varint(&self.timestamp, w)?;
w.write_all(&self.previous)?;
w.write_all(&self.nonce.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
previous: read_bytes(r)?,
nonce: read_bytes(r).map(u32::from_le_bytes)?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Block {
pub header: BlockHeader,
pub miner_tx: Transaction,
pub txs: Vec<[u8; 32]>,
}
impl Block {
pub fn number(&self) -> usize {
match self.miner_tx.prefix.inputs.get(0) {
Some(Input::Gen(number)) => (*number).try_into().unwrap(),
_ => panic!("invalid block, miner TX didn't have a Input::Gen"),
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
write_varint(&self.txs.len().try_into().unwrap(), w)?;
for tx in &self.txs {
w.write_all(tx)?;
}
Ok(())
}
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
let mut out = Vec::with_capacity(8 + blob.len());
write_varint(&u64::try_from(blob.len()).unwrap(), &mut out).unwrap();
out.append(&mut blob);
out
}
pub fn hash(&self) -> [u8; 32] {
let hash = hash(&self.serialize_hashable());
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
header: BlockHeader::read(r)?,
miner_tx: Transaction::read(r)?,
txs: (0 .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
})
}
}


@@ -1,178 +0,0 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
use std_shims::{sync::OnceLock, io};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub use monero_generators::H;
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
/// Block structs.
pub mod block;
/// Monero daemon RPC interface.
pub mod rpc;
/// Wallet functionality, enabling scanning and sending transactions.
pub mod wallet;
#[cfg(test)]
mod tests;
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
/// Monero protocol version.
///
/// v15 is omitted as v15 was simply v14 and v16 being active at the same time, with regards to the
/// transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
v14,
v16,
Custom { ring_len: usize, bp_plus: bool, optimal_rct_type: RctType },
}
impl Protocol {
/// Amount of ring members under this protocol version.
pub const fn ring_len(&self) -> usize {
match self {
Self::v14 => 11,
Self::v16 => 16,
Self::Custom { ring_len, .. } => *ring_len,
}
}
/// Whether or not the specified version uses Bulletproofs or Bulletproofs+.
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub const fn bp_plus(&self) -> bool {
match self {
Self::v14 => false,
Self::v16 => true,
Self::Custom { bp_plus, .. } => *bp_plus,
}
}
// TODO: Make this an Option when we support pre-RCT protocols
pub const fn optimal_rct_type(&self) -> RctType {
match self {
Self::v14 => RctType::Clsag,
Self::v16 => RctType::BulletproofsPlus,
Self::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::v14 => w.write_all(&[0, 14]),
Self::v16 => w.write_all(&[0, 16]),
Self::Custom { ring_len, bp_plus, optimal_rct_type } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Self::v14,
16 => Self::v16,
_ => Err(io::Error::new(io::ErrorKind::Other, "unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Self::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid RctType serialization"))?,
},
_ => {
Err(io::Error::new(io::ErrorKind::Other, "unrecognized custom protocol serialization"))?
}
},
_ => Err(io::Error::new(io::ErrorKind::Other, "unrecognized protocol serialization"))?,
})
}
}
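
As a concrete picture of the wire format implemented above, a round-trip sketch (usable from within the crate, as `write`/`read` are `pub(crate)`):

```rust
#[test]
fn protocol_round_trip() {
  // A Custom protocol serializes as:
  // [1, 0] (custom, version 0) || ring_len as u16 LE || bp_plus as a byte || RctType byte
  let protocol =
    Protocol::Custom { ring_len: 11, bp_plus: false, optimal_rct_type: RctType::Clsag };
  let mut buf = vec![];
  protocol.write(&mut buf).unwrap();
  assert_eq!(buf, [1, 0, 11, 0, 0, RctType::Clsag.to_byte()]);
  assert_eq!(Protocol::read(&mut buf.as_slice()).unwrap(), protocol);
}
```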
/// Transparent structure representing a Pedersen commitment's contents.
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Commitment {
pub mask: Scalar,
pub amount: u64,
}
impl Commitment {
/// A commitment to zero, defined with a mask of 1 (as to not be the identity).
pub fn zero() -> Self {
Self { mask: Scalar::one(), amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Self {
Self { mask, amount }
}
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * &ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
/// Support generating a random scalar using a modern rand, as dalek's is notoriously dated.
pub fn random_scalar<R: RngCore + CryptoRng>(rng: &mut R) -> Scalar {
let mut r = [0; 64];
rng.fill_bytes(&mut r);
Scalar::from_bytes_mod_order_wide(&r)
}
pub(crate) fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
/// Hash the provided data to a scalar via keccak256(data) % l.
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
let scalar = Scalar::from_bytes_mod_order(hash(data));
// Monero will explicitly error in this case
// This library acknowledges the practical impossibility of it occurring, and doesn't bother to
// code in logic to handle it. That said, if it ever occurs, something must happen in order to
// not generate/verify a proof we believe to be valid when it isn't
assert!(scalar != Scalar::zero(), "ZERO HASH: {data:?}");
scalar
}
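
Since `Commitment::calculate` is $mask \cdot G + amount \cdot H$, commitments add homomorphically, which is what RingCT balance checks rely on. A small sketch:

```rust
#[test]
fn commitments_are_homomorphic() {
  use rand_core::OsRng;
  let (m1, m2) = (random_scalar(&mut OsRng), random_scalar(&mut OsRng));
  let (c1, c2) = (Commitment::new(m1, 1000), Commitment::new(m2, 234));
  // The sum of two commitments commits to the summed amount under the summed mask
  assert_eq!(c1.calculate() + c2.calculate(), Commitment::new(m1 + m2, 1234).calculate());
}
```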


@@ -1,102 +0,0 @@
use core::fmt::Debug;
use std_shims::io::{self, Read, Write};
use curve25519_dalek::edwards::EdwardsPoint;
#[cfg(feature = "experimental")]
use curve25519_dalek::{traits::Identity, scalar::Scalar};
#[cfg(feature = "experimental")]
use monero_generators::H_pow_2;
#[cfg(feature = "experimental")]
use crate::hash_to_scalar;
use crate::serialize::*;
/// 64 Borromean ring signatures.
///
/// This type keeps the data as raw bytes as Monero has some transactions with unreduced scalars in
/// this field. While we could use `from_bytes_mod_order`, we'd then not be able to encode this
/// back into its original form.
///
/// Those scalars also have a custom reduction algorithm...
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanSignatures {
pub s0: [[u8; 32]; 64],
pub s1: [[u8; 32]; 64],
pub ee: [u8; 32],
}
impl BorromeanSignatures {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { s0: read_array(read_bytes, r)?, s1: read_array(read_bytes, r)?, ee: read_bytes(r)? })
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for s0 in &self.s0 {
w.write_all(s0)?;
}
for s1 in &self.s1 {
w.write_all(s1)?;
}
w.write_all(&self.ee)
}
#[cfg(feature = "experimental")]
fn verify(&self, keys_a: &[EdwardsPoint], keys_b: &[EdwardsPoint]) -> bool {
let mut transcript = [0; 2048];
for i in 0 .. 64 {
// TODO: These aren't the correct reduction
// TODO: Can either of these be tightened?
#[allow(non_snake_case)]
let LL = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&Scalar::from_bytes_mod_order(self.ee),
&keys_a[i],
&Scalar::from_bytes_mod_order(self.s0[i]),
);
#[allow(non_snake_case)]
let LV = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&hash_to_scalar(LL.compress().as_bytes()),
&keys_b[i],
&Scalar::from_bytes_mod_order(self.s1[i]),
);
transcript[(i * 32) .. ((i + 1) * 32)].copy_from_slice(LV.compress().as_bytes());
}
// TODO: This isn't the correct reduction
// TODO: Can this be tightened to from_canonical_bytes?
hash_to_scalar(&transcript) == Scalar::from_bytes_mod_order(self.ee)
}
}
/// A range proof premised on Borromean ring signatures.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanRange {
pub sigs: BorromeanSignatures,
pub bit_commitments: [EdwardsPoint; 64],
}
impl BorromeanRange {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { sigs: BorromeanSignatures::read(r)?, bit_commitments: read_array(read_point, r)? })
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.sigs.write(w)?;
write_raw_vec(write_point, &self.bit_commitments, w)
}
#[cfg(feature = "experimental")]
#[must_use]
pub fn verify(&self, commitment: &EdwardsPoint) -> bool {
if &self.bit_commitments.iter().sum::<EdwardsPoint>() != commitment {
return false;
}
#[allow(non_snake_case)]
let H_pow_2 = H_pow_2();
let mut commitments_sub_one = [EdwardsPoint::identity(); 64];
for i in 0 .. 64 {
commitments_sub_one[i] = self.bit_commitments[i] - H_pow_2[i];
}
self.sigs.verify(&self.bit_commitments, &commitments_sub_one)
}
}
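
In other words, verification checks two things: that the bit commitments reassemble the amount commitment, $\sum_i C_i = C$, and, via the Borromean ring signatures over `bit_commitments` and `commitments_sub_one`, that each $C_i$ commits to either $0$ or $2^i$ (i.e. that either $C_i$ or $C_i - 2^i H$ is a commitment to zero).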


@@ -1,151 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
use curve25519_dalek::edwards::EdwardsPoint as DalekPoint;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
// Components common between variants
pub(crate) const MAX_M: usize = 16;
pub(crate) const LOG_N: usize = 6; // 1 << 6 == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
generators: &Generators,
a: &ScalarVector,
b: &ScalarVector,
) -> EdwardsPoint {
debug_assert_eq!(a.len(), b.len());
(a * &generators.G[.. a.len()]) + (b * &generators.H[.. b.len()])
}
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
}
pub(crate) fn MN(outputs: usize) -> (usize, usize, usize) {
let mut logM = 0;
let mut M;
while {
M = 1 << logM;
(M <= MAX_M) && (M < outputs)
} {
logM += 1;
}
(logM + LOG_N, M, M * N)
}
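// For example, 3 outputs pad to M = 4 (the next power of two), yielding MN = 256
// and a returned logMN = logM + LOG_N = 2 + 6 = 8.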
pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, ScalarVector) {
let (_, M, MN) = MN(commitments.len());
let sv = commitments.iter().map(|c| Scalar::from(c.amount)).collect::<Vec<_>>();
let mut aL = ScalarVector::new(MN);
let mut aR = ScalarVector::new(MN);
for j in 0 .. M {
for i in (0 .. N).rev() {
let bit =
if j < sv.len() { Choice::from((sv[j][i / 8] >> (i % 8)) & 1) } else { Choice::from(0) };
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
(aL, aR)
}
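// The vectors produced above satisfy the standard Bulletproofs bit relations for
// each amount v_j: <aL_j, 2^N> = v_j, aL ∘ aR = 0, and aL - aR = 1.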
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
rng: &mut R,
generators: &Generators,
aL: &ScalarVector,
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
a: &ScalarVector,
G_i: &[EdwardsPoint],
b: &ScalarVector,
H_i: &[EdwardsPoint],
cL: Scalar,
U: EdwardsPoint,
) -> Vec<(Scalar, EdwardsPoint)> {
let mut res = a
.0
.iter()
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * w[j];
products[slots - 1] = products[slots / 2] * winv[j];
slots = slots.saturating_sub(2);
}
}
// Sanity check as if the above failed to populate, it'd be critical
for w in &products {
debug_assert!(!bool::from(w.is_zero()));
}
products
}
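// For intuition, a hand-worked case (our annotation, not from the source): with
// two challenges, and w[0] governing the most significant bit of the index, the
// loop produces:
//   products[0b00] = winv[0] * winv[1]
//   products[0b01] = winv[0] * w[1]
//   products[0b10] = w[0] * winv[1]
//   products[0b11] = w[0] * w[1]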


@@ -1,179 +0,0 @@
#![allow(non_snake_case)]
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::edwards::EdwardsPoint;
use multiexp::BatchVerifier;
use crate::{Commitment, wallet::TransactionError, serialize::*};
pub(crate) mod scalar_vector;
pub(crate) mod core;
use self::core::LOG_N;
pub(crate) mod original;
pub use original::GENERATORS as BULLETPROOFS_GENERATORS;
pub(crate) mod plus;
pub use plus::GENERATORS as BULLETPROOFS_PLUS_GENERATORS;
pub(crate) use self::original::OriginalStruct;
pub(crate) use self::plus::PlusStruct;
pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
/// Bulletproofs enum, supporting the original and plus formulations.
#[allow(clippy::large_enum_variant)]
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Bulletproofs {
Original(OriginalStruct),
Plus(PlusStruct),
}
impl Bulletproofs {
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
let fields = if plus { 6 } else { 9 };
// TODO: Shouldn't this use u32/u64?
#[allow(non_snake_case)]
let mut LR_len = usize::try_from(usize::BITS - (outputs - 1).leading_zeros()).unwrap();
let padded_outputs = 1 << LR_len;
LR_len += LOG_N;
let len = (fields + (2 * LR_len)) * 32;
len +
if padded_outputs <= 2 {
0
} else {
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
((base * padded_outputs) - size) * 4 / 5
}
}
/// Prove the list of commitments are within [0 .. 2^64).
pub fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
outputs: &[Commitment],
plus: bool,
) -> Result<Self, TransactionError> {
if outputs.len() > MAX_OUTPUTS {
return Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Self::Original(OriginalStruct::prove(rng, outputs))
} else {
Self::Plus(PlusStruct::prove(rng, outputs))
})
}
/// Verify the given Bulletproofs.
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Self::Original(bp) => bp.verify(rng, commitments),
Self::Plus(bp) => bp.verify(rng, commitments),
}
}
/// Accumulate the verification for the given Bulletproofs into the specified BatchVerifier.
/// Returns false if the Bulletproofs aren't sane, without mutating the BatchVerifier.
/// Returns true if the Bulletproofs are sane, regardless of their validity.
#[must_use]
pub fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, dalek_ff_group::EdwardsPoint>,
id: ID,
commitments: &[EdwardsPoint],
) -> bool {
match self {
Self::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Self::Plus(bp) => bp.batch_verify(rng, verifier, id, commitments),
}
}
fn write_core<W: Write, F: Fn(&[EdwardsPoint], &mut W) -> io::Result<()>>(
&self,
w: &mut W,
specific_write_vec: F,
) -> io::Result<()> {
match self {
Self::Original(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.S, w)?;
write_point(&bp.T1, w)?;
write_point(&bp.T2, w)?;
write_scalar(&bp.taux, w)?;
write_scalar(&bp.mu, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)?;
write_scalar(&bp.a, w)?;
write_scalar(&bp.b, w)?;
write_scalar(&bp.t, w)
}
Self::Plus(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.A1, w)?;
write_point(&bp.B, w)?;
write_scalar(&bp.r1, w)?;
write_scalar(&bp.s1, w)?;
write_scalar(&bp.d1, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)
}
}
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_raw_vec(write_point, points, w))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_vec(write_point, points, w))
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
/// Read Bulletproofs.
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self::Original(OriginalStruct {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
T2: read_point(r)?,
taux: read_scalar(r)?,
mu: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
a: read_scalar(r)?,
b: read_scalar(r)?,
t: read_scalar(r)?,
}))
}
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self::Plus(PlusStruct {
A: read_point(r)?,
A1: read_point(r)?,
B: read_point(r)?,
r1: read_scalar(r)?,
s1: read_scalar(r)?,
d1: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
}))
}
}
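// Hedged usage sketch, not part of the original file: prove a single commitment is in range,
// then verify the proof. Assumes `rand_core::OsRng` is available (it requires the `getrandom`
// feature) and uses `crate::random_scalar` for the mask.
#[cfg(test)]
fn bulletproofs_usage_sketch() {
  use rand_core::OsRng;
  let commitments = vec![Commitment::new(crate::random_scalar(&mut OsRng), 5)];
  // `plus: true` selects the Bulletproofs+ formulation
  let proof = Bulletproofs::prove(&mut OsRng, &commitments, true).unwrap();
  let points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
  assert!(proof.verify(&mut OsRng, &points));
}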


@@ -1,308 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::{ff::Field, Group};
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::BatchVerifier;
use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| inner_product(&ScalarVector(vec![Scalar::ONE; N]), TWO_N()))
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OriginalStruct {
pub(crate) A: DalekPoint,
pub(crate) S: DalekPoint,
pub(crate) T1: DalekPoint,
pub(crate) T2: DalekPoint,
pub(crate) taux: DalekScalar,
pub(crate) mu: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
pub(crate) a: DalekScalar,
pub(crate) b: DalekScalar,
pub(crate) t: DalekScalar,
}
impl OriginalStruct {
#[allow(clippy::many_single_char_names)]
pub(crate) fn prove<R: RngCore + CryptoRng>(rng: &mut R, commitments: &[Commitment]) -> Self {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_commitments(commitments_points.clone());
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let l0 = &aL - z;
let l1 = sL;
let mut zero_twos = Vec::with_capacity(MN);
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
let yMN = ScalarVector::powers(y, MN);
let r0 = (&(aR + z) * &yMN) + ScalarVector(zero_twos);
let r1 = yMN * sR;
let (T1, T2, x, mut taux) = {
let t1 = inner_product(&l0, &r1) + inner_product(&l1, &r0);
let t2 = inner_product(&l1, &r1);
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
let taux = (tau2 * (x * x)) + (tau1 * x);
tau1.zeroize();
tau2.zeroize();
(T1, T2, x, taux)
};
let mu = (x * rho) + alpha;
alpha.zeroize();
rho.zeroize();
for (i, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
taux += zpow[i + 2] * gamma;
}
let l = &l0 + &(l1 * x);
let r = &r0 + &(r1 * x);
let t = inner_product(&l, &r);
let x_ip =
hash_cache(&mut cache, &[x.to_bytes(), taux.to_bytes(), mu.to_bytes(), t.to_bytes()]);
let mut a = l;
let mut b = r;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = inner_product(&aL, &bR);
let cR = inner_product(&aR, &bL);
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let L_i = prove_multiexp(&LR_statements(&aL, G_R, &bR, H_L, cL, U));
let R_i = prove_multiexp(&LR_statements(&aR, G_L, &bL, H_R, cR, U));
L.push(*L_i);
R.push(*R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
a = (aL * w) + (aR * winv);
b = (bL * winv) + (bR * w);
if a.len() != 1 {
G_proof = hadamard_fold(G_L, G_R, winv, w);
H_proof = hadamard_fold(H_L, H_R, w, winv);
}
}
let res = Self {
A: *A,
S: *S,
T1: *T1,
T2: *T2,
taux: *taux,
mu: *mu,
L,
R,
a: *a[0],
b: *b[0],
t: *t,
};
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[allow(clippy::many_single_char_names)]
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let x = hash_cache(
&mut cache,
&[z.to_bytes(), self.T1.compress().to_bytes(), self.T2.compress().to_bytes()],
);
let x_ip = hash_cache(
&mut cache,
&[x.to_bytes(), self.taux.to_bytes(), self.mu.to_bytes(), self.t.to_bytes()],
);
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let T1 = normalize(&self.T1);
let T2 = normalize(&self.T2);
let A = normalize(&self.A);
let S = normalize(&self.S);
let commitments = commitments.iter().map(EdwardsPoint::mul_by_cofactor).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(4 + commitments.len());
let zpow = ScalarVector::powers(z, M + 3);
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
for (j, commitment) in commitments.iter().enumerate() {
proof.push((zpow[j + 2], *commitment));
}
proof.push((x, T1));
proof.push((x * x, T2));
verifier.queue(&mut *rng, id, proof);
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
let ypow = ScalarVector::powers(y, MN);
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}
for i in 0 .. logMN {
proof.push((w[i] * w[i], L[i]));
proof.push((winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}
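// Hedged sketch of batched verification: queue several proofs (paired with their commitment
// points) into one BatchVerifier, then settle the whole batch with a single variable-time
// multiexponentiation, amortizing the cost across proofs.
#[cfg(test)]
fn batch_verify_sketch(
  rng: &mut (impl RngCore + CryptoRng),
  proofs: &[(OriginalStruct, Vec<DalekPoint>)],
) -> bool {
  let mut verifier = BatchVerifier::new(proofs.len());
  for (id, (proof, commitments)) in proofs.iter().enumerate() {
    // A proof which isn't even sane never reaches the verifier
    if !proof.batch_verify(&mut *rng, &mut verifier, id, commitments) {
      return false;
    }
  }
  verifier.verify_vartime()
}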


@@ -1,300 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::ff::Field;
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::BatchVerifier;
use crate::{
Commitment, hash,
ringct::{hash_to_point::raw_hash_to_point, bulletproofs::core::*},
};
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
*TRANSCRIPT_CELL.get_or_init(|| {
EdwardsPoint(raw_hash_to_point(hash(b"bulletproof_plus_transcript"))).compress().to_bytes()
})
}
// TRANSCRIPT isn't a Scalar, so we need this alternative for the first hash
fn hash_plus<C: IntoIterator<Item = DalekPoint>>(commitments: C) -> (Scalar, Vec<EdwardsPoint>) {
let (cache, commitments) = hash_commitments(commitments);
(hash_to_scalar(&[TRANSCRIPT().as_ref(), &cache.to_bytes()].concat()), commitments)
}
// d[j*N+i] = z**(2*(j+1)) * 2**i
fn d(z: Scalar, M: usize, MN: usize) -> (ScalarVector, ScalarVector) {
let zpow = ScalarVector::even_powers(z, 2 * M);
let mut d = vec![Scalar::ZERO; MN];
for j in 0 .. M {
for i in 0 .. N {
d[(j * N) + i] = zpow[j] * TWO_N()[i];
}
}
(zpow, ScalarVector(d))
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct PlusStruct {
pub(crate) A: DalekPoint,
pub(crate) A1: DalekPoint,
pub(crate) B: DalekPoint,
pub(crate) r1: DalekScalar,
pub(crate) s1: DalekScalar,
pub(crate) d1: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
}
impl PlusStruct {
#[allow(clippy::many_single_char_names)]
pub(crate) fn prove<R: RngCore + CryptoRng>(rng: &mut R, commitments: &[Commitment]) -> Self {
let generators = GENERATORS();
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_plus(commitments_points.clone());
let (mut alpha1, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let (zpow, d) = d(z, M, MN);
let aL1 = aL - z;
let ypow = ScalarVector::powers(y, MN + 2);
let mut y_for_d = ScalarVector(ypow.0[1 ..= MN].to_vec());
y_for_d.0.reverse();
let aR1 = (aR + z) + (y_for_d * d);
for (j, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
alpha1 += zpow[j] * ypow[MN + 1] * gamma;
}
let mut a = aL1;
let mut b = aR1;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = weighted_inner_product(&aL, &bR, y);
let cR = weighted_inner_product(&(&aR * ypow[aR.len()]), &bL, y);
let (mut dL, mut dR) = (Scalar::random(&mut *rng), Scalar::random(&mut *rng));
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let mut L_i = LR_statements(&(&aL * yinvpow[aL.len()]), G_R, &bR, H_L, cL, H());
L_i.push((dL, G));
let L_i = prove_multiexp(&L_i);
L.push(*L_i);
let mut R_i = LR_statements(&(&aR * ypow[aR.len()]), G_L, &bL, H_R, cR, H());
R_i.push((dR, G));
let R_i = prove_multiexp(&R_i);
R.push(*R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
G_proof = hadamard_fold(G_L, G_R, winv, w * yinvpow[aL.len()]);
H_proof = hadamard_fold(H_L, H_R, w, winv);
a = (&aL * w) + (aR * (winv * ypow[aL.len()]));
b = (bL * winv) + (bR * w);
alpha1 += (dL * (w * w)) + (dR * (winv * winv));
dL.zeroize();
dR.zeroize();
}
let mut r = Scalar::random(&mut *rng);
let mut s = Scalar::random(&mut *rng);
let mut d = Scalar::random(&mut *rng);
let mut eta = Scalar::random(&mut *rng);
let A1 = prove_multiexp(&[
(r, G_proof[0]),
(s, H_proof[0]),
(d, G),
((r * y * b[0]) + (s * y * a[0]), H()),
]);
let B = prove_multiexp(&[(r * y * s, H()), (eta, G)]);
let e = hash_cache(&mut cache, &[A1.compress().to_bytes(), B.compress().to_bytes()]);
let r1 = (a[0] * e) + r;
r.zeroize();
let s1 = (b[0] * e) + s;
s.zeroize();
let d1 = ((d * e) + eta) + (alpha1 * (e * e));
d.zeroize();
eta.zeroize();
alpha1.zeroize();
let res = Self { A: *A, A1: *A1, B: *B, r1: *r1, s1: *s1, d1: *d1, L, R };
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[allow(clippy::many_single_char_names)]
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_plus(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes()]);
let yinv = y.invert().unwrap();
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
let e = hash_cache(&mut cache, &[self.A1.compress().to_bytes(), self.B.compress().to_bytes()]);
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let A = normalize(&self.A);
let A1 = normalize(&self.A1);
let B = normalize(&self.B);
// Verify it
let mut proof = Vec::with_capacity(logMN + 5 + (2 * (MN + logMN)));
let mut yMN = y;
for _ in 0 .. logMN {
yMN *= yMN;
}
let yMNy = yMN * y;
let (zpow, d) = d(z, M, MN);
let zsq = zpow[0];
let esq = e * e;
let minus_esq = -esq;
let commitment_weight = minus_esq * yMNy;
for (i, commitment) in commitments.iter().map(EdwardsPoint::mul_by_cofactor).enumerate() {
proof.push((commitment_weight * zpow[i], commitment));
}
// Invert B, instead of the Scalar, as the latter is only 2x as expensive yet enables reduction
// to a single addition under vartime for the first BP verified in the batch, which is expected
// to be much more significant
proof.push((Scalar::ONE, -B));
proof.push((-e, A1));
proof.push((minus_esq, A));
proof.push((Scalar(self.d1), G));
let d_sum = zpow.sum() * Scalar::from(u64::MAX);
let y_sum = weighted_powers(y, MN).sum();
proof.push((
Scalar(self.r1 * y.0 * self.s1) + (esq * ((yMNy * z * d_sum) + ((zsq - z) * y_sum))),
H(),
));
let w_cache = challenge_products(&w, &winv);
let mut e_r1_y = e * Scalar(self.r1);
let e_s1 = e * Scalar(self.s1);
let esq_z = esq * z;
let minus_esq_z = -esq_z;
let mut minus_esq_y = minus_esq * yMN;
let generators = GENERATORS();
for i in 0 .. MN {
proof.push((e_r1_y * w_cache[i] + esq_z, generators.G[i]));
proof.push((
(e_s1 * w_cache[(!i) & (MN - 1)]) + minus_esq_z + (minus_esq_y * d[i]),
generators.H[i],
));
e_r1_y *= yinv;
minus_esq_y *= yinv;
}
for i in 0 .. logMN {
proof.push((minus_esq * w[i] * w[i], L[i]));
proof.push((minus_esq * winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}


@@ -1,137 +0,0 @@
use core::ops::{Add, Sub, Mul, Index};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
macro_rules! math_op {
($Op: ident, $op: ident, $f: expr) => {
impl $Op<Scalar> for ScalarVector {
type Output = Self;
fn $op(self, b: Scalar) -> Self {
Self(self.0.iter().map(|a| $f((a, &b))).collect())
}
}
impl $Op<Scalar> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
ScalarVector(self.0.iter().map(|a| $f((a, &b))).collect())
}
}
impl $Op<ScalarVector> for ScalarVector {
type Output = Self;
fn $op(self, b: Self) -> Self {
debug_assert_eq!(self.len(), b.len());
Self(self.0.iter().zip(b.0.iter()).map($f).collect())
}
}
impl $Op<Self> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Self) -> ScalarVector {
debug_assert_eq!(self.len(), b.len());
ScalarVector(self.0.iter().zip(b.0.iter()).map($f).collect())
}
}
};
}
math_op!(Add, add, |(a, b): (&Scalar, &Scalar)| *a + *b);
math_op!(Sub, sub, |(a, b): (&Scalar, &Scalar)| *a - *b);
math_op!(Mul, mul, |(a, b): (&Scalar, &Scalar)| *a * *b);
impl ScalarVector {
pub(crate) fn new(len: usize) -> Self {
Self(vec![Scalar::ZERO; len])
}
pub(crate) fn powers(x: Scalar, len: usize) -> Self {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::ONE);
for i in 1 .. len {
res.push(res[i - 1] * x);
}
Self(res)
}
pub(crate) fn even_powers(x: Scalar, pow: usize) -> Self {
debug_assert!(pow != 0);
// Verify pow is a power of two
debug_assert_eq!(((pow - 1) & pow), 0);
let xsq = x * x;
let mut res = Self(Vec::with_capacity(pow / 2));
res.0.push(xsq);
let mut prev = 2;
while prev < pow {
res.0.push(res[res.len() - 1] * xsq);
prev += 2;
}
res
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(self) -> (Self, Self) {
let (l, r) = self.0.split_at(self.0.len() / 2);
(Self(l.to_vec()), Self(r.to_vec()))
}
}
impl Index<usize> for ScalarVector {
type Output = Scalar;
fn index(&self, index: usize) -> &Scalar {
&self.0[index]
}
}
pub(crate) fn inner_product(a: &ScalarVector, b: &ScalarVector) -> Scalar {
(a * b).sum()
}
pub(crate) fn weighted_powers(x: Scalar, len: usize) -> ScalarVector {
ScalarVector(ScalarVector::powers(x, len + 1).0[1 ..].to_vec())
}
pub(crate) fn weighted_inner_product(a: &ScalarVector, b: &ScalarVector, y: Scalar) -> Scalar {
// y ** 0 is not used as a power
(a * b * weighted_powers(y, a.len())).sum()
}
impl Mul<&[EdwardsPoint]> for &ScalarVector {
type Output = EdwardsPoint;
fn mul(self, b: &[EdwardsPoint]) -> EdwardsPoint {
debug_assert_eq!(self.len(), b.len());
multiexp(&self.0.iter().copied().zip(b.iter().copied()).collect::<Vec<_>>())
}
}
pub(crate) fn hadamard_fold(
l: &[EdwardsPoint],
r: &[EdwardsPoint],
a: Scalar,
b: Scalar,
) -> Vec<EdwardsPoint> {
let mut res = Vec::with_capacity(l.len() / 2);
for i in 0 .. l.len() {
res.push(multiexp(&[(a, l[i]), (b, r[i])]));
}
res
}
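// Hedged illustration of the helpers above: powers builds [1, x, x^2, ..], and the weighted
// inner product scales each a_i * b_i term by y^(i + 1), skipping y^0 per the comment on
// weighted_inner_product. With y = 1 it collapses to the plain inner product.
#[cfg(test)]
fn scalar_vector_sketch() {
  let xs = ScalarVector::powers(Scalar::from(3u8), 4); // [1, 3, 9, 27]
  assert_eq!(xs[3], Scalar::from(27u8));
  let a = ScalarVector(vec![Scalar::from(2u8), Scalar::from(5u8)]);
  let b = ScalarVector(vec![Scalar::from(7u8), Scalar::from(11u8)]);
  // (2 * 7) + (5 * 11) = 69
  assert_eq!(inner_product(&a, &b), Scalar::from(69u8));
  assert_eq!(weighted_inner_product(&a, &b, Scalar::ONE), Scalar::from(69u8));
}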


@@ -1,325 +0,0 @@
#![allow(non_snake_case)]
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use subtle::{ConstantTimeEq, Choice, CtOption};
use rand_core::{RngCore, CryptoRng};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
traits::{IsIdentity, VartimePrecomputedMultiscalarMul},
edwards::{EdwardsPoint, VartimeEdwardsPrecomputation},
};
use crate::{
INV_EIGHT, Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys,
ringct::hash_to_point, serialize::*,
};
#[cfg(feature = "multisig")]
mod multisig;
#[cfg(feature = "multisig")]
pub use multisig::{ClsagDetails, ClsagAddendum, ClsagMultisig};
#[cfg(feature = "multisig")]
pub(crate) use multisig::add_key_image_share;
/// Errors returned when CLSAG signing fails.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum ClsagError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[cfg_attr(feature = "std", error("invalid ring member (member {0}, ring size {1})"))]
InvalidRingMember(u8, u8),
#[cfg_attr(feature = "std", error("invalid commitment"))]
InvalidCommitment,
#[cfg_attr(feature = "std", error("invalid key image"))]
InvalidImage,
#[cfg_attr(feature = "std", error("invalid D"))]
InvalidD,
#[cfg_attr(feature = "std", error("invalid s"))]
InvalidS,
#[cfg_attr(feature = "std", error("invalid c1"))]
InvalidC1,
}
/// Input being signed for.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagInput {
// The actual commitment for the true spend
pub(crate) commitment: Commitment,
// True spend index, offsets, and ring
pub(crate) decoys: Decoys,
}
impl ClsagInput {
pub fn new(commitment: Commitment, decoys: Decoys) -> Result<Self, ClsagError> {
let n = decoys.len();
if n > u8::MAX.into() {
Err(ClsagError::InternalError("max ring size in this library is u8 max"))?;
}
let n = u8::try_from(n).unwrap();
if decoys.i >= n {
Err(ClsagError::InvalidRingMember(decoys.i, n))?;
}
// Validate the commitment matches
if decoys.ring[usize::from(decoys.i)][1] != commitment.calculate() {
Err(ClsagError::InvalidCommitment)?;
}
Ok(Self { commitment, decoys })
}
}
#[allow(clippy::large_enum_variant)]
enum Mode {
Sign(usize, EdwardsPoint, EdwardsPoint),
Verify(Scalar),
}
// Core of the CLSAG algorithm, applicable to both sign and verify with minimal differences
// Said differences are covered via the above Mode
fn core(
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
D: &EdwardsPoint,
s: &[Scalar],
A_c1: Mode,
) -> ((EdwardsPoint, Scalar, Scalar), Scalar) {
let n = ring.len();
let images_precomp = VartimeEdwardsPrecomputation::new([I, D]);
let D = D * INV_EIGHT();
// Generate the transcript
// Instead of generating multiple, a single transcript is created and then edited as needed
const PREFIX: &[u8] = b"CLSAG_";
#[rustfmt::skip]
const AGG_0: &[u8] = b"agg_0";
#[rustfmt::skip]
const ROUND: &[u8] = b"round";
const PREFIX_AGG_0_LEN: usize = PREFIX.len() + AGG_0.len();
let mut to_hash = Vec::with_capacity(((2 * n) + 5) * 32);
to_hash.extend(PREFIX);
to_hash.extend(AGG_0);
to_hash.extend([0; 32 - PREFIX_AGG_0_LEN]);
let mut P = Vec::with_capacity(n);
for member in ring {
P.push(member[0]);
to_hash.extend(member[0].compress().to_bytes());
}
let mut C = Vec::with_capacity(n);
for member in ring {
C.push(member[1] - pseudo_out);
to_hash.extend(member[1].compress().to_bytes());
}
to_hash.extend(I.compress().to_bytes());
to_hash.extend(D.compress().to_bytes());
to_hash.extend(pseudo_out.compress().to_bytes());
// mu_P with agg_0
let mu_P = hash_to_scalar(&to_hash);
// mu_C with agg_1
to_hash[PREFIX_AGG_0_LEN - 1] = b'1';
let mu_C = hash_to_scalar(&to_hash);
// Truncate it for the round transcript, altering the DST as needed
to_hash.truncate(((2 * n) + 1) * 32);
for i in 0 .. ROUND.len() {
to_hash[PREFIX.len() + i] = ROUND[i];
}
// Unfortunately, it's I D pseudo_out instead of pseudo_out I D, meaning this needs to be
// truncated just to add it back
to_hash.extend(pseudo_out.compress().to_bytes());
to_hash.extend(msg);
// Configure the loop based on if we're signing or verifying
let start;
let end;
let mut c;
match A_c1 {
Mode::Sign(r, A, AH) => {
start = r + 1;
end = r + n;
to_hash.extend(A.compress().to_bytes());
to_hash.extend(AH.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
Mode::Verify(c1) => {
start = 0;
end = n;
c = c1;
}
}
// Perform the core loop
let mut c1 = CtOption::new(Scalar::zero(), Choice::from(0));
for i in (start .. end).map(|i| i % n) {
// This will only execute once and shouldn't need to be constant time. Making it constant
// time, however, removes the risk of branch prediction creating timing differences depending
// on the ring index
c1 = c1.or_else(|| CtOption::new(c, i.ct_eq(&0)));
let c_p = mu_P * c;
let c_c = mu_C * c;
let L = (&s[i] * &ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(P[i]);
// Shouldn't be an issue as all of the variables in this vartime statement are public
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul([c_p, c_c]);
to_hash.truncate(((2 * n) + 3) * 32);
to_hash.extend(L.compress().to_bytes());
to_hash.extend(R.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
// This first tuple is needed to continue signing, the latter is the c to be tested/worked with
((D, c * mu_P, c * mu_C), c1.unwrap_or(c))
}
/// CLSAG signature, as used in Monero.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Clsag {
pub D: EdwardsPoint,
pub s: Vec<Scalar>,
pub c1: Scalar,
}
impl Clsag {
// Sign core is the extension of core as needed for signing, yet is shared between single signer
// and multisig, hence why it's still core
#[allow(clippy::many_single_char_names)]
pub(crate) fn sign_core<R: RngCore + CryptoRng>(
rng: &mut R,
I: &EdwardsPoint,
input: &ClsagInput,
mask: Scalar,
msg: &[u8; 32],
A: EdwardsPoint,
AH: EdwardsPoint,
) -> (Self, EdwardsPoint, Scalar, Scalar) {
let r: usize = input.decoys.i.into();
let pseudo_out = Commitment::new(mask, input.commitment.amount).calculate();
let z = input.commitment.mask - mask;
let H = hash_to_point(input.decoys.ring[r][0]);
let D = H * z;
let mut s = Vec::with_capacity(input.decoys.ring.len());
for _ in 0 .. input.decoys.ring.len() {
s.push(random_scalar(rng));
}
let ((D, p, c), c1) =
core(&input.decoys.ring, I, &pseudo_out, msg, &D, &s, Mode::Sign(r, A, AH));
(Self { D, s, c1 }, pseudo_out, p, c * z)
}
/// Generate CLSAG signatures for the given inputs.
/// inputs is of the form (private key, key image, input).
/// sum_outputs is for the sum of the outputs' commitment masks.
pub fn sign<R: RngCore + CryptoRng>(
rng: &mut R,
mut inputs: Vec<(Zeroizing<Scalar>, EdwardsPoint, ClsagInput)>,
sum_outputs: Scalar,
msg: [u8; 32],
) -> Vec<(Self, EdwardsPoint)> {
let mut res = Vec::with_capacity(inputs.len());
let mut sum_pseudo_outs = Scalar::zero();
for i in 0 .. inputs.len() {
let mask = if i == (inputs.len() - 1) {
sum_outputs - sum_pseudo_outs
} else {
let mask = random_scalar(rng);
sum_pseudo_outs += mask;
mask
};
let mut nonce = Zeroizing::new(random_scalar(rng));
let (mut clsag, pseudo_out, p, c) = Self::sign_core(
rng,
&inputs[i].1,
&inputs[i].2,
mask,
&msg,
nonce.deref() * &ED25519_BASEPOINT_TABLE,
nonce.deref() *
hash_to_point(inputs[i].2.decoys.ring[usize::from(inputs[i].2.decoys.i)][0]),
);
clsag.s[usize::from(inputs[i].2.decoys.i)] =
(-((p * inputs[i].0.deref()) + c)) + nonce.deref();
inputs[i].0.zeroize();
nonce.zeroize();
debug_assert!(clsag
.verify(&inputs[i].2.decoys.ring, &inputs[i].1, &pseudo_out, &msg)
.is_ok());
res.push((clsag, pseudo_out));
}
res
}
/// Verify the CLSAG signature against the given Transaction data.
pub fn verify(
&self,
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
) -> Result<(), ClsagError> {
// Preliminary checks. s, c1, and points must also be encoded canonically, which isn't checked
// here
if ring.is_empty() {
Err(ClsagError::InvalidRing)?;
}
if ring.len() != self.s.len() {
Err(ClsagError::InvalidS)?;
}
if I.is_identity() {
Err(ClsagError::InvalidImage)?;
}
let D = self.D.mul_by_cofactor();
if D.is_identity() {
Err(ClsagError::InvalidD)?;
}
let (_, c1) = core(ring, I, pseudo_out, msg, &D, &self.s, Mode::Verify(self.c1));
if c1 != self.c1 {
Err(ClsagError::InvalidC1)?;
}
Ok(())
}
pub(crate) fn fee_weight(ring_len: usize) -> usize {
(ring_len * 32) + 32 + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_raw_vec(write_scalar, &self.s, w)?;
w.write_all(&self.c1.to_bytes())?;
write_point(&self.D, w)
}
pub fn read<R: Read>(decoys: usize, r: &mut R) -> io::Result<Self> {
Ok(Self { s: read_raw_vec(read_scalar, decoys, r)?, c1: read_scalar(r)?, D: read_point(r)? })
}
}
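// Hedged end-to-end sketch which elides decoy selection entirely: callers are assumed to pass
// a secret x, the Commitment actually spent, and a Decoys whose real-spend slot holds
// (xG, commitment.calculate()). Sign a single input, then verify the resulting signature.
#[cfg(test)]
fn clsag_sketch(
  rng: &mut (impl RngCore + CryptoRng),
  x: Zeroizing<Scalar>,
  commitment: Commitment,
  decoys: Decoys,
) {
  let image = crate::ringct::generate_key_image(&x);
  let input = ClsagInput::new(commitment, decoys).unwrap();
  let msg = [0; 32];
  // With a single input, its pseudo-out mask is forced to sum_outputs
  let sum_outputs = random_scalar(&mut *rng);
  let mut sigs = Clsag::sign(&mut *rng, vec![(x, image, input.clone())], sum_outputs, msg);
  let (clsag, pseudo_out) = sigs.swap_remove(0);
  clsag.verify(&input.decoys.ring, &image, &pseudo_out, &msg).unwrap();
}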


@@ -1,311 +0,0 @@
use core::{ops::Deref, fmt::Debug};
use std_shims::{
sync::Arc,
io::{self, Read, Write},
};
use std::sync::RwLock;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{
traits::{Identity, IsIdentity},
scalar::Scalar,
edwards::EdwardsPoint,
};
use group::{ff::Field, Group, GroupEncoding};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group as dfg;
use dleq::DLEqProof;
use frost::{
dkg::lagrange,
curve::Ed25519,
Participant, FrostError, ThresholdKeys, ThresholdView,
algorithm::{WriteAddendum, Algorithm},
};
use crate::ringct::{
hash_to_point,
clsag::{ClsagInput, Clsag},
};
fn dleq_transcript() -> RecommendedTranscript {
RecommendedTranscript::new(b"monero_key_image_dleq")
}
impl ClsagInput {
fn transcript<T: Transcript>(&self, transcript: &mut T) {
// Doesn't domain separate as this is considered part of the larger CLSAG proof
// Ring index
transcript.append_message(b"real_spend", [self.decoys.i]);
// Ring
for (i, pair) in self.decoys.ring.iter().enumerate() {
// Doesn't include global output indexes as CLSAG doesn't care and won't be affected by it
// They're just an unreliable reference to this data which will be included in the message
// if in use
transcript.append_message(b"member", [u8::try_from(i).expect("ring size exceeded 255")]);
transcript.append_message(b"key", pair[0].compress().to_bytes());
transcript.append_message(b"commitment", pair[1].compress().to_bytes());
}
// Doesn't include the commitment's parts as the above ring + index includes the commitment
// The only potential malleability would be if the G/H relationship is known breaking the
// discrete log problem, which breaks everything already
}
}
/// CLSAG input and the mask to use for it.
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagDetails {
input: ClsagInput,
mask: Scalar,
}
impl ClsagDetails {
pub fn new(input: ClsagInput, mask: Scalar) -> Self {
Self { input, mask }
}
}
/// Addendum produced during the FROST signing process with relevant data.
#[derive(Clone, PartialEq, Eq, Zeroize, Debug)]
pub struct ClsagAddendum {
pub(crate) key_image: dfg::EdwardsPoint,
dleq: DLEqProof<dfg::EdwardsPoint>,
}
impl WriteAddendum for ClsagAddendum {
fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(self.key_image.compress().to_bytes().as_ref())?;
self.dleq.write(writer)
}
}
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug)]
struct Interim {
p: Scalar,
c: Scalar,
clsag: Clsag,
pseudo_out: EdwardsPoint,
}
/// FROST algorithm for producing a CLSAG signature.
#[allow(non_snake_case)]
#[derive(Clone, Debug)]
pub struct ClsagMultisig {
transcript: RecommendedTranscript,
pub(crate) H: EdwardsPoint,
// Merged here as CLSAG needs it, passing it would be a mess, yet having it beforehand requires
// an extra round
image: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
msg: Option<[u8; 32]>,
interim: Option<Interim>,
}
impl ClsagMultisig {
pub fn new(
transcript: RecommendedTranscript,
output_key: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
) -> Self {
Self {
transcript,
H: hash_to_point(output_key),
image: EdwardsPoint::identity(),
details,
msg: None,
interim: None,
}
}
fn input(&self) -> ClsagInput {
(*self.details.read().unwrap()).as_ref().unwrap().input.clone()
}
fn mask(&self) -> Scalar {
(*self.details.read().unwrap()).as_ref().unwrap().mask
}
}
pub(crate) fn add_key_image_share(
image: &mut EdwardsPoint,
generator: EdwardsPoint,
offset: Scalar,
included: &[Participant],
participant: Participant,
share: EdwardsPoint,
) {
if image.is_identity() {
*image = generator * offset;
}
*image += share * lagrange::<dfg::Scalar>(participant, included).0;
}
impl Algorithm<Ed25519> for ClsagMultisig {
type Transcript = RecommendedTranscript;
type Addendum = ClsagAddendum;
type Signature = (Clsag, EdwardsPoint);
fn nonces(&self) -> Vec<Vec<dfg::EdwardsPoint>> {
vec![vec![dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)]]
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Ed25519>,
) -> ClsagAddendum {
ClsagAddendum {
key_image: dfg::EdwardsPoint(self.H) * keys.secret_share().deref(),
dleq: DLEqProof::prove(
rng,
// Doesn't take in a larger transcript object due to the usage of this
// Every prover would immediately write their own DLEq proof, when they can only do so in
// the proper order if they want to reach consensus
// It'd be a poor API to have CLSAG define a new transcript solely to pass here, just to
// try to merge later in some form, when it should instead just merge xH (as it does)
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
keys.secret_share(),
),
}
}
fn read_addendum<R: Read>(&self, reader: &mut R) -> io::Result<ClsagAddendum> {
let mut bytes = [0; 32];
reader.read_exact(&mut bytes)?;
// dfg ensures the point is torsion free
let xH = Option::<dfg::EdwardsPoint>::from(dfg::EdwardsPoint::from_bytes(&bytes))
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid key image"))?;
// Ensure this is a canonical point
if xH.to_bytes() != bytes {
Err(io::Error::new(io::ErrorKind::Other, "non-canonical key image"))?;
}
Ok(ClsagAddendum { key_image: xH, dleq: DLEqProof::<dfg::EdwardsPoint>::read(reader)? })
}
fn process_addendum(
&mut self,
view: &ThresholdView<Ed25519>,
l: Participant,
addendum: ClsagAddendum,
) -> Result<(), FrostError> {
if self.image.is_identity() {
self.transcript.domain_separate(b"CLSAG");
self.input().transcript(&mut self.transcript);
self.transcript.append_message(b"mask", self.mask().to_bytes());
}
self.transcript.append_message(b"participant", l.to_bytes());
addendum
.dleq
.verify(
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
&[view.original_verification_share(l), addendum.key_image],
)
.map_err(|_| FrostError::InvalidPreprocess(l))?;
self.transcript.append_message(b"key_image_share", addendum.key_image.compress().to_bytes());
add_key_image_share(
&mut self.image,
self.H,
view.offset().0,
view.included(),
l,
addendum.key_image.0,
);
Ok(())
}
fn transcript(&mut self) -> &mut Self::Transcript {
&mut self.transcript
}
fn sign_share(
&mut self,
view: &ThresholdView<Ed25519>,
nonce_sums: &[Vec<dfg::EdwardsPoint>],
nonces: Vec<Zeroizing<dfg::Scalar>>,
msg: &[u8],
) -> dfg::Scalar {
// Use the transcript to get a seeded random number generator
// The transcript contains private data, preventing passive adversaries from recreating this
// process even if they have access to commitments (specifically, the ring index being signed
// for, along with the mask, which requires not only knowing the shared keys but also the
// input commitment masks)
let mut rng = ChaCha20Rng::from_seed(self.transcript.rng_seed(b"decoy_responses"));
self.msg = Some(msg.try_into().expect("CLSAG message should be 32-bytes"));
#[allow(non_snake_case)]
let (clsag, pseudo_out, p, c) = Clsag::sign_core(
&mut rng,
&self.image,
&self.input(),
self.mask(),
self.msg.as_ref().unwrap(),
nonce_sums[0][0].0,
nonce_sums[0][1].0,
);
self.interim = Some(Interim { p, c, clsag, pseudo_out });
(-(dfg::Scalar(p) * view.secret_share().deref())) + nonces[0].deref()
}
#[must_use]
fn verify(
&self,
_: dfg::EdwardsPoint,
_: &[Vec<dfg::EdwardsPoint>],
sum: dfg::Scalar,
) -> Option<Self::Signature> {
let interim = self.interim.as_ref().unwrap();
let mut clsag = interim.clsag.clone();
clsag.s[usize::from(self.input().decoys.i)] = sum.0 - interim.c;
if clsag
.verify(
&self.input().decoys.ring,
&self.image,
&interim.pseudo_out,
self.msg.as_ref().unwrap(),
)
.is_ok()
{
return Some((clsag, interim.pseudo_out));
}
None
}
fn verify_share(
&self,
verification_share: dfg::EdwardsPoint,
nonces: &[Vec<dfg::EdwardsPoint>],
share: dfg::Scalar,
) -> Result<Vec<(dfg::Scalar, dfg::EdwardsPoint)>, ()> {
let interim = self.interim.as_ref().unwrap();
Ok(vec![
(share, dfg::EdwardsPoint::generator()),
(dfg::Scalar(interim.p), verification_share),
(-dfg::Scalar::ONE, nonces[0][0]),
])
}
}


@@ -1,8 +0,0 @@
use curve25519_dalek::edwards::EdwardsPoint;
pub use monero_generators::{hash_to_point as raw_hash_to_point};
/// Monero's hash to point function, as named `ge_fromfe_frombytes_vartime`.
pub fn hash_to_point(key: EdwardsPoint) -> EdwardsPoint {
raw_hash_to_point(key.compress().to_bytes())
}


@@ -1,72 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::scalar::Scalar;
#[cfg(feature = "experimental")]
use curve25519_dalek::edwards::EdwardsPoint;
use crate::serialize::*;
#[cfg(feature = "experimental")]
use crate::{hash_to_scalar, ringct::hash_to_point};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Mlsag {
pub ss: Vec<[Scalar; 2]>,
pub cc: Scalar,
}
impl Mlsag {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for ss in &self.ss {
write_raw_vec(write_scalar, ss, w)?;
}
write_scalar(&self.cc, w)
}
pub fn read<R: Read>(mixins: usize, r: &mut R) -> io::Result<Self> {
Ok(Self {
ss: (0 .. mixins).map(|_| read_array(read_scalar, r)).collect::<Result<_, _>>()?,
cc: read_scalar(r)?,
})
}
#[cfg(feature = "experimental")]
#[must_use]
pub fn verify(
&self,
msg: &[u8; 32],
ring: &[[EdwardsPoint; 2]],
key_image: &EdwardsPoint,
) -> bool {
if ring.is_empty() {
return false;
}
let mut buf = Vec::with_capacity(6 * 32);
let mut ci = self.cc;
for (i, ring_member) in ring.iter().enumerate() {
buf.extend_from_slice(msg);
#[allow(non_snake_case)]
let L =
|r| EdwardsPoint::vartime_double_scalar_mul_basepoint(&ci, &ring_member[r], &self.ss[i][r]);
buf.extend_from_slice(ring_member[0].compress().as_bytes());
buf.extend_from_slice(L(0).compress().as_bytes());
#[allow(non_snake_case)]
let R = (self.ss[i][0] * hash_to_point(ring_member[0])) + (ci * key_image);
buf.extend_from_slice(R.compress().as_bytes());
buf.extend_from_slice(ring_member[1].compress().as_bytes());
buf.extend_from_slice(L(1).compress().as_bytes());
ci = hash_to_scalar(&buf);
buf.clear();
}
ci == self.cc
}
}


@@ -1,383 +0,0 @@
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub(crate) mod hash_to_point;
pub use hash_to_point::{raw_hash_to_point, hash_to_point};
/// MLSAG struct, along with verifying functionality.
pub mod mlsag;
/// CLSAG struct, along with signing and verifying functionality.
pub mod clsag;
/// BorromeanRange struct, along with verifying functionality.
pub mod borromean;
/// Bulletproofs(+) structs, along with proving and verifying functionality.
pub mod bulletproofs;
use crate::{
Protocol,
serialize::*,
ringct::{mlsag::Mlsag, clsag::Clsag, borromean::BorromeanRange, bulletproofs::Bulletproofs},
};
/// Generate a key image for a given key. Defined as `x * hash_to_point(xG)`.
pub fn generate_key_image(secret: &Zeroizing<Scalar>) -> EdwardsPoint {
hash_to_point(&ED25519_BASEPOINT_TABLE * secret.deref()) * secret.deref()
}
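// Hedged illustration, assuming `rand_core::OsRng` and `crate::random_scalar` are available:
// the key image is a pure function of the secret, so the same secret always yields the same
// image, which is what lets verifiers detect double-spends across otherwise unlinkable ring
// signatures.
#[cfg(test)]
fn key_image_is_deterministic() {
  use rand_core::OsRng;
  let secret = Zeroizing::new(crate::random_scalar(&mut OsRng));
  assert_eq!(generate_key_image(&secret), generate_key_image(&secret));
}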
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EncryptedAmount {
Original { mask: [u8; 32], amount: [u8; 32] },
Compact { amount: [u8; 8] },
}
impl EncryptedAmount {
pub fn read<R: Read>(compact: bool, r: &mut R) -> io::Result<Self> {
Ok(if compact {
Self::Compact { amount: read_bytes(r)? }
} else {
Self::Original { mask: read_bytes(r)?, amount: read_bytes(r)? }
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::Original { mask, amount } => {
w.write_all(mask)?;
w.write_all(amount)
}
Self::Compact { amount } => w.write_all(amount),
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum RctType {
/// No RCT proofs.
Null,
/// One MLSAG for a single input and a Borromean range proof (RCTTypeFull).
MlsagAggregate,
/// One MLSAG for each input and a Borromean range proof (RCTTypeSimple).
MlsagIndividual,
/// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
Bulletproofs,
/// One MLSAG for each input and a Bulletproof, yet starting to use EncryptedAmount::Compact
/// (RCTTypeBulletproof2).
BulletproofsCompactAmount,
/// One CLSAG for each input and a Bulletproof (RCTTypeCLSAG).
Clsag,
/// One CLSAG for each input and a Bulletproof+ (RCTTypeBulletproofPlus).
BulletproofsPlus,
}
impl RctType {
pub fn to_byte(self) -> u8 {
match self {
Self::Null => 0,
Self::MlsagAggregate => 1,
Self::MlsagIndividual => 2,
Self::Bulletproofs => 3,
Self::BulletproofsCompactAmount => 4,
Self::Clsag => 5,
Self::BulletproofsPlus => 6,
}
}
pub fn from_byte(byte: u8) -> Option<Self> {
Some(match byte {
0 => Self::Null,
1 => Self::MlsagAggregate,
2 => Self::MlsagIndividual,
3 => Self::Bulletproofs,
4 => Self::BulletproofsCompactAmount,
5 => Self::Clsag,
6 => Self::BulletproofsPlus,
_ => None?,
})
}
pub fn compact_encrypted_amounts(&self) -> bool {
match self {
Self::Null | Self::MlsagAggregate | Self::MlsagIndividual | Self::Bulletproofs => false,
Self::BulletproofsCompactAmount | Self::Clsag | Self::BulletproofsPlus => true,
}
}
}
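// Hedged sanity sketch: to_byte and from_byte are inverses over the range this library
// recognizes (0 ..= 6), and anything higher is rejected.
#[cfg(test)]
fn rct_type_round_trip() {
  for byte in 0u8 ..= 6 {
    assert_eq!(RctType::from_byte(byte).unwrap().to_byte(), byte);
  }
  assert!(RctType::from_byte(7).is_none());
}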
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctBase {
pub fee: u64,
pub pseudo_outs: Vec<EdwardsPoint>,
pub encrypted_amounts: Vec<EncryptedAmount>,
pub commitments: Vec<EdwardsPoint>,
}
impl RctBase {
pub(crate) fn fee_weight(outputs: usize) -> usize {
1 + 8 + (outputs * (8 + 32))
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
w.write_all(&[rct_type.to_byte()])?;
match rct_type {
RctType::Null => Ok(()),
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
write_varint(&self.fee, w)?;
if rct_type == RctType::MlsagIndividual {
write_raw_vec(write_point, &self.pseudo_outs, w)?;
}
for encrypted_amount in &self.encrypted_amounts {
encrypted_amount.write(w)?;
}
write_raw_vec(write_point, &self.commitments, w)
}
}
}
pub fn read<R: Read>(inputs: usize, outputs: usize, r: &mut R) -> io::Result<(Self, RctType)> {
let rct_type = RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid RCT type"))?;
match rct_type {
RctType::Null | RctType::MlsagAggregate | RctType::MlsagIndividual => {}
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
if outputs == 0 {
// Because the Bulletproofs(+) layout must be canonical, there must be 1 Bulletproof if
// Bulletproofs are in use
// If there are Bulletproofs, there must be a matching amount of outputs, implicitly
// banning 0 outputs
// Since HF 12 (CLSAG being 13), a 2-output minimum has also been enforced
Err(io::Error::new(io::ErrorKind::Other, "RCT with Bulletproofs(+) had 0 outputs"))?;
}
}
}
Ok((
if rct_type == RctType::Null {
Self { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] }
} else {
Self {
fee: read_varint(r)?,
pseudo_outs: if rct_type == RctType::MlsagIndividual {
read_raw_vec(read_point, inputs, r)?
} else {
vec![]
},
encrypted_amounts: (0 .. outputs)
.map(|_| EncryptedAmount::read(rct_type.compact_encrypted_amounts(), r))
.collect::<Result<_, _>>()?,
commitments: read_raw_vec(read_point, outputs, r)?,
}
},
rct_type,
))
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RctPrunable {
Null,
MlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsags: Vec<Mlsag>,
},
MlsagBulletproofs {
bulletproofs: Bulletproofs,
mlsags: Vec<Mlsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
Clsag {
bulletproofs: Bulletproofs,
clsags: Vec<Clsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
}
impl RctPrunable {
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
1 + Bulletproofs::fee_weight(protocol.bp_plus(), outputs) +
(inputs * (Clsag::fee_weight(protocol.ring_len()) + 32))
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
match self {
Self::Null => Ok(()),
Self::MlsagBorromean { borromean, mlsags } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
write_raw_vec(Mlsag::write, mlsags, w)
}
Self::MlsagBulletproofs { bulletproofs, mlsags, pseudo_outs } => {
if rct_type == RctType::Bulletproofs {
w.write_all(&1u32.to_le_bytes())?;
} else {
w.write_all(&[1])?;
}
bulletproofs.write(w)?;
write_raw_vec(Mlsag::write, mlsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
Self::Clsag { bulletproofs, clsags, pseudo_outs } => {
w.write_all(&[1])?;
bulletproofs.write(w)?;
write_raw_vec(Clsag::write, clsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
}
}
pub fn serialize(&self, rct_type: RctType) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized, rct_type).unwrap();
serialized
}
pub fn read<R: Read>(
rct_type: RctType,
decoys: &[usize],
outputs: usize,
r: &mut R,
) -> io::Result<Self> {
Ok(match rct_type {
RctType::Null => Self::Null,
RctType::MlsagAggregate | RctType::MlsagIndividual => Self::MlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsags: decoys.iter().map(|d| Mlsag::read(*d, r)).collect::<Result<_, _>>()?,
},
RctType::Bulletproofs | RctType::BulletproofsCompactAmount => Self::MlsagBulletproofs {
bulletproofs: {
if (if rct_type == RctType::Bulletproofs {
u64::from(read_u32(r)?)
} else {
read_varint(r)?
}) != 1
{
Err(io::Error::new(io::ErrorKind::Other, "n bulletproofs instead of one"))?;
}
Bulletproofs::read(r)?
},
mlsags: decoys.iter().map(|d| Mlsag::read(*d, r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
},
RctType::Clsag | RctType::BulletproofsPlus => Self::Clsag {
bulletproofs: {
if read_varint(r)? != 1 {
Err(io::Error::new(io::ErrorKind::Other, "n bulletproofs instead of one"))?;
}
(if rct_type == RctType::Clsag { Bulletproofs::read } else { Bulletproofs::read_plus })(
r,
)?
},
clsags: (0 .. decoys.len()).map(|o| Clsag::read(decoys[o], r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
},
})
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::Null => panic!("Serializing RctPrunable::Null for a signature"),
Self::MlsagBorromean { borromean, .. } => borromean.iter().try_for_each(|rs| rs.write(w)),
Self::MlsagBulletproofs { bulletproofs, .. } => bulletproofs.signature_write(w),
Self::Clsag { bulletproofs, .. } => bulletproofs.signature_write(w),
}
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctSignatures {
pub base: RctBase,
pub prunable: RctPrunable,
}
impl RctSignatures {
/// RctType for a given RctSignatures struct.
pub fn rct_type(&self) -> RctType {
match &self.prunable {
RctPrunable::Null => RctType::Null,
RctPrunable::MlsagBorromean { .. } => {
/*
This type of RctPrunable may have no outputs, yet pseudo_outs are per input
This will only be a valid RctSignatures if it's for a TX with inputs
That makes this valid for any valid RctSignatures
While it will be invalid for any invalid RctSignatures, potentially letting an invalid
MlsagAggregate be interpreted as a valid MlsagIndividual (or vice versa), they have
incompatible deserializations
This means it's impossible to receive a MlsagAggregate over the wire and interpret it
as a MlsagIndividual (or vice versa)
That only makes manual manipulation unsafe, which will always be true since these fields
are all pub
TODO: Consider making them private with read-only accessors?
*/
if self.base.pseudo_outs.is_empty() {
RctType::MlsagAggregate
} else {
RctType::MlsagIndividual
}
}
// RctBase ensures there's at least one output, making the following
// inferences guaranteed/expects impossible on any valid RctSignatures
RctPrunable::MlsagBulletproofs { .. } => {
if matches!(
self
.base
.encrypted_amounts
.get(0)
.expect("MLSAG with Bulletproofs didn't have any outputs"),
EncryptedAmount::Original { .. }
) {
RctType::Bulletproofs
} else {
RctType::BulletproofsCompactAmount
}
}
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs, Bulletproofs::Original { .. }) {
RctType::Clsag
} else {
RctType::BulletproofsPlus
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
RctBase::fee_weight(outputs) + RctPrunable::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
let rct_type = self.rct_type();
self.base.write(w, rct_type)?;
self.prunable.write(w, rct_type)
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(decoys: Vec<usize>, outputs: usize, r: &mut R) -> io::Result<Self> {
let base = RctBase::read(decoys.len(), outputs, r)?;
Ok(Self { base: base.0, prunable: RctPrunable::read(base.1, &decoys, outputs, r)? })
}
}
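// Hedged round-trip sketch: Null RCT signatures carry no proofs, so serialization is just the
// type byte and reading back only needs counts consistent with an empty body.
#[cfg(test)]
fn rct_null_round_trip() {
  let rct = RctSignatures {
    base: RctBase { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] },
    prunable: RctPrunable::Null,
  };
  let bytes = rct.serialize();
  assert_eq!(bytes, vec![0]);
  assert_eq!(RctSignatures::read(vec![], 0, &mut bytes.as_slice()).unwrap(), rct);
}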


@@ -1,91 +0,0 @@
use async_trait::async_trait;
use digest_auth::AuthContext;
use reqwest::Client;
use crate::rpc::{RpcError, RpcConnection, Rpc};
#[derive(Clone, Debug)]
pub struct HttpRpc {
client: Client,
userpass: Option<(String, String)>,
url: String,
}
impl HttpRpc {
/// Create a new HTTP(S) RPC connection.
///
/// A daemon requiring authentication can be used by including the username and password in the
/// URL.
pub fn new(mut url: String) -> Result<Rpc<Self>, RpcError> {
// Parse out the username and password
let userpass = if url.contains('@') {
let url_clone = url;
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::InvalidNode)?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
Some((split_userpass[0].to_string(), split_userpass[1].to_string()))
} else {
None
};
Ok(Rpc(Self { client: Client::new(), userpass, url }))
}
}
#[async_trait]
impl RpcConnection for HttpRpc {
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
let mut builder = self.client.post(self.url.clone() + "/" + route).body(body);
if let Some((user, pass)) = &self.userpass {
let req = self.client.post(&self.url).send().await.map_err(|_| RpcError::InvalidNode)?;
// Only provide authentication if this daemon actually expects it
if let Some(header) = req.headers().get("www-authenticate") {
builder = builder.header(
"Authorization",
digest_auth::parse(header.to_str().map_err(|_| RpcError::InvalidNode)?)
.map_err(|_| RpcError::InvalidNode)?
.respond(&AuthContext::new_post::<_, _, _, &[u8]>(
user,
pass,
"/".to_string() + route,
None,
))
.map_err(|_| RpcError::InvalidNode)?
.to_header_string(),
);
}
}
Ok(
builder
.send()
.await
.map_err(|_| RpcError::ConnectionError)?
.bytes()
.await
.map_err(|_| RpcError::ConnectionError)?
.slice(..)
.to_vec(),
)
}
}
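// Hedged usage sketch: the URL parsing above accepts credentials inline, so both forms below
// are expected to construct a connection (the hostname is purely illustrative; no request is
// made until a call is performed).
#[cfg(test)]
fn http_rpc_sketch() {
  let _plain = HttpRpc::new("http://node.example:18081".to_string()).unwrap();
  let _authed = HttpRpc::new("http://user:pass@node.example:18081".to_string()).unwrap();
}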


@@ -1,617 +0,0 @@
use core::fmt::Debug;
#[cfg(not(feature = "std"))]
use alloc::boxed::Box;
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use async_trait::async_trait;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use crate::{
Protocol,
serialize::*,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::Fee,
};
#[cfg(feature = "http_rpc")]
mod http;
#[cfg(feature = "http_rpc")]
pub use http::*;
#[derive(Deserialize, Debug)]
pub struct EmptyResponse;
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum RpcError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("connection error"))]
ConnectionError,
#[cfg_attr(feature = "std", error("invalid node"))]
InvalidNode,
#[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))]
UnsupportedProtocol(usize),
#[cfg_attr(feature = "std", error("transactions not found"))]
TransactionsNotFound(Vec<[u8; 32]>),
#[cfg_attr(feature = "std", error("invalid point ({0})"))]
InvalidPoint(String),
#[cfg_attr(feature = "std", error("pruned transaction"))]
PrunedTransaction,
#[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))]
InvalidTransaction([u8; 32]),
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode)
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
CompressedEdwardsY(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.decompress()
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
// Read an EPEE VarInt, distinct from the VarInts used throughout the rest of the protocol
fn read_epee_vi<R: io::Read>(reader: &mut R) -> io::Result<u64> {
let vi_start = read_byte(reader)?;
let len = match vi_start & 0b11 {
0 => 1,
1 => 2,
2 => 4,
3 => 8,
_ => unreachable!(),
};
let mut vi = u64::from(vi_start >> 2);
for i in 1 .. len {
vi |= u64::from(read_byte(reader)?) << (((i - 1) * 8) + 6);
}
Ok(vi)
}
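// Hedged example of the encoding above, assuming std_shims mirrors std's Read for byte
// slices: the two low bits of the first byte select the total length (1, 2, 4, or 8 bytes)
// and the remaining bits hold the value, little-endian.
#[cfg(test)]
fn epee_vi_example() {
  // One-byte form: length marker 0, payload 0b1
  assert_eq!(read_epee_vi(&mut [0b0000_0100u8].as_slice()).unwrap(), 1);
  // Two-byte form: length marker 1, second byte contributes bits 6 and up
  assert_eq!(read_epee_vi(&mut [0b0000_0001u8, 1].as_slice()).unwrap(), 64);
}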
#[async_trait]
pub trait RpcConnection: Send + Sync + Clone + Debug {
/// Perform a POST request to the specified route with the specified body.
///
/// The implementor is left to handle anything such as authentication.
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError>;
}
// TODO: Make these provided methods for RpcConnection?
#[derive(Clone, Debug)]
pub struct Rpc<R: RpcConnection>(R);
impl<R: RpcConnection> Rpc<R> {
/// Perform a RPC call to the specified route with the provided parameters.
///
/// This is NOT a JSON-RPC call. JSON-RPC calls use the "json_rpc" route and are available via
/// `json_rpc_call`.
pub async fn rpc_call<Params: Send + Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
route: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
serde_json::from_str(
std_shims::str::from_utf8(
&self
.0
.post(
route,
if let Some(params) = params {
serde_json::to_string(&params).unwrap().into_bytes()
} else {
vec![]
},
)
.await?,
)
.map_err(|_| RpcError::InvalidNode)?,
)
.map_err(|_| RpcError::InvalidNode)
}
/// Perform a JSON-RPC call with the specified method with the provided parameters
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified route with the provided parameters.
pub async fn bin_call(&self, route: &str, params: Vec<u8>) -> Result<Vec<u8>, RpcError> {
self.0.post(route, params).await
}
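// A hedged usage sketch contrasting the three styles above ("rpc" being any
// hypothetical Rpc<impl RpcConnection>; the routes named are monerod's):
//   rpc.rpc_call::<Option<()>, HeightResponse>("get_height", None) is a plain POST to /get_height
//   rpc.json_rpc_call::<BlockResponse>("get_block", Some(json!({ "hash": hash }))) POSTs to /json_rpc
//   rpc.bin_call("get_o_indexes.bin", epee_bytes) is a raw binary (EPEE) POST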
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
protocol => Err(RpcError::UnsupportedProtocol(protocol))?,
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let mut hashes_hex = hashes.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = Vec::with_capacity(hashes.len());
while !hashes_hex.is_empty() {
// Monero errors if more than 100 transactions are requested unless using a non-restricted RPC
const TXS_PER_REQUEST: usize = 100;
let this_count = TXS_PER_REQUEST.min(hashes_hex.len());
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. this_count).collect::<Vec<_>>(),
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
all_txs.extend(txs.txs);
}
all_txs
.iter()
.enumerate()
.map(|(i, res)| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.get(0) {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
// This does run a few keccak256 hashes, which is pointless if the node is trusted
// In exchange, this provides resilience against invalid/malicious nodes
if tx.hash() != hashes[i] {
Err(RpcError::InvalidNode)?;
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
/// Get the hash of a block from the node by the block's number.
/// This function does not verify the returned block hash is actually for the number in question.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
rpc_hex(&header.block_header.hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
/// Get a block from the node by its hash.
/// This function does not verify the returned block actually has the hash in question.
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
let block =
Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref()).map_err(|_| RpcError::InvalidNode)?;
if block.hash() != hash {
Err(RpcError::InvalidNode)?;
}
Ok(block)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
match self.get_block(self.get_block_hash(number).await?).await {
Ok(block) => {
// Make sure this is actually the block for this number
match block.miner_tx.prefix.inputs.get(0) {
Some(Input::Gen(actual)) => {
if usize::try_from(*actual).unwrap() == number {
Ok(block)
} else {
Err(RpcError::InvalidNode)
}
}
Some(Input::ToKey { .. }) | None => Err(RpcError::InvalidNode),
}
}
e => e,
}
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
/*
TODO: Use these when a suitable epee serde lib exists
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
}
*/
// Given the immaturity of Rust epee libraries, this is a homegrown implementation, only
// validated to work against this specific function
// Header for EPEE, an 8-byte magic and a version
const EPEE_HEADER: &[u8] = b"\x01\x11\x01\x01\x01\x01\x02\x01\x01";
let mut request = EPEE_HEADER.to_vec();
// Number of fields (shifted over 2 bits as the 2 LSBs are reserved for metadata)
request.push(1 << 2);
// Length of field name
request.push(4);
// Field name
request.extend(b"txid");
// Type of field
request.push(10);
// Length of string, since this byte array is technically a string
request.push(32 << 2);
// The "string"
request.extend(hash);
let indexes_buf = self.bin_call("get_o_indexes.bin", request).await?;
let mut indexes: &[u8] = indexes_buf.as_ref();
(|| {
if read_bytes::<_, { EPEE_HEADER.len() }>(&mut indexes)? != EPEE_HEADER {
Err(io::Error::new(io::ErrorKind::Other, "invalid header"))?;
}
let read_object = |reader: &mut &[u8]| {
let fields = read_byte(reader)? >> 2;
for _ in 0 .. fields {
let name_len = read_byte(reader)?;
let name = read_raw_vec(read_byte, name_len.into(), reader)?;
let type_with_array_flag = read_byte(reader)?;
let kind = type_with_array_flag & (!0x80);
let iters = if type_with_array_flag != kind { read_epee_vi(reader)? } else { 1 };
if (&name == b"o_indexes") && (kind != 5) {
Err(io::Error::new(io::ErrorKind::Other, "o_indexes weren't u64s"))?;
}
let f = match kind {
// i64
1 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// i32
2 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// i16
3 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// i8
4 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// u64
5 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// u32
6 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// u16
7 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// u8
8 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// double
9 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// string, or any collection of bytes
10 => |reader: &mut &[u8]| {
let len = read_epee_vi(reader)?;
read_raw_vec(
read_byte,
len
.try_into()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "u64 length exceeded usize"))?,
reader,
)
},
// bool
11 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// object, errors here as it shouldn't be used on this call
12 => |_: &mut &[u8]| {
Err(io::Error::new(
io::ErrorKind::Other,
"node used object in reply to get_o_indexes",
))
},
// array, so far unused
13 => |_: &mut &[u8]| {
Err(io::Error::new(io::ErrorKind::Other, "node used the unused array type"))
},
_ => {
|_: &mut &[u8]| Err(io::Error::new(io::ErrorKind::Other, "node used an invalid type"))
}
};
let mut res = vec![];
for _ in 0 .. iters {
res.push(f(reader)?);
}
let mut actual_res = Vec::with_capacity(res.len());
if &name == b"o_indexes" {
for o_index in res {
actual_res.push(u64::from_le_bytes(o_index.try_into().map_err(|_| {
io::Error::new(io::ErrorKind::Other, "node didn't provide 8 bytes for a u64")
})?));
}
return Ok(actual_res);
}
}
// Didn't return a response with o_indexes
// TODO: Check if this didn't have o_indexes because it's an error response
Err(io::Error::new(io::ErrorKind::Other, "response didn't contain o_indexes"))
};
read_object(&mut indexes)
})()
.map_err(|_| RpcError::InvalidNode)
}
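// A hedged byte-for-byte reading of the request built above, for a txid of all 0xAA
// bytes (this homegrown encoder only, not a general EPEE serializer):
//   01 11 01 01 01 01 02 01 01   EPEE header (8-byte magic, then version)
//   04                           field count of 1, shifted left 2 bits
//   04                           length of the field name
//   74 78 69 64                  "txid"
//   0a                           type 10 (string)
//   80                           string length 32 as an EPEE VarInt (32 << 2)
//   aa aa .. aa                  the 32-byte hash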
/// Get the output distribution, from the specified start height to the specified end height
/// (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if their
/// timelock has been satisfied. This is distinct from being free of the 10-block lock applied to
/// all Monero transactions.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
txid: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
let txs = self
.get_transactions(
&outs
.outs
.iter()
.map(|out| rpc_hex(&out.txid)?.try_into().map_err(|_| RpcError::InvalidNode))
.collect::<Result<Vec<_>, _>>()?,
)
.await?;
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.outs
.iter()
.enumerate()
.map(|(i, out)| {
Ok(
Some([rpc_point(&out.key)?, rpc_point(&out.mask)?])
.filter(|_| Timelock::Block(height) >= txs[i].prefix.timelock),
)
})
.collect()
}
/// Get the currently estimated fee from the node. This may be manipulated to unsafe levels and
/// MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self) -> Result<Fee, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct FeeResponse {
fee: u64,
quantization_mask: u64,
}
let res: FeeResponse = self.json_rpc_call("get_fee_estimate", None).await?;
Ok(Fee { per_weight: res.fee, mask: res.quantization_mask })
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(tx.serialize()) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
pub async fn generate_blocks(&self, address: &str, block_count: usize) -> Result<(), RpcError> {
self
.rpc_call::<_, EmptyResponse>(
"json_rpc",
Some(json!({
"method": "generateblocks",
"params": {
"wallet_address": address,
"amount_of_blocks": block_count
},
})),
)
.await?;
Ok(())
}
}


@@ -1,156 +0,0 @@
use core::fmt::Debug;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::{
scalar::Scalar,
edwards::{EdwardsPoint, CompressedEdwardsY},
};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn varint_len(varint: usize) -> usize {
((usize::try_from(usize::BITS - varint.leading_zeros()).unwrap().saturating_sub(1)) / 7) + 1
}
pub(crate) fn write_byte<W: Write>(byte: &u8, w: &mut W) -> io::Result<()> {
w.write_all(&[*byte])
}
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
write_byte(&b, w)?;
varint != 0
} {}
Ok(())
}
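// A hedged worked example: 300 = 0b1_0010_1100 serializes LSB-first in 7-bit groups,
// the top bit of each byte flagging continuation, yielding [0xAC, 0x02]. varint_len
// agrees: ((9 - 1) / 7) + 1 = 2
#[test]
fn example_write_varint() {
  let mut buf: Vec<u8> = vec![];
  write_varint(&300, &mut buf).unwrap();
  assert_eq!(buf, [0xAC, 0x02]);
  assert_eq!(varint_len(300), buf.len());
  // read_varint, defined below, round-trips the encoding
  assert_eq!(read_varint(&mut buf.as_slice()).unwrap(), 300);
}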
pub(crate) fn write_scalar<W: Write>(scalar: &Scalar, w: &mut W) -> io::Result<()> {
w.write_all(&scalar.to_bytes())
}
pub(crate) fn write_point<W: Write>(point: &EdwardsPoint, w: &mut W) -> io::Result<()> {
w.write_all(&point.compress().to_bytes())
}
pub(crate) fn write_raw_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
for value in values {
f(value, w)?;
}
Ok(())
}
pub(crate) fn write_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
write_varint(&values.len().try_into().unwrap(), w)?;
write_raw_vec(f, values, w)
}
pub(crate) fn read_bytes<R: Read, const N: usize>(r: &mut R) -> io::Result<[u8; N]> {
let mut res = [0; N];
r.read_exact(&mut res)?;
Ok(res)
}
pub(crate) fn read_byte<R: Read>(r: &mut R) -> io::Result<u8> {
Ok(read_bytes::<_, 1>(r)?[0])
}
pub(crate) fn read_u16<R: Read>(r: &mut R) -> io::Result<u16> {
read_bytes(r).map(u16::from_le_bytes)
}
pub(crate) fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
read_bytes(r).map(u32::from_le_bytes)
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
}
pub(crate) fn read_varint<R: Read>(r: &mut R) -> io::Result<u64> {
let mut bits = 0;
let mut res = 0;
while {
let b = read_byte(r)?;
if (bits != 0) && (b == 0) {
Err(io::Error::new(io::ErrorKind::Other, "non-canonical varint"))?;
}
if ((bits + 7) > 64) && (b >= (1 << (64 - bits))) {
Err(io::Error::new(io::ErrorKind::Other, "varint overflow"))?;
}
res += u64::from(b & (!VARINT_CONTINUATION_MASK)) << bits;
bits += 7;
b & VARINT_CONTINUATION_MASK == VARINT_CONTINUATION_MASK
} {}
Ok(res)
}
// All scalar fields supported by monero-serai are checked to be canonical for valid transactions
// While from_bytes_mod_order would be more flexible, it's not currently needed, and including it
// now would be inaccurate. Casting a wide net may be preferable, yet it'd also be inaccurate for
// now. There are also further edge cases, as noted by
// https://github.com/monero-project/monero/issues/8438, where some scalars had an archaic
// reduction applied
pub(crate) fn read_scalar<R: Read>(r: &mut R) -> io::Result<Scalar> {
Scalar::from_canonical_bytes(read_bytes(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "unreduced scalar"))
}
pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
let bytes = read_bytes(r)?;
CompressedEdwardsY(bytes)
.decompress()
// Ban points which are either unreduced or -0
.filter(|point| point.compress().to_bytes() == bytes)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))
}
pub(crate) fn read_torsion_free_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
read_point(r)
.ok()
.filter(EdwardsPoint::is_torsion_free)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))
}
pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
len: usize,
r: &mut R,
) -> io::Result<Vec<T>> {
let mut res = vec![];
for _ in 0 .. len {
res.push(f(r)?);
}
Ok(res)
}
pub(crate) fn read_array<R: Read, T: Debug, F: Fn(&mut R) -> io::Result<T>, const N: usize>(
f: F,
r: &mut R,
) -> io::Result<[T; N]> {
read_raw_vec(f, N, r).map(|vec| vec.try_into().unwrap())
}
pub(crate) fn read_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
r: &mut R,
) -> io::Result<Vec<T>> {
read_raw_vec(f, read_varint(r)?.try_into().unwrap(), r)
}


@@ -1,92 +0,0 @@
use hex_literal::hex;
use rand_core::OsRng;
use curve25519_dalek::{scalar::Scalar, edwards::CompressedEdwardsY};
use multiexp::BatchVerifier;
use crate::{
Commitment, random_scalar,
ringct::bulletproofs::{Bulletproofs, original::OriginalStruct},
};
#[test]
fn bulletproofs_vector() {
let scalar = |scalar| Scalar::from_canonical_bytes(scalar).unwrap();
let point = |point| CompressedEdwardsY(point).decompress().unwrap();
// Generated from Monero
assert!(Bulletproofs::Original(OriginalStruct {
A: point(hex!("ef32c0b9551b804decdcb107eb22aa715b7ce259bf3c5cac20e24dfa6b28ac71")),
S: point(hex!("e1285960861783574ee2b689ae53622834eb0b035d6943103f960cd23e063fa0")),
T1: point(hex!("4ea07735f184ba159d0e0eb662bac8cde3eb7d39f31e567b0fbda3aa23fe5620")),
T2: point(hex!("b8390aa4b60b255630d40e592f55ec6b7ab5e3a96bfcdcd6f1cd1d2fc95f441e")),
taux: scalar(hex!("5957dba8ea9afb23d6e81cc048a92f2d502c10c749dc1b2bd148ae8d41ec7107")),
mu: scalar(hex!("923023b234c2e64774b820b4961f7181f6c1dc152c438643e5a25b0bf271bc02")),
L: vec![
point(hex!("c45f656316b9ebf9d357fb6a9f85b5f09e0b991dd50a6e0ae9b02de3946c9d99")),
point(hex!("9304d2bf0f27183a2acc58cc755a0348da11bd345485fda41b872fee89e72aac")),
point(hex!("1bb8b71925d155dd9569f64129ea049d6149fdc4e7a42a86d9478801d922129b")),
point(hex!("5756a7bf887aa72b9a952f92f47182122e7b19d89e5dd434c747492b00e1c6b7")),
point(hex!("6e497c910d102592830555356af5ff8340e8d141e3fb60ea24cfa587e964f07d")),
point(hex!("f4fa3898e7b08e039183d444f3d55040f3c790ed806cb314de49f3068bdbb218")),
point(hex!("0bbc37597c3ead517a3841e159c8b7b79a5ceaee24b2a9a20350127aab428713")),
],
R: vec![
point(hex!("609420ba1702781692e84accfd225adb3d077aedc3cf8125563400466b52dbd9")),
point(hex!("fb4e1d079e7a2b0ec14f7e2a3943bf50b6d60bc346a54fcf562fb234b342abf8")),
point(hex!("6ae3ac97289c48ce95b9c557289e82a34932055f7f5e32720139824fe81b12e5")),
point(hex!("d071cc2ffbdab2d840326ad15f68c01da6482271cae3cf644670d1632f29a15c")),
point(hex!("e52a1754b95e1060589ba7ce0c43d0060820ebfc0d49dc52884bc3c65ad18af5")),
point(hex!("41573b06140108539957df71aceb4b1816d2409ce896659aa5c86f037ca5e851")),
point(hex!("a65970b2cc3c7b08b2b5b739dbc8e71e646783c41c625e2a5b1535e3d2e0f742")),
],
a: scalar(hex!("0077c5383dea44d3cd1bc74849376bd60679612dc4b945255822457fa0c0a209")),
b: scalar(hex!("fe80cf5756473482581e1d38644007793ddc66fdeb9404ec1689a907e4863302")),
t: scalar(hex!("40dfb08e09249040df997851db311bd6827c26e87d6f0f332c55be8eef10e603"))
})
.verify(
&mut OsRng,
&[
// For some reason, these vectors are * INV_EIGHT
point(hex!("8e8f23f315edae4f6c2f948d9a861e0ae32d356b933cd11d2f0e031ac744c41f"))
.mul_by_cofactor(),
point(hex!("2829cbd025aa54cd6e1b59a032564f22f0b2e5627f7f2c4297f90da438b5510f"))
.mul_by_cofactor(),
]
));
}
macro_rules! bulletproofs_tests {
($name: ident, $max: ident, $plus: literal) => {
#[test]
fn $name() {
// Create Bulletproofs for all possible output quantities
let mut verifier = BatchVerifier::new(16);
for i in 1 .. 17 {
let commitments = (1 ..= i)
.map(|i| Commitment::new(random_scalar(&mut OsRng), u64::try_from(i).unwrap()))
.collect::<Vec<_>>();
let bp = Bulletproofs::prove(&mut OsRng, &commitments, $plus).unwrap();
let commitments = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
assert!(bp.verify(&mut OsRng, &commitments));
assert!(bp.batch_verify(&mut OsRng, &mut verifier, i, &commitments));
}
assert!(verifier.verify_vartime());
}
#[test]
fn $max() {
// Check that Bulletproofs errors if we try to prove for too many outputs
let mut commitments = vec![];
for _ in 0 .. 17 {
commitments.push(Commitment::new(Scalar::zero(), 0));
}
assert!(Bulletproofs::prove(&mut OsRng, &commitments, $plus).is_err());
}
};
}
bulletproofs_tests!(bulletproofs, bulletproofs_max, false);
bulletproofs_tests!(bulletproofs_plus, bulletproofs_plus_max, true);


@@ -1,127 +0,0 @@
use core::ops::Deref;
#[cfg(feature = "multisig")]
use std_shims::sync::Arc;
#[cfg(feature = "multisig")]
use std::sync::RwLock;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
#[cfg(feature = "multisig")]
use transcript::{Transcript, RecommendedTranscript};
#[cfg(feature = "multisig")]
use frost::curve::Ed25519;
use crate::{
Commitment, random_scalar,
wallet::Decoys,
ringct::{
generate_key_image,
clsag::{ClsagInput, Clsag},
},
};
#[cfg(feature = "multisig")]
use crate::ringct::clsag::{ClsagDetails, ClsagMultisig};
#[cfg(feature = "multisig")]
use frost::{
Participant,
tests::{key_gen, algorithm_machines, sign},
};
const RING_LEN: u64 = 11;
const AMOUNT: u64 = 1337;
#[cfg(feature = "multisig")]
const RING_INDEX: u8 = 3;
#[test]
fn clsag() {
for real in 0 .. RING_LEN {
let msg = [1; 32];
let mut secrets = (Zeroizing::new(Scalar::zero()), Scalar::zero());
let mut ring = vec![];
for i in 0 .. RING_LEN {
let dest = Zeroizing::new(random_scalar(&mut OsRng));
let mask = random_scalar(&mut OsRng);
let amount = if i == real {
secrets = (dest.clone(), mask);
AMOUNT
} else {
OsRng.next_u64()
};
ring
.push([dest.deref() * &ED25519_BASEPOINT_TABLE, Commitment::new(mask, amount).calculate()]);
}
let image = generate_key_image(&secrets.0);
let (clsag, pseudo_out) = Clsag::sign(
&mut OsRng,
vec![(
secrets.0,
image,
ClsagInput::new(
Commitment::new(secrets.1, AMOUNT),
Decoys {
i: u8::try_from(real).unwrap(),
offsets: (1 ..= RING_LEN).collect(),
ring: ring.clone(),
},
)
.unwrap(),
)],
random_scalar(&mut OsRng),
msg,
)
.swap_remove(0);
clsag.verify(&ring, &image, &pseudo_out, &msg).unwrap();
}
}
#[cfg(feature = "multisig")]
#[test]
fn clsag_multisig() {
let keys = key_gen::<_, Ed25519>(&mut OsRng);
let randomness = random_scalar(&mut OsRng);
let mut ring = vec![];
for i in 0 .. RING_LEN {
let dest;
let mask;
let amount = if i == u64::from(RING_INDEX) {
dest = keys[&Participant::new(1).unwrap()].group_key().0;
mask = randomness;
AMOUNT
} else {
dest = &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE;
mask = random_scalar(&mut OsRng);
OsRng.next_u64()
};
ring.push([dest, Commitment::new(mask, amount).calculate()]);
}
let mask_sum = random_scalar(&mut OsRng);
let algorithm = ClsagMultisig::new(
RecommendedTranscript::new(b"Monero Serai CLSAG Test"),
keys[&Participant::new(1).unwrap()].group_key().0,
Arc::new(RwLock::new(Some(ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys { i: RING_INDEX, offsets: (1 ..= RING_LEN).collect(), ring: ring.clone() },
)
.unwrap(),
mask_sum,
)))),
);
sign(
&mut OsRng,
algorithm.clone(),
keys.clone(),
algorithm_machines(&mut OsRng, algorithm, &keys),
&[1; 32],
);
}


@@ -1,4 +0,0 @@
mod clsag;
mod bulletproofs;
mod address;
mod seed;


@@ -1,378 +0,0 @@
use core::cmp::Ordering;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::{
scalar::Scalar,
edwards::{EdwardsPoint, CompressedEdwardsY},
};
use crate::{
Protocol, hash,
serialize::*,
ringct::{RctBase, RctPrunable, RctSignatures},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Input {
Gen(u64),
ToKey { amount: Option<u64>, key_offsets: Vec<u64>, key_image: EdwardsPoint },
}
impl Input {
// Worst-case predictive len
pub(crate) fn fee_weight(ring_len: usize) -> usize {
// Uses 1 byte for the VarInt amount due to amount being 0
// Uses 1 byte for the VarInt encoding of the length of the ring as well
1 + 1 + 1 + (8 * ring_len) + 32
}
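// A hedged worked example, for a ring length of 11:
// 1 + 1 + 1 + (8 * 11) + 32 = 123 bytes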
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::Gen(height) => {
w.write_all(&[255])?;
write_varint(height, w)
}
Self::ToKey { amount, key_offsets, key_image } => {
w.write_all(&[2])?;
write_varint(&amount.unwrap_or(0), w)?;
write_vec(write_varint, key_offsets, w)?;
write_point(key_image, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(interpret_as_rct: bool, r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
255 => Self::Gen(read_varint(r)?),
2 => {
let amount = read_varint(r)?;
let amount = if (amount == 0) && interpret_as_rct { None } else { Some(amount) };
Self::ToKey {
amount,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
}
}
_ => {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown/unused input type"))?
}
})
}
}
// Doesn't bother moving to an enum for the unused Script classes
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
pub amount: Option<u64>,
pub key: CompressedEdwardsY,
pub view_tag: Option<u8>,
}
impl Output {
pub(crate) fn fee_weight() -> usize {
1 + 1 + 32 + 1
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.amount.unwrap_or(0), w)?;
w.write_all(&[2 + u8::from(self.view_tag.is_some())])?;
w.write_all(&self.key.to_bytes())?;
if let Some(view_tag) = self.view_tag {
w.write_all(&[view_tag])?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(8 + 1 + 32);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(interpret_as_rct: bool, r: &mut R) -> io::Result<Self> {
let amount = read_varint(r)?;
let amount = if interpret_as_rct {
if amount != 0 {
Err(io::Error::new(io::ErrorKind::Other, "RCT TX output wasn't 0"))?;
}
None
} else {
Some(amount)
};
let view_tag = match read_byte(r)? {
2 => false,
3 => true,
_ => Err(io::Error::new(
io::ErrorKind::Other,
"Tried to deserialize unknown/unused output type",
))?,
};
Ok(Self {
amount,
key: CompressedEdwardsY(read_bytes(r)?),
view_tag: if view_tag { Some(read_byte(r)?) } else { None },
})
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum Timelock {
None,
Block(usize),
Time(u64),
}
impl Timelock {
fn from_raw(raw: u64) -> Self {
if raw == 0 {
Self::None
} else if raw < 500_000_000 {
Self::Block(usize::try_from(raw).unwrap())
} else {
Self::Time(raw)
}
}
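// Hedged worked examples of the raw encoding above: 0 is Timelock::None, raw values
// below 500 million are block heights (2_800_000 is Block(2_800_000)), and anything
// else is a Unix timestamp (1_700_000_000 is Time(1_700_000_000))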
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(
&match self {
Self::None => 0,
Self::Block(block) => (*block).try_into().unwrap(),
Self::Time(time) => *time,
},
w,
)
}
}
impl PartialOrd for Timelock {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self, other) {
(Self::None, _) => Some(Ordering::Less),
(Self::Block(a), Self::Block(b)) => a.partial_cmp(b),
(Self::Time(a), Self::Time(b)) => a.partial_cmp(b),
_ => None,
}
}
}
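// A hedged note on the partial order above: None compares less than everything
// (including None itself, so this isn't a total order), same-variant locks compare by
// value, and Block(_).partial_cmp(&Time(_)) is None, as heights and timestamps are
// incomparable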
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct TransactionPrefix {
pub version: u64,
pub timelock: Timelock,
pub inputs: Vec<Input>,
pub outputs: Vec<Output>,
pub extra: Vec<u8>,
}
impl TransactionPrefix {
pub(crate) fn fee_weight(ring_len: usize, inputs: usize, outputs: usize, extra: usize) -> usize {
// Assumes Timelock::None since this library won't let you create a TX with a timelock
1 + 1 +
varint_len(inputs) +
(inputs * Input::fee_weight(ring_len)) +
1 +
(outputs * Output::fee_weight()) +
varint_len(extra) +
extra
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.version, w)?;
self.timelock.write(w)?;
write_vec(Input::write, &self.inputs, w)?;
write_vec(Output::write, &self.outputs, w)?;
write_varint(&self.extra.len().try_into().unwrap(), w)?;
w.write_all(&self.extra)
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let version = read_varint(r)?;
// TODO: Create an enum out of version
if (version == 0) || (version > 2) {
Err(io::Error::new(io::ErrorKind::Other, "unrecognized transaction version"))?;
}
let timelock = Timelock::from_raw(read_varint(r)?);
let inputs = read_vec(|r| Input::read(version == 2, r), r)?;
if inputs.is_empty() {
Err(io::Error::new(io::ErrorKind::Other, "transaction had no inputs"))?;
}
let is_miner_tx = matches!(inputs[0], Input::Gen { .. });
let mut prefix = Self {
version,
timelock,
inputs,
outputs: read_vec(|r| Output::read((!is_miner_tx) && (version == 2), r), r)?,
extra: vec![],
};
prefix.extra = read_vec(read_byte, r)?;
Ok(prefix)
}
pub fn hash(&self) -> [u8; 32] {
hash(&self.serialize())
}
}
/// Monero transaction. For version 1, rct_signatures still contains an accurate fee value.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Transaction {
pub prefix: TransactionPrefix,
pub signatures: Vec<Vec<(Scalar, Scalar)>>,
pub rct_signatures: RctSignatures,
}
impl Transaction {
pub(crate) fn fee_weight(
protocol: Protocol,
inputs: usize,
outputs: usize,
extra: usize,
) -> usize {
TransactionPrefix::fee_weight(protocol.ring_len(), inputs, outputs, extra) +
RctSignatures::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.prefix.write(w)?;
if self.prefix.version == 1 {
for sigs in &self.signatures {
for sig in sigs {
write_scalar(&sig.0, w)?;
write_scalar(&sig.1, w)?;
}
}
Ok(())
} else if self.prefix.version == 2 {
self.rct_signatures.write(w)
} else {
panic!("Serializing a transaction with an unknown version");
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(2048);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let prefix = TransactionPrefix::read(r)?;
let mut signatures = vec![];
let mut rct_signatures = RctSignatures {
base: RctBase { fee: 0, encrypted_amounts: vec![], pseudo_outs: vec![], commitments: vec![] },
prunable: RctPrunable::Null,
};
if prefix.version == 1 {
signatures = prefix
.inputs
.iter()
.filter_map(|input| match input {
Input::ToKey { key_offsets, .. } => Some(
key_offsets
.iter()
.map(|_| Ok((read_scalar(r)?, read_scalar(r)?)))
.collect::<Result<_, io::Error>>(),
),
_ => None,
})
.collect::<Result<_, _>>()?;
rct_signatures.base.fee = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => 0,
Input::ToKey { amount, .. } => amount.unwrap(),
})
.sum::<u64>()
.saturating_sub(prefix.outputs.iter().map(|output| output.amount.unwrap()).sum());
} else if prefix.version == 2 {
rct_signatures = RctSignatures::read(
prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(_) => 0,
Input::ToKey { key_offsets, .. } => key_offsets.len(),
})
.collect(),
prefix.outputs.len(),
r,
)?;
} else {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown version"))?;
}
Ok(Self { prefix, signatures, rct_signatures })
}
pub fn hash(&self) -> [u8; 32] {
let mut buf = Vec::with_capacity(2048);
if self.prefix.version == 1 {
self.write(&mut buf).unwrap();
hash(&buf)
} else {
let mut hashes = Vec::with_capacity(96);
hashes.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
hashes.extend(&match self.rct_signatures.prunable {
RctPrunable::Null => [0; 32],
RctPrunable::MlsagBorromean { .. } |
RctPrunable::MlsagBulletproofs { .. } |
RctPrunable::Clsag { .. } => {
self.rct_signatures.prunable.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hash(&buf)
}
});
hash(&hashes)
}
}
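// A hedged summary of the v2 scheme above: the transaction hash is
// hash(hash(prefix) || hash(RCT base) || hash(RCT prunable)), with a zeroed hash
// substituted for the prunable component when it's Null (miner transactions)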
/// Calculate the hash of this transaction as needed for signing it.
pub fn signature_hash(&self) -> [u8; 32] {
let mut buf = Vec::with_capacity(2048);
let mut sig_hash = Vec::with_capacity(96);
sig_hash.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
self.rct_signatures.prunable.signature_write(&mut buf).unwrap();
sig_hash.extend(hash(&buf));
hash(&sig_hash)
}
}


@@ -1,311 +0,0 @@
use core::{marker::PhantomData, fmt::Debug};
use std_shims::string::{String, ToString};
use zeroize::Zeroize;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use base58_monero::base58::{encode_check, decode_check};
/// The network this address is for.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum Network {
Mainnet,
Testnet,
Stagenet,
}
/// The address type, supporting the officially documented addresses, along with
/// [Featured Addresses](https://gist.github.com/kayabaNerve/01c50bbc35441e0bbdcee63a9d823789).
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum AddressType {
Standard,
Integrated([u8; 8]),
Subaddress,
Featured { subaddress: bool, payment_id: Option<[u8; 8]>, guaranteed: bool },
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct SubaddressIndex {
pub(crate) account: u32,
pub(crate) address: u32,
}
impl SubaddressIndex {
pub const fn new(account: u32, address: u32) -> Option<Self> {
if (account == 0) && (address == 0) {
return None;
}
Some(Self { account, address })
}
pub const fn account(&self) -> u32 {
self.account
}
pub const fn address(&self) -> u32 {
self.address
}
}
/// Address specification. Used internally to create addresses.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum AddressSpec {
Standard,
Integrated([u8; 8]),
Subaddress(SubaddressIndex),
Featured { subaddress: Option<SubaddressIndex>, payment_id: Option<[u8; 8]>, guaranteed: bool },
}
impl AddressType {
pub const fn is_subaddress(&self) -> bool {
matches!(self, Self::Subaddress) || matches!(self, Self::Featured { subaddress: true, .. })
}
pub const fn payment_id(&self) -> Option<[u8; 8]> {
if let Self::Integrated(id) = self {
Some(*id)
} else if let Self::Featured { payment_id, .. } = self {
*payment_id
} else {
None
}
}
pub const fn is_guaranteed(&self) -> bool {
matches!(self, Self::Featured { guaranteed: true, .. })
}
}
/// A type which returns the byte for a given address.
pub trait AddressBytes: Clone + Copy + PartialEq + Eq + Debug {
fn network_bytes(network: Network) -> (u8, u8, u8, u8);
}
/// Address bytes for Monero.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct MoneroAddressBytes;
impl AddressBytes for MoneroAddressBytes {
fn network_bytes(network: Network) -> (u8, u8, u8, u8) {
match network {
Network::Mainnet => (18, 19, 42, 70),
Network::Testnet => (53, 54, 63, 111),
Network::Stagenet => (24, 25, 36, 86),
}
}
}
/// Address metadata.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct AddressMeta<B: AddressBytes> {
_bytes: PhantomData<B>,
pub network: Network,
pub kind: AddressType,
}
impl<B: AddressBytes> Zeroize for AddressMeta<B> {
fn zeroize(&mut self) {
self.network.zeroize();
self.kind.zeroize();
}
}
/// Error when decoding an address.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum AddressError {
#[cfg_attr(feature = "std", error("invalid address byte"))]
InvalidByte,
#[cfg_attr(feature = "std", error("invalid address encoding"))]
InvalidEncoding,
#[cfg_attr(feature = "std", error("invalid length"))]
InvalidLength,
#[cfg_attr(feature = "std", error("invalid key"))]
InvalidKey,
#[cfg_attr(feature = "std", error("unknown features"))]
UnknownFeatures,
#[cfg_attr(feature = "std", error("different network than expected"))]
DifferentNetwork,
}
impl<B: AddressBytes> AddressMeta<B> {
#[allow(clippy::wrong_self_convention)]
fn to_byte(&self) -> u8 {
let bytes = B::network_bytes(self.network);
match self.kind {
AddressType::Standard => bytes.0,
AddressType::Integrated(_) => bytes.1,
AddressType::Subaddress => bytes.2,
AddressType::Featured { .. } => bytes.3,
}
}
/// Create an address's metadata.
pub const fn new(network: Network, kind: AddressType) -> Self {
Self { _bytes: PhantomData, network, kind }
}
// Returns an incomplete instantiation in the case of Integrated/Featured addresses
fn from_byte(byte: u8) -> Result<Self, AddressError> {
let mut meta = None;
for network in [Network::Mainnet, Network::Testnet, Network::Stagenet] {
let (standard, integrated, subaddress, featured) = B::network_bytes(network);
if let Some(kind) = match byte {
_ if byte == standard => Some(AddressType::Standard),
_ if byte == integrated => Some(AddressType::Integrated([0; 8])),
_ if byte == subaddress => Some(AddressType::Subaddress),
_ if byte == featured => {
Some(AddressType::Featured { subaddress: false, payment_id: None, guaranteed: false })
}
_ => None,
} {
meta = Some(Self::new(network, kind));
break;
}
}
meta.ok_or(AddressError::InvalidByte)
}
pub const fn is_subaddress(&self) -> bool {
self.kind.is_subaddress()
}
pub const fn payment_id(&self) -> Option<[u8; 8]> {
self.kind.payment_id()
}
pub const fn is_guaranteed(&self) -> bool {
self.kind.is_guaranteed()
}
}
/// A Monero address, composed of metadata and a spend/view key.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct Address<B: AddressBytes> {
pub meta: AddressMeta<B>,
pub spend: EdwardsPoint,
pub view: EdwardsPoint,
}
impl<B: AddressBytes> Zeroize for Address<B> {
fn zeroize(&mut self) {
self.meta.zeroize();
self.spend.zeroize();
self.view.zeroize();
}
}
impl<B: AddressBytes> ToString for Address<B> {
fn to_string(&self) -> String {
let mut data = vec![self.meta.to_byte()];
data.extend(self.spend.compress().to_bytes());
data.extend(self.view.compress().to_bytes());
if let AddressType::Featured { subaddress, payment_id, guaranteed } = self.meta.kind {
// Technically should be a VarInt, yet we don't have enough features for that to be needed
data.push(
u8::from(subaddress) + (u8::from(payment_id.is_some()) << 1) + (u8::from(guaranteed) << 2),
);
}
if let Some(id) = self.meta.kind.payment_id() {
data.extend(id);
}
encode_check(&data).unwrap()
}
}
impl<B: AddressBytes> Address<B> {
pub const fn new(meta: AddressMeta<B>, spend: EdwardsPoint, view: EdwardsPoint) -> Self {
Self { meta, spend, view }
}
pub fn from_str_raw(s: &str) -> Result<Self, AddressError> {
let raw = decode_check(s).map_err(|_| AddressError::InvalidEncoding)?;
if raw.len() < (1 + 32 + 32) {
Err(AddressError::InvalidLength)?;
}
let mut meta = AddressMeta::from_byte(raw[0])?;
let spend = CompressedEdwardsY(raw[1 .. 33].try_into().unwrap())
.decompress()
.ok_or(AddressError::InvalidKey)?;
let view = CompressedEdwardsY(raw[33 .. 65].try_into().unwrap())
.decompress()
.ok_or(AddressError::InvalidKey)?;
let mut read = 65;
if matches!(meta.kind, AddressType::Featured { .. }) {
// Ensure the features byte is present before indexing, avoiding a panic on truncated input
if raw.len() == read {
Err(AddressError::InvalidLength)?;
}
if raw[read] >= (2 << 3) {
Err(AddressError::UnknownFeatures)?;
}
let subaddress = (raw[read] & 1) == 1;
let integrated = ((raw[read] >> 1) & 1) == 1;
let guaranteed = ((raw[read] >> 2) & 1) == 1;
meta.kind = AddressType::Featured {
subaddress,
payment_id: Some([0; 8]).filter(|_| integrated),
guaranteed,
};
read += 1;
}
// Update read early so we can verify the length
if meta.kind.payment_id().is_some() {
read += 8;
}
if raw.len() != read {
Err(AddressError::InvalidLength)?;
}
if let AddressType::Integrated(ref mut id) = meta.kind {
id.copy_from_slice(&raw[(read - 8) .. read]);
}
if let AddressType::Featured { payment_id: Some(ref mut id), .. } = meta.kind {
id.copy_from_slice(&raw[(read - 8) .. read]);
}
Ok(Self { meta, spend, view })
}
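// A hedged worked example of the features byte parsed above: bit 0 is subaddress,
// bit 1 is an (encrypted) payment ID, and bit 2 is guaranteed, so 0b101 decodes to a
// guaranteed subaddress without a payment ID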
pub fn from_str(network: Network, s: &str) -> Result<Self, AddressError> {
Self::from_str_raw(s).and_then(|addr| {
if addr.meta.network == network {
Ok(addr)
} else {
Err(AddressError::DifferentNetwork)?
}
})
}
pub const fn network(&self) -> Network {
self.meta.network
}
pub const fn is_subaddress(&self) -> bool {
self.meta.is_subaddress()
}
pub const fn payment_id(&self) -> Option<[u8; 8]> {
self.meta.payment_id()
}
pub const fn is_guaranteed(&self) -> bool {
self.meta.is_guaranteed()
}
}
/// Instantiation of the Address type with Monero's network bytes.
pub type MoneroAddress = Address<MoneroAddressBytes>;
// Allow re-interpreting an arbitrary address as a Monero address so it can be used with the
// rest of this library. Doesn't use From as it was conflicting with From<T> for T.
impl MoneroAddress {
pub const fn from<B: AddressBytes>(address: Address<B>) -> Self {
Self::new(
AddressMeta::new(address.meta.network, address.meta.kind),
address.spend,
address.view,
)
}
}


@@ -1,281 +0,0 @@
use std_shims::{sync::OnceLock, vec::Vec, collections::HashSet};
#[cfg(not(feature = "std"))]
use std_shims::sync::Mutex;
#[cfg(feature = "std")]
use futures::lock::Mutex;
use zeroize::{Zeroize, ZeroizeOnDrop};
use rand_core::{RngCore, CryptoRng};
use rand_distr::{Distribution, Gamma};
#[cfg(not(feature = "std"))]
use rand_distr::num_traits::Float;
use curve25519_dalek::edwards::EdwardsPoint;
use crate::{
wallet::SpendableOutput,
rpc::{RpcError, RpcConnection, Rpc},
};
const LOCK_WINDOW: usize = 10;
const MATURITY: u64 = 60;
const RECENT_WINDOW: usize = 15;
const BLOCK_TIME: usize = 120;
const BLOCKS_PER_YEAR: usize = 365 * 24 * 60 * 60 / BLOCK_TIME;
#[allow(clippy::as_conversions)]
const TIP_APPLICATION: f64 = (LOCK_WINDOW * BLOCK_TIME) as f64;
// TODO: Expose an API to reset this in case a reorg occurs/the RPC fails/returns garbage
// TODO: Update this when scanning a block, as possible
static DISTRIBUTION_CELL: OnceLock<Mutex<Vec<u64>>> = OnceLock::new();
#[allow(non_snake_case)]
fn DISTRIBUTION() -> &'static Mutex<Vec<u64>> {
DISTRIBUTION_CELL.get_or_init(|| Mutex::new(Vec::with_capacity(3_000_000)))
}
#[allow(clippy::too_many_arguments, clippy::as_conversions)]
async fn select_n<'a, R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
distribution: &[u64],
height: usize,
high: u64,
per_second: f64,
real: &[u64],
used: &mut HashSet<u64>,
count: usize,
) -> Result<Vec<(u64, [EdwardsPoint; 2])>, RpcError> {
if height >= rpc.get_height().await? {
// TODO: Don't use InternalError for the caller's failure
Err(RpcError::InternalError("decoys being requested from too young blocks"))?;
}
#[cfg(test)]
let mut iters = 0;
let mut confirmed = Vec::with_capacity(count);
// Retries on failure. Retries are obvious as decoys, yet should be minimal
while confirmed.len() != count {
let remaining = count - confirmed.len();
let mut candidates = Vec::with_capacity(remaining);
while candidates.len() != remaining {
#[cfg(test)]
{
iters += 1;
// This check is cheap and, on fresh chains, a lot of rounds may be needed
if iters == 100 {
Err(RpcError::InternalError("hit decoy selection round limit"))?;
}
}
// Use a gamma distribution
let mut age = Gamma::<f64>::new(19.28, 1.0 / 1.61).unwrap().sample(rng).exp();
if age > TIP_APPLICATION {
age -= TIP_APPLICATION;
} else {
// f64 does not have try_from available, which is why these are written with `as`
age = (rng.next_u64() % u64::try_from(RECENT_WINDOW * BLOCK_TIME).unwrap()) as f64;
}
let o = (age * per_second) as u64;
if o < high {
let i = distribution.partition_point(|s| *s < (high - 1 - o));
let prev = i.saturating_sub(1);
let n = distribution[i] - distribution[prev];
if n != 0 {
let o = distribution[prev] + (rng.next_u64() % n);
if !used.contains(&o) {
// It will either actually be used, or is unusable and this prevents trying it again
used.insert(o);
candidates.push(o);
}
}
}
}
// If this is the first time we're requesting these outputs, include the real ones as well
// Prevents the node we're connected to from having a list of known decoys and then seeing a
// TX which uses all of them, with one additional output (the true spend)
let mut real_indexes = HashSet::with_capacity(real.len());
if confirmed.is_empty() {
for real in real {
candidates.push(*real);
}
// Sort candidates so the real spends aren't the ones at the end
candidates.sort();
for real in real {
real_indexes.insert(candidates.binary_search(real).unwrap());
}
}
for (i, output) in rpc.get_unlocked_outputs(&candidates, height).await?.iter_mut().enumerate() {
// Don't include the real spend as a decoy, despite requesting it
if real_indexes.contains(&i) {
continue;
}
if let Some(output) = output.take() {
confirmed.push((candidates[i], output));
}
}
}
Ok(confirmed)
}
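// A hedged summary of the selection above: a log-age is sampled from the
// Gamma(19.28, 1 / 1.61) distribution Monero itself uses, exponentiated into an age
// in seconds, converted to an output count via per_second, and then mapped onto the
// cumulative output distribution to pick a concrete global index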
fn offset(ring: &[u64]) -> Vec<u64> {
let mut res = vec![ring[0]];
res.resize(ring.len(), 0);
for m in (1 .. ring.len()).rev() {
res[m] = ring[m] - ring[m - 1];
}
res
}
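// A hedged worked example: a sorted ring of global output indexes [5, 10, 12] becomes
// the offsets [5, 5, 2], the first index verbatim followed by deltas, matching how
// Monero serializes key offsets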
/// Decoy data, containing the actual member as well (at index `i`).
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Decoys {
pub i: u8,
pub offsets: Vec<u64>,
pub ring: Vec<[EdwardsPoint; 2]>,
}
impl Decoys {
pub fn len(&self) -> usize {
self.offsets.len()
}
/// Select decoys using the same distribution as Monero.
#[allow(clippy::as_conversions)]
pub async fn select<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
) -> Result<Vec<Self>, RpcError> {
#[cfg(not(feature = "std"))]
let mut distribution = DISTRIBUTION().lock();
#[cfg(feature = "std")]
let mut distribution = DISTRIBUTION().lock().await;
let decoy_count = ring_len - 1;
// Convert the inputs in question to the raw output data
let mut real = Vec::with_capacity(inputs.len());
let mut outputs = Vec::with_capacity(inputs.len());
for input in inputs {
real.push(input.global_index);
outputs.push((real[real.len() - 1], [input.key(), input.commitment().calculate()]));
}
if distribution.len() <= height {
let extension = rpc.get_output_distribution(distribution.len(), height).await?;
distribution.extend(extension);
}
// If asked to use an older height than previously asked, truncate to ensure accuracy
// Should never happen, yet risks desyncing if it did
distribution.truncate(height + 1); // height is inclusive, and 0 is a valid height
let high = distribution[distribution.len() - 1];
let per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let outputs = high - distribution[distribution.len().saturating_sub(blocks + 1)];
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let mut used = HashSet::<u64>::new();
for o in &outputs {
used.insert(o.0);
}
// TODO: Create a TX with less than the target amount, as allowed by the protocol
if (high - MATURITY) < u64::try_from(inputs.len() * ring_len).unwrap() {
Err(RpcError::InternalError("not enough decoy candidates"))?;
}
// Select all decoys for this transaction, assuming we generate a sane transaction
// We should almost never naturally generate an insane transaction, hence why this doesn't
// bother with an overage
let mut decoys = select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&real,
&mut used,
inputs.len() * decoy_count,
)
.await?;
real.zeroize();
let mut res = Vec::with_capacity(inputs.len());
for o in outputs {
// Grab the decoys for this specific output
let mut ring = decoys.drain((decoys.len() - decoy_count) ..).collect::<Vec<_>>();
ring.push(o);
ring.sort_by(|a, b| a.0.cmp(&b.0));
// Sanity checks are only run when 1000 outputs are available in Monero
// We run this check whenever the highest output index we acknowledge is > 500
// For (presumably test) blockchains below that, we assume the height in use hasn't had 500
// outputs yet, the chain itself not being sufficiently mature
// Considering Monero's p2p layer doesn't actually check transaction sanity, it should be
// fine for us to not have perfectly matching rules, especially since this code will infinite
// loop if it can't determine sanity, which is possible with sufficient inputs on
// sufficiently small chains
if high > 500 {
// Make sure the TX passes the sanity check that the median output is within the last 40%
let target_median = high * 3 / 5;
while ring[ring_len / 2].0 < target_median {
// If it's not, update the bottom half with new values to ensure the median only moves up
#[allow(clippy::needless_collect)] // Needed for ownership reasons
for removed in ring.drain(0 .. (ring_len / 2)).collect::<Vec<_>>() {
// If we removed the real spend, add it back
if removed.0 == o.0 {
ring.push(o);
} else {
// We could leave this in the used set, saving CPU time and keeping low values ruled
// out, yet that'd increase the amount of decoys required to create this transaction,
// and some of the removed outputs may be the best options (as we drop the first half,
// not just the bottom n)
used.remove(&removed.0);
}
}
// Select new outputs until we have a full sized ring again
ring.extend(
select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&[],
&mut used,
ring_len - ring.len(),
)
.await?,
);
ring.sort_by(|a, b| a.0.cmp(&b.0));
}
// The other sanity check rule is about duplicates, yet we already enforce unique ring
// members
}
res.push(Self {
// Binary searches for the real spend since we don't know where it sorted to
i: u8::try_from(ring.partition_point(|x| x.0 < o.0)).unwrap(),
offsets: offset(&ring.iter().map(|output| output.0).collect::<Vec<_>>()),
ring: ring.iter().map(|output| output.1).collect(),
});
}
Ok(res)
}
}


@@ -1,218 +0,0 @@
use core::ops::BitXor;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::edwards::EdwardsPoint;
use crate::serialize::{
varint_len, read_byte, read_bytes, read_varint, read_point, read_vec, write_byte, write_varint,
write_point, write_vec,
};
pub const MAX_TX_EXTRA_NONCE_SIZE: usize = 255;
pub const PAYMENT_ID_MARKER: u8 = 0;
pub const ENCRYPTED_PAYMENT_ID_MARKER: u8 = 1;
// Used as it's the highest value not interpretable as a continued VarInt
pub const ARBITRARY_DATA_MARKER: u8 = 127;
// 1 byte is used for the marker
pub const MAX_ARBITRARY_DATA_SIZE: usize = MAX_TX_EXTRA_NONCE_SIZE - 1;
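// A hedged note on the marker above: 127 = 0b0111_1111 is the largest single byte
// with the VarInt continuation bit (0b1000_0000) clear, so data starting with it
// can't be read as the start of a multi-byte VarInt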
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum PaymentId {
Unencrypted([u8; 32]),
Encrypted([u8; 8]),
}
impl BitXor<[u8; 8]> for PaymentId {
type Output = Self;
fn bitxor(self, bytes: [u8; 8]) -> Self {
match self {
// Don't perform the xor since this isn't intended to be encrypted with xor
Self::Unencrypted(_) => self,
Self::Encrypted(id) => {
Self::Encrypted((u64::from_le_bytes(id) ^ u64::from_le_bytes(bytes)).to_le_bytes())
}
}
}
}
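// A hedged usage sketch: the 8-byte mask is derived per-output (see the wallet's
// payment_id_xor), and applying it twice round-trips:
//   assert_eq!((PaymentId::Encrypted(id) ^ mask) ^ mask, PaymentId::Encrypted(id));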
impl PaymentId {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::Unencrypted(id) => {
w.write_all(&[PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
Self::Encrypted(id) => {
w.write_all(&[ENCRYPTED_PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
}
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
0 => Self::Unencrypted(read_bytes(r)?),
1 => Self::Encrypted(read_bytes(r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown payment ID type"))?,
})
}
}
// Doesn't bother with padding nor MinerGate
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub enum ExtraField {
PublicKey(EdwardsPoint),
Nonce(Vec<u8>),
MergeMining(usize, [u8; 32]),
PublicKeys(Vec<EdwardsPoint>),
}
impl ExtraField {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::PublicKey(key) => {
w.write_all(&[1])?;
w.write_all(&key.compress().to_bytes())?;
}
Self::Nonce(data) => {
w.write_all(&[2])?;
write_vec(write_byte, data, w)?;
}
Self::MergeMining(height, merkle) => {
w.write_all(&[3])?;
write_varint(&u64::try_from(*height).unwrap(), w)?;
w.write_all(merkle)?;
}
Self::PublicKeys(keys) => {
w.write_all(&[4])?;
write_vec(write_point, keys, w)?;
}
}
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
1 => Self::PublicKey(read_point(r)?),
2 => Self::Nonce({
let nonce = read_vec(read_byte, r)?;
if nonce.len() > MAX_TX_EXTRA_NONCE_SIZE {
Err(io::Error::new(io::ErrorKind::Other, "too long nonce"))?;
}
nonce
}),
3 => Self::MergeMining(
usize::try_from(read_varint(r)?)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "varint for height exceeds usize"))?,
read_bytes(r)?,
),
4 => Self::PublicKeys(read_vec(read_point, r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown extra field"))?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Extra(Vec<ExtraField>);
impl Extra {
pub fn keys(&self) -> Option<(EdwardsPoint, Option<Vec<EdwardsPoint>>)> {
let mut key = None;
let mut additional = None;
for field in &self.0 {
match field.clone() {
ExtraField::PublicKey(this_key) => key = key.or(Some(this_key)),
ExtraField::PublicKeys(these_additional) => {
additional = additional.or(Some(these_additional));
}
ExtraField::Nonce(_) | ExtraField::MergeMining(..) => (),
}
}
// Don't return any keys if this was non-standard and didn't include the primary key
key.map(|key| (key, additional))
}
pub fn payment_id(&self) -> Option<PaymentId> {
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
return PaymentId::read::<&[u8]>(&mut data.as_ref()).ok();
}
}
None
}
pub fn data(&self) -> Vec<Vec<u8>> {
let mut res = vec![];
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
// Use first() so an empty nonce doesn't panic
if data.first() == Some(&ARBITRARY_DATA_MARKER) {
res.push(data[1 ..].to_vec());
}
}
}
res
}
pub(crate) fn new(key: EdwardsPoint, additional: Vec<EdwardsPoint>) -> Self {
let mut res = Self(Vec::with_capacity(3));
res.push(ExtraField::PublicKey(key));
if !additional.is_empty() {
res.push(ExtraField::PublicKeys(additional));
}
res
}
pub(crate) fn push(&mut self, field: ExtraField) {
self.0.push(field);
}
#[rustfmt::skip]
pub(crate) fn fee_weight(
outputs: usize,
additional: bool,
payment_id: bool,
data: &[Vec<u8>]
) -> usize {
// PublicKey, key
(1 + 32) +
// PublicKeys, length, additional keys
(if additional { 1 + 1 + (outputs * 32) } else { 0 }) +
// PaymentId (Nonce), length, encrypted, ID
(if payment_id { 1 + 1 + 1 + 8 } else { 0 }) +
// Nonce, length, ARBITRARY_DATA_MARKER, data
data.iter().map(|v| 1 + varint_len(1 + v.len()) + 1 + v.len()).sum::<usize>()
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for field in &self.0 {
field.write(w)?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut buf = vec![];
self.write(&mut buf).unwrap();
buf
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let mut res = Self(vec![]);
let mut field;
while {
field = ExtraField::read(r);
field.is_ok()
} {
res.0.push(field.unwrap());
}
Ok(res)
}
}


@@ -1,268 +0,0 @@
use core::ops::Deref;
use std_shims::collections::{HashSet, HashMap};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
edwards::{EdwardsPoint, CompressedEdwardsY},
};
use crate::{
hash, hash_to_scalar, serialize::write_varint, ringct::EncryptedAmount, transaction::Input,
};
pub mod extra;
pub(crate) use extra::{PaymentId, ExtraField, Extra};
/// Seed creation and parsing functionality.
pub mod seed;
/// Address encoding and decoding functionality.
pub mod address;
use address::{Network, AddressType, SubaddressIndex, AddressSpec, AddressMeta, MoneroAddress};
mod scan;
pub use scan::{ReceivedOutput, SpendableOutput, Timelocked};
pub(crate) mod decoys;
pub(crate) use decoys::Decoys;
mod send;
pub use send::{Fee, TransactionError, Change, SignableTransaction, Eventuality};
#[cfg(feature = "std")]
pub use send::SignableTransactionBuilder;
#[cfg(feature = "multisig")]
pub(crate) use send::InternalPayment;
#[cfg(feature = "multisig")]
pub use send::TransactionMachine;
fn key_image_sort(x: &EdwardsPoint, y: &EdwardsPoint) -> core::cmp::Ordering {
x.compress().to_bytes().cmp(&y.compress().to_bytes()).reverse()
}
// https://gist.github.com/kayabaNerve/8066c13f1fe1573286ba7a2fd79f6100
pub(crate) fn uniqueness(inputs: &[Input]) -> [u8; 32] {
let mut u = b"uniqueness".to_vec();
for input in inputs {
match input {
// If Gen, this should be the only input, making this loop somewhat pointless
// This still works, and even if there were somehow multiple inputs, it'd merely be a
// false negative
Input::Gen(height) => {
write_varint(height, &mut u).unwrap();
}
Input::ToKey { key_image, .. } => u.extend(key_image.compress().to_bytes()),
}
}
hash(&u)
}
// Hs("view_tag" || 8Ra || o), Hs(8Ra || o), and H(8Ra || 0x8d) with uniqueness inclusion in the
// Scalar as an option
#[allow(non_snake_case)]
pub(crate) fn shared_key(
uniqueness: Option<[u8; 32]>,
ecdh: EdwardsPoint,
o: usize,
) -> (u8, Scalar, [u8; 8]) {
// 8Ra
let mut output_derivation = ecdh.mul_by_cofactor().compress().to_bytes().to_vec();
let mut payment_id_xor = [0; 8];
payment_id_xor
.copy_from_slice(&hash(&[output_derivation.as_ref(), [0x8d].as_ref()].concat())[.. 8]);
// || o
write_varint(&o.try_into().unwrap(), &mut output_derivation).unwrap();
let view_tag = hash(&[b"view_tag".as_ref(), &output_derivation].concat())[0];
// uniqueness ||
let shared_key = if let Some(uniqueness) = uniqueness {
[uniqueness.as_ref(), &output_derivation].concat()
} else {
output_derivation
};
(view_tag, hash_to_scalar(&shared_key), payment_id_xor)
}
pub(crate) fn commitment_mask(shared_key: Scalar) -> Scalar {
let mut mask = b"commitment_mask".to_vec();
mask.extend(shared_key.to_bytes());
hash_to_scalar(&mask)
}
pub(crate) fn amount_encryption(amount: u64, key: Scalar) -> [u8; 8] {
let mut amount_mask = b"amount".to_vec();
amount_mask.extend(key.to_bytes());
(amount ^ u64::from_le_bytes(hash(&amount_mask)[.. 8].try_into().unwrap())).to_le_bytes()
}
// TODO: Move this under EncryptedAmount?
fn amount_decryption(amount: &EncryptedAmount, key: Scalar) -> (Scalar, u64) {
match amount {
EncryptedAmount::Original { mask, amount } => {
#[cfg(feature = "experimental")]
{
let mask_shared_sec = hash(key.as_bytes());
let mask =
Scalar::from_bytes_mod_order(*mask) - Scalar::from_bytes_mod_order(mask_shared_sec);
let amount_shared_sec = hash(&mask_shared_sec);
let amount_scalar =
Scalar::from_bytes_mod_order(*amount) - Scalar::from_bytes_mod_order(amount_shared_sec);
// d2b from rctTypes.cpp
let amount = u64::from_le_bytes(amount_scalar.to_bytes()[0 .. 8].try_into().unwrap());
(mask, amount)
}
#[cfg(not(feature = "experimental"))]
{
let _ = mask;
let _ = amount;
todo!("decrypting a legacy monero transaction's amount")
}
}
EncryptedAmount::Compact { amount } => (
commitment_mask(key),
u64::from_le_bytes(amount_encryption(u64::from_le_bytes(*amount), key)),
),
}
}
/// The private view key and public spend key, enabling scanning transactions.
#[derive(Clone, Zeroize, ZeroizeOnDrop)]
pub struct ViewPair {
spend: EdwardsPoint,
view: Zeroizing<Scalar>,
}
impl ViewPair {
pub const fn new(spend: EdwardsPoint, view: Zeroizing<Scalar>) -> Self {
Self { spend, view }
}
pub const fn spend(&self) -> EdwardsPoint {
self.spend
}
pub fn view(&self) -> EdwardsPoint {
self.view.deref() * &ED25519_BASEPOINT_TABLE
}
fn subaddress_derivation(&self, index: SubaddressIndex) -> Scalar {
hash_to_scalar(&Zeroizing::new(
[
b"SubAddr\0".as_ref(),
Zeroizing::new(self.view.to_bytes()).as_ref(),
&index.account().to_le_bytes(),
&index.address().to_le_bytes(),
]
.concat(),
))
}
fn subaddress_keys(&self, index: SubaddressIndex) -> (EdwardsPoint, EdwardsPoint) {
let scalar = self.subaddress_derivation(index);
let spend = self.spend + (&scalar * &ED25519_BASEPOINT_TABLE);
let view = self.view.deref() * spend;
(spend, view)
}
/// Returns an address with the provided specification.
pub fn address(&self, network: Network, spec: AddressSpec) -> MoneroAddress {
let mut spend = self.spend;
let mut view: EdwardsPoint = self.view.deref() * &ED25519_BASEPOINT_TABLE;
// construct the address meta
let meta = match spec {
AddressSpec::Standard => AddressMeta::new(network, AddressType::Standard),
AddressSpec::Integrated(payment_id) => {
AddressMeta::new(network, AddressType::Integrated(payment_id))
}
AddressSpec::Subaddress(index) => {
(spend, view) = self.subaddress_keys(index);
AddressMeta::new(network, AddressType::Subaddress)
}
AddressSpec::Featured { subaddress, payment_id, guaranteed } => {
if let Some(index) = subaddress {
(spend, view) = self.subaddress_keys(index);
}
AddressMeta::new(
network,
AddressType::Featured { subaddress: subaddress.is_some(), payment_id, guaranteed },
)
}
};
MoneroAddress::new(meta, spend, view)
}
}
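// A hedged usage sketch (not part of the original file), assuming `view_pair` was constructed
// elsewhere via `ViewPair::new`:
//
//   let standard = view_pair.address(Network::Mainnet, AddressSpec::Standard);
//   let subaddress = view_pair.address(
//     Network::Mainnet,
//     AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap()),
//   );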
/// Transaction scanner.
/// This scanner is capable of generating subaddresses, and will additionally scan for them once
/// they've been explicitly registered. If the burning bug is attempted, any secondary outputs
/// will be ignored.
#[derive(Clone)]
pub struct Scanner {
pair: ViewPair,
// Also contains the spend key, mapped to None
pub(crate) subaddresses: HashMap<CompressedEdwardsY, Option<SubaddressIndex>>,
pub(crate) burning_bug: Option<HashSet<CompressedEdwardsY>>,
}
impl Zeroize for Scanner {
fn zeroize(&mut self) {
self.pair.zeroize();
// These may not be effective, unfortunately
for (mut key, mut value) in self.subaddresses.drain() {
key.zeroize();
value.zeroize();
}
if let Some(ref mut burning_bug) = self.burning_bug.take() {
for mut output in burning_bug.drain() {
output.zeroize();
}
}
}
}
impl Drop for Scanner {
fn drop(&mut self) {
self.zeroize();
}
}
impl ZeroizeOnDrop for Scanner {}
impl Scanner {
/// Create a Scanner from a ViewPair.
///
/// burning_bug is a HashSet of used keys, intended to prevent key reuse which would burn funds.
///
/// When an output is successfully scanned, the output key MUST be saved to disk.
///
/// When a new scanner is created, ALL saved output keys must be passed in to be secure.
///
/// If None is passed, a modified shared key derivation is used which is immune to the burning
/// bug (specifically the Guaranteed feature from Featured Addresses).
pub fn from_view(pair: ViewPair, burning_bug: Option<HashSet<CompressedEdwardsY>>) -> Self {
let mut subaddresses = HashMap::new();
subaddresses.insert(pair.spend.compress(), None);
Self { pair, subaddresses, burning_bug }
}
/// Register a subaddress.
// There used to be an address function here, yet it wasn't safe. It could generate addresses
// incompatible with the Scanner. While we could return None for that, then we have the issue
// of runtime failures to generate an address.
// Removing that API was the simplest option.
pub fn register_subaddress(&mut self, subaddress: SubaddressIndex) {
let (spend, _) = self.pair.subaddress_keys(subaddress);
self.subaddresses.insert(spend.compress(), Some(subaddress));
}
}
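// A hedged usage sketch (not from the original source): persist every scanned output key, pass
// them all back in on restart, and register any subaddresses before scanning for them.
//
//   let mut scanner = Scanner::from_view(view_pair.clone(), Some(HashSet::new()));
//   scanner.register_subaddress(SubaddressIndex::new(0, 1).unwrap());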


@@ -1,474 +0,0 @@
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, ZeroizeOnDrop};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
use crate::{
Commitment,
serialize::{read_byte, read_u32, read_u64, read_bytes, read_scalar, read_point, read_raw_vec},
transaction::{Input, Timelock, Transaction},
block::Block,
rpc::{RpcError, RpcConnection, Rpc},
wallet::{
PaymentId, Extra, address::SubaddressIndex, Scanner, uniqueness, shared_key, amount_decryption,
},
};
/// An absolute output ID, defined as its transaction hash and output index.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct AbsoluteId {
pub tx: [u8; 32],
pub o: u8,
}
impl AbsoluteId {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.tx)?;
w.write_all(&[self.o])
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(32 + 1);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { tx: read_bytes(r)?, o: read_byte(r)? })
}
}
/// The data contained with an output.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct OutputData {
pub key: EdwardsPoint,
/// Absolute difference between the spend key and the key in this output
pub key_offset: Scalar,
pub commitment: Commitment,
}
impl OutputData {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.key.compress().to_bytes())?;
w.write_all(&self.key_offset.to_bytes())?;
w.write_all(&self.commitment.mask.to_bytes())?;
w.write_all(&self.commitment.amount.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(32 + 32 + 32 + 8);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
key: read_point(r)?,
key_offset: read_scalar(r)?,
commitment: Commitment::new(read_scalar(r)?, read_u64(r)?),
})
}
}
/// The metadata for an output.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Metadata {
/// The subaddress this output was sent to.
pub subaddress: Option<SubaddressIndex>,
/// The payment ID included with this output.
/// This will be gibberish if the payment ID wasn't intended for the recipient or wasn't included.
// Could be an Option, as extra doesn't necessarily have a payment ID, yet all Monero TXs should
// have one, making it simplest to leave this as-is.
pub payment_id: [u8; 8],
/// Arbitrary data encoded in TX extra.
pub arbitrary_data: Vec<Vec<u8>>,
}
impl Metadata {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
if let Some(subaddress) = self.subaddress {
w.write_all(&[1])?;
w.write_all(&subaddress.account().to_le_bytes())?;
w.write_all(&subaddress.address().to_le_bytes())?;
} else {
w.write_all(&[0])?;
}
w.write_all(&self.payment_id)?;
w.write_all(&u32::try_from(self.arbitrary_data.len()).unwrap().to_le_bytes())?;
for part in &self.arbitrary_data {
w.write_all(&[u8::try_from(part.len()).unwrap()])?;
w.write_all(part)?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(1 + 8 + 1);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
#[allow(clippy::if_then_some_else_none)] // The Result usage makes this invalid
let subaddress = if read_byte(r)? == 1 {
Some(
SubaddressIndex::new(read_u32(r)?, read_u32(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid subaddress in metadata"))?,
)
} else {
None
};
Ok(Self {
subaddress,
payment_id: read_bytes(r)?,
arbitrary_data: {
let mut data = vec![];
for _ in 0 .. read_u32(r)? {
let len = read_byte(r)?;
data.push(read_raw_vec(read_byte, usize::from(len), r)?);
}
data
},
})
}
}
/// A received output, defined as its absolute ID, data, and metadata.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ReceivedOutput {
pub absolute: AbsoluteId,
pub data: OutputData,
pub metadata: Metadata,
}
impl ReceivedOutput {
pub fn key(&self) -> EdwardsPoint {
self.data.key
}
pub fn key_offset(&self) -> Scalar {
self.data.key_offset
}
pub fn commitment(&self) -> Commitment {
self.data.commitment.clone()
}
pub fn arbitrary_data(&self) -> &[Vec<u8>] {
&self.metadata.arbitrary_data
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.absolute.write(w)?;
self.data.write(w)?;
self.metadata.write(w)
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
absolute: AbsoluteId::read(r)?,
data: OutputData::read(r)?,
metadata: Metadata::read(r)?,
})
}
}
/// A spendable output, defined as a received output and its index on the Monero blockchain.
/// This index is dependent on the Monero blockchain and will only be known once the output is
/// included within a block. This may change if there's a reorganization.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct SpendableOutput {
pub output: ReceivedOutput,
pub global_index: u64,
}
impl SpendableOutput {
/// Update the spendable output's global index. This is intended to be called if a
/// re-organization occurred.
pub async fn refresh_global_index<RPC: RpcConnection>(
&mut self,
rpc: &Rpc<RPC>,
) -> Result<(), RpcError> {
self.global_index =
rpc.get_o_indexes(self.output.absolute.tx).await?[usize::from(self.output.absolute.o)];
Ok(())
}
pub async fn from<RPC: RpcConnection>(
rpc: &Rpc<RPC>,
output: ReceivedOutput,
) -> Result<Self, RpcError> {
let mut output = Self { output, global_index: 0 };
output.refresh_global_index(rpc).await?;
Ok(output)
}
pub fn key(&self) -> EdwardsPoint {
self.output.key()
}
pub fn key_offset(&self) -> Scalar {
self.output.key_offset()
}
pub fn commitment(&self) -> Commitment {
self.output.commitment()
}
pub fn arbitrary_data(&self) -> &[Vec<u8>] {
self.output.arbitrary_data()
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.output.write(w)?;
w.write_all(&self.global_index.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { output: ReceivedOutput::read(r)?, global_index: read_u64(r)? })
}
}
/// A collection of timelocked outputs, either received or spendable.
#[derive(Zeroize)]
pub struct Timelocked<O: Clone + Zeroize>(Timelock, Vec<O>);
impl<O: Clone + Zeroize> Drop for Timelocked<O> {
fn drop(&mut self) {
self.zeroize();
}
}
impl<O: Clone + Zeroize> ZeroizeOnDrop for Timelocked<O> {}
impl<O: Clone + Zeroize> Timelocked<O> {
pub fn timelock(&self) -> Timelock {
self.0
}
/// Return the outputs if they're not timelocked, or an empty vector if they are.
#[must_use]
pub fn not_locked(&self) -> Vec<O> {
if self.0 == Timelock::None {
return self.1.clone();
}
vec![]
}
/// Return the outputs if the provided Timelock satisfies this one, or None if it doesn't
/// (which includes the case where the Timelocks aren't comparable).
#[must_use]
pub fn unlocked(&self, timelock: Timelock) -> Option<Vec<O>> {
// If the Timelocks are comparable, return the outputs if they're now unlocked
if self.0 <= timelock {
Some(self.1.clone())
} else {
None
}
}
#[must_use]
pub fn ignore_timelock(&self) -> Vec<O> {
self.1.clone()
}
}
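// A hedged illustration (not part of the original file), assuming Timelock's Block variant from
// the transaction module:
//
//   let scanned = scanner.scan_transaction(&tx);
//   let spendable_now = scanned.not_locked();
//   // None if the TX used a time-based lock, as block and time locks aren't comparable
//   let maybe_unlocked = scanned.unlocked(Timelock::Block(2_000_000));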
impl Scanner {
/// Scan a transaction to discover the received outputs.
pub fn scan_transaction(&mut self, tx: &Transaction) -> Timelocked<ReceivedOutput> {
// Only scan RCT TXs since we can only spend RCT outputs
if tx.prefix.version != 2 {
return Timelocked(tx.prefix.timelock, vec![]);
}
let Ok(extra) = Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref()) else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let Some((tx_key, additional)) = extra.keys() else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let payment_id = extra.payment_id();
let mut res = vec![];
for (o, output) in tx.prefix.outputs.iter().enumerate() {
// https://github.com/serai-dex/serai/issues/106
if let Some(burning_bug) = self.burning_bug.as_ref() {
if burning_bug.contains(&output.key) {
continue;
}
}
let output_key = output.key.decompress();
if output_key.is_none() {
continue;
}
let output_key = output_key.unwrap();
for key in [Some(Some(&tx_key)), additional.as_ref().map(|additional| additional.get(o))] {
let Some(Some(key)) = key else {
if key == Some(None) {
// This is non-standard. There were additional keys, yet not one for this output
// https://github.com/monero-project/monero/
// blob/04a1e2875d6e35e27bb21497988a6c822d319c28/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L1062
// TODO: Should this return? Where does Monero set the trap handler for this exception?
continue;
} else {
break;
}
};
let (view_tag, shared_key, payment_id_xor) = shared_key(
if self.burning_bug.is_none() { Some(uniqueness(&tx.prefix.inputs)) } else { None },
self.pair.view.deref() * key,
o,
);
let payment_id =
if let Some(PaymentId::Encrypted(id)) = payment_id.map(|id| id ^ payment_id_xor) {
id
} else {
payment_id_xor
};
if let Some(actual_view_tag) = output.view_tag {
if actual_view_tag != view_tag {
continue;
}
}
// P - shared == spend
let subaddress = self
.subaddresses
.get(&(output_key - (&shared_key * &ED25519_BASEPOINT_TABLE)).compress());
if subaddress.is_none() {
continue;
}
let subaddress = *subaddress.unwrap();
// If the output key has torsion, subtracting the non-torsioned shared key will yield a
// torsioned key. We will not have a torsioned key in our HashMap of keys, so we wouldn't
// identify it as ours
// If we did though, it'd enable bypassing the included burning bug protection
assert!(output_key.is_torsion_free());
let mut key_offset = shared_key;
if let Some(subaddress) = subaddress {
key_offset += self.pair.subaddress_derivation(subaddress);
}
// Since we've found an output to us, get its amount
let mut commitment = Commitment::zero();
// Miner transaction
if let Some(amount) = output.amount {
commitment.amount = amount;
// Regular transaction
} else {
let (mask, amount) = match tx.rct_signatures.base.encrypted_amounts.get(o) {
Some(amount) => amount_decryption(amount, shared_key),
// This should never happen, yet it may be possible with miner transactions?
// Using get just decreases the possibility of a panic and lets us move on in that case
None => break,
};
// Rebuild the commitment to verify it
commitment = Commitment::new(mask, amount);
// If this is a malicious commitment, move to the next output
// Any other R value will calculate to a different spend key and are therefore ignorable
if Some(&commitment.calculate()) != tx.rct_signatures.base.commitments.get(o) {
break;
}
}
if commitment.amount != 0 {
res.push(ReceivedOutput {
absolute: AbsoluteId { tx: tx.hash(), o: o.try_into().unwrap() },
data: OutputData { key: output_key, key_offset, commitment },
metadata: Metadata { subaddress, payment_id, arbitrary_data: extra.data() },
});
if let Some(burning_bug) = self.burning_bug.as_mut() {
burning_bug.insert(output.key);
}
}
// Break to prevent public keys from being included multiple times, triggering multiple
// inclusions of the same output
break;
}
}
Timelocked(tx.prefix.timelock, res)
}
/// Scan a block to obtain its spendable outputs. It's the presence in a block which gives these
/// transactions their global index, and this lookup must be batched, as asking for the index of
/// specific transactions is a dead giveaway for which transactions you successfully scanned.
/// This function accordingly obtains the output indexes for the miner transaction and increments
/// from there instead.
pub async fn scan<RPC: RpcConnection>(
&mut self,
rpc: &Rpc<RPC>,
block: &Block,
) -> Result<Vec<Timelocked<SpendableOutput>>, RpcError> {
let mut index = rpc.get_o_indexes(block.miner_tx.hash()).await?[0];
let mut txs = vec![block.miner_tx.clone()];
txs.extend(rpc.get_transactions(&block.txs).await?);
let map = |mut timelock: Timelocked<ReceivedOutput>, index| {
if timelock.1.is_empty() {
None
} else {
Some(Timelocked(
timelock.0,
timelock
.1
.drain(..)
.map(|output| SpendableOutput {
global_index: index + u64::from(output.absolute.o),
output,
})
.collect(),
))
}
};
let mut res = vec![];
for tx in txs {
if let Some(timelock) = map(self.scan_transaction(&tx), index) {
res.push(timelock);
}
index += u64::try_from(
tx.prefix
.outputs
.iter()
// Filter to v2 miner TX outputs/RCT outputs since we're tracking the RCT output index
.filter(|output| {
((tx.prefix.version == 2) && matches!(tx.prefix.inputs.get(0), Some(Input::Gen(..)))) ||
output.amount.is_none()
})
.count(),
)
.unwrap()
}
Ok(res)
}
}
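// A hedged usage sketch (not from the original source), assuming an `rpc` handle and a fetched
// `block`; `save_to_disk` is a hypothetical persistence helper:
//
//   for timelocked in scanner.scan(&rpc, &block).await? {
//     for output in timelocked.not_locked() {
//       // The output key MUST be persisted before this output is trusted,
//       // per the burning bug documentation on Scanner::from_view
//       save_to_disk(output.key());
//     }
//   }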


@@ -1,271 +0,0 @@
use core::ops::Deref;
use std_shims::{
sync::OnceLock,
vec::Vec,
string::{String, ToString},
collections::HashMap,
};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, CryptoRng};
use crc::{Crc, CRC_32_ISO_HDLC};
use curve25519_dalek::scalar::Scalar;
use crate::{
random_scalar,
wallet::seed::{SeedError, Language},
};
pub(crate) const CLASSIC_SEED_LENGTH: usize = 24;
pub(crate) const CLASSIC_SEED_LENGTH_WITH_CHECKSUM: usize = 25;
fn trim(word: &str, len: usize) -> Zeroizing<String> {
Zeroizing::new(word.chars().take(len).collect())
}
struct WordList {
word_list: Vec<&'static str>,
word_map: HashMap<&'static str, usize>,
trimmed_word_map: HashMap<String, usize>,
unique_prefix_length: usize,
}
impl WordList {
fn new(word_list: Vec<&'static str>, prefix_length: usize) -> Self {
let mut lang = Self {
word_list,
word_map: HashMap::new(),
trimmed_word_map: HashMap::new(),
unique_prefix_length: prefix_length,
};
for (i, word) in lang.word_list.iter().enumerate() {
lang.word_map.insert(word, i);
lang.trimmed_word_map.insert(trim(word, lang.unique_prefix_length).deref().clone(), i);
}
lang
}
}
static LANGUAGES_CELL: OnceLock<HashMap<Language, WordList>> = OnceLock::new();
#[allow(non_snake_case)]
fn LANGUAGES() -> &'static HashMap<Language, WordList> {
LANGUAGES_CELL.get_or_init(|| {
HashMap::from([
(Language::Chinese, WordList::new(include!("./classic/zh.rs"), 1)),
(Language::English, WordList::new(include!("./classic/en.rs"), 3)),
(Language::Dutch, WordList::new(include!("./classic/nl.rs"), 4)),
(Language::French, WordList::new(include!("./classic/fr.rs"), 4)),
(Language::Spanish, WordList::new(include!("./classic/es.rs"), 4)),
(Language::German, WordList::new(include!("./classic/de.rs"), 4)),
(Language::Italian, WordList::new(include!("./classic/it.rs"), 4)),
(Language::Portuguese, WordList::new(include!("./classic/pt.rs"), 4)),
(Language::Japanese, WordList::new(include!("./classic/ja.rs"), 3)),
(Language::Russian, WordList::new(include!("./classic/ru.rs"), 4)),
(Language::Esperanto, WordList::new(include!("./classic/eo.rs"), 4)),
(Language::Lojban, WordList::new(include!("./classic/jbo.rs"), 4)),
(Language::EnglishOld, WordList::new(include!("./classic/ang.rs"), 4)),
])
})
}
#[cfg(test)]
pub(crate) fn trim_by_lang(word: &str, lang: Language) -> String {
if lang == Language::EnglishOld {
word.to_string()
} else {
word.chars().take(LANGUAGES()[&lang].unique_prefix_length).collect()
}
}
fn checksum_index(words: &[Zeroizing<String>], lang: &WordList) -> usize {
let mut trimmed_words = Zeroizing::new(String::new());
for w in words {
*trimmed_words += &trim(w, lang.unique_prefix_length);
}
let crc = Crc::<u32>::new(&CRC_32_ISO_HDLC);
let mut digest = crc.digest();
digest.update(trimmed_words.as_bytes());
usize::try_from(digest.finalize()).unwrap() % words.len()
}
// Convert a private key to a seed
fn key_to_seed(lang: Language, key: Zeroizing<Scalar>) -> ClassicSeed {
let bytes = Zeroizing::new(key.to_bytes());
// get the language words
let words = &LANGUAGES()[&lang].word_list;
let list_len = u64::try_from(words.len()).unwrap();
// To store the found words & add the checksum word later.
let mut seed = Vec::with_capacity(25);
// convert to words
// 4 bytes -> 3 words. 8 digits base 16 -> 3 digits base 1626
let mut segment = [0; 4];
let mut indices = [0; 4];
for i in 0 .. 8 {
// Convert the next 4 bytes to a u32 & get the word indices
let start = i * 4;
// convert 4 bytes to a u32
segment.copy_from_slice(&bytes[start .. (start + 4)]);
// Actually convert to a u64 so we can add without overflowing
indices[0] = u64::from(u32::from_le_bytes(segment));
indices[1] = indices[0];
indices[0] /= list_len;
indices[2] = indices[0] + indices[1];
indices[0] /= list_len;
indices[3] = indices[0] + indices[2];
// append words to seed
for i in indices.iter().skip(1) {
let word = usize::try_from(i % list_len).unwrap();
seed.push(Zeroizing::new(words[word].to_string()));
}
}
segment.zeroize();
indices.zeroize();
// create a checksum word for all languages except old english
if lang != Language::EnglishOld {
let checksum = seed[checksum_index(&seed, &LANGUAGES()[&lang])].clone();
seed.push(checksum);
}
let mut res = Zeroizing::new(String::new());
for (i, word) in seed.iter().enumerate() {
if i != 0 {
*res += " ";
}
*res += word;
}
ClassicSeed(res)
}
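// A hedged summary of the above encoding (not in the original): with list length n, each
// little-endian u32 value v maps to the word indices
//   w1 = v % n
//   w2 = ((v / n) + w1) % n
//   w3 = ((v / n²) + w2) % n
// Since 1626³ exceeds 2³², every u32 is representable, and seed_to_bytes below inverts this
// exactly.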
// Convert a seed to bytes
pub(crate) fn seed_to_bytes(words: &str) -> Result<(Language, Zeroizing<[u8; 32]>), SeedError> {
// get seed words
let words = words.split_whitespace().map(|w| Zeroizing::new(w.to_string())).collect::<Vec<_>>();
if (words.len() != CLASSIC_SEED_LENGTH) && (words.len() != CLASSIC_SEED_LENGTH_WITH_CHECKSUM) {
panic!("invalid seed passed to seed_to_bytes");
}
// find the language
let (matched_indices, lang_name, lang) = (|| {
let has_checksum = words.len() == CLASSIC_SEED_LENGTH_WITH_CHECKSUM;
let mut matched_indices = Zeroizing::new(vec![]);
// Iterate through all the languages
'language: for (lang_name, lang) in LANGUAGES().iter() {
matched_indices.zeroize();
matched_indices.clear();
// Iterate through all the words and see if they're all present
for word in &words {
let trimmed = trim(word, lang.unique_prefix_length);
let word = if has_checksum { &trimmed } else { word };
if let Some(index) = if has_checksum {
lang.trimmed_word_map.get(word.deref())
} else {
lang.word_map.get(&word.as_str())
} {
matched_indices.push(*index);
} else {
continue 'language;
}
}
if has_checksum {
if lang_name == &Language::EnglishOld {
Err(SeedError::EnglishOldWithChecksum)?;
}
// exclude the last word when calculating a checksum.
let last_word = words.last().unwrap().clone();
let checksum = words[checksum_index(&words[.. words.len() - 1], lang)].clone();
// check the trimmed checksum and trimmed last word line up
if trim(&checksum, lang.unique_prefix_length) != trim(&last_word, lang.unique_prefix_length)
{
Err(SeedError::InvalidChecksum)?;
}
}
return Ok((matched_indices, lang_name, lang));
}
Err(SeedError::UnknownLanguage)?
})()?;
// convert to bytes
let mut res = Zeroizing::new([0; 32]);
let mut indices = Zeroizing::new([0; 4]);
for i in 0 .. 8 {
// read 3 indices at a time
let i3 = i * 3;
indices[1] = matched_indices[i3];
indices[2] = matched_indices[i3 + 1];
indices[3] = matched_indices[i3 + 2];
let inner = |i| {
let mut base = (lang.word_list.len() - indices[i] + indices[i + 1]) % lang.word_list.len();
// Shift the index over
for _ in 0 .. i {
base *= lang.word_list.len();
}
base
};
// set the last index
indices[0] = indices[1] + inner(1) + inner(2);
if (indices[0] % lang.word_list.len()) != indices[1] {
Err(SeedError::InvalidSeed)?;
}
let pos = i * 4;
let mut bytes = u32::try_from(indices[0]).unwrap().to_le_bytes();
res[pos .. (pos + 4)].copy_from_slice(&bytes);
bytes.zeroize();
}
Ok((*lang_name, res))
}
#[derive(Clone, PartialEq, Eq, Zeroize)]
pub struct ClassicSeed(Zeroizing<String>);
impl ClassicSeed {
pub(crate) fn new<R: RngCore + CryptoRng>(rng: &mut R, lang: Language) -> Self {
key_to_seed(lang, Zeroizing::new(random_scalar(rng)))
}
pub fn from_string(words: Zeroizing<String>) -> Result<Self, SeedError> {
let (lang, entropy) = seed_to_bytes(&words)?;
// Make sure this is a valid scalar
let mut scalar = Scalar::from_canonical_bytes(*entropy);
if scalar.is_none() {
Err(SeedError::InvalidSeed)?;
}
scalar.zeroize();
// Call from_entropy so a trimmed seed becomes a full seed
Ok(Self::from_entropy(lang, entropy).unwrap())
}
pub fn from_entropy(lang: Language, entropy: Zeroizing<[u8; 32]>) -> Option<Self> {
Scalar::from_canonical_bytes(*entropy).map(|scalar| key_to_seed(lang, Zeroizing::new(scalar)))
}
pub(crate) fn to_string(&self) -> Zeroizing<String> {
self.0.clone()
}
pub(crate) fn entropy(&self) -> Zeroizing<[u8; 32]> {
seed_to_bytes(&self.0).unwrap().1
}
}


@@ -1,92 +0,0 @@
use core::fmt;
use std_shims::string::String;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use rand_core::{RngCore, CryptoRng};
pub(crate) mod classic;
use classic::{CLASSIC_SEED_LENGTH, CLASSIC_SEED_LENGTH_WITH_CHECKSUM, ClassicSeed};
/// Error when decoding a seed.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum SeedError {
#[cfg_attr(feature = "std", error("invalid number of words in seed"))]
InvalidSeedLength,
#[cfg_attr(feature = "std", error("unknown language"))]
UnknownLanguage,
#[cfg_attr(feature = "std", error("invalid checksum"))]
InvalidChecksum,
#[cfg_attr(feature = "std", error("english old seeds don't support checksums"))]
EnglishOldWithChecksum,
#[cfg_attr(feature = "std", error("invalid seed"))]
InvalidSeed,
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Hash)]
pub enum Language {
Chinese,
English,
Dutch,
French,
Spanish,
German,
Italian,
Portuguese,
Japanese,
Russian,
Esperanto,
Lojban,
EnglishOld,
}
/// A Monero seed.
// TODO: Add polyseed to enum
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub enum Seed {
Classic(ClassicSeed),
}
impl fmt::Debug for Seed {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Self::Classic(_) => f.debug_struct("Seed::Classic").finish_non_exhaustive(),
}
}
}
impl Seed {
/// Create a new seed.
pub fn new<R: RngCore + CryptoRng>(rng: &mut R, lang: Language) -> Self {
Self::Classic(ClassicSeed::new(rng, lang))
}
/// Parse a seed from a String.
pub fn from_string(words: Zeroizing<String>) -> Result<Self, SeedError> {
match words.split_whitespace().count() {
CLASSIC_SEED_LENGTH | CLASSIC_SEED_LENGTH_WITH_CHECKSUM => {
ClassicSeed::from_string(words).map(Self::Classic)
}
_ => Err(SeedError::InvalidSeedLength)?,
}
}
/// Create a Seed from entropy.
pub fn from_entropy(lang: Language, entropy: Zeroizing<[u8; 32]>) -> Option<Self> {
ClassicSeed::from_entropy(lang, entropy).map(Self::Classic)
}
/// Convert a seed to a String.
pub fn to_string(&self) -> Zeroizing<String> {
match self {
Self::Classic(seed) => seed.to_string(),
}
}
/// Return the entropy for this seed.
pub fn entropy(&self) -> Zeroizing<[u8; 32]> {
match self {
Self::Classic(seed) => seed.entropy(),
}
}
}
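// A hedged usage sketch (not part of the original file), assuming an OS RNG is available:
//
//   let seed = Seed::new(&mut rand_core::OsRng, Language::English);
//   let restored = Seed::from_string(seed.to_string()).unwrap();
//   assert_eq!(restored, seed);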


@@ -1,145 +0,0 @@
use std::sync::{Arc, RwLock};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use crate::{
Protocol,
wallet::{
address::MoneroAddress, Fee, SpendableOutput, Change, SignableTransaction, TransactionError,
extra::MAX_ARBITRARY_DATA_SIZE,
},
};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
struct SignableTransactionBuilderInternal {
protocol: Protocol,
fee: Fee,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
payments: Vec<(MoneroAddress, u64)>,
change_address: Option<Change>,
data: Vec<Vec<u8>>,
}
impl SignableTransactionBuilderInternal {
// Takes in the change address so users don't miss that they have to manually set one
// If they don't, all leftover funds will become part of the fee
fn new(protocol: Protocol, fee: Fee, change_address: Option<Change>) -> Self {
Self {
protocol,
fee,
r_seed: None,
inputs: vec![],
payments: vec![],
change_address,
data: vec![],
}
}
fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) {
self.r_seed = Some(r_seed);
}
fn add_input(&mut self, input: SpendableOutput) {
self.inputs.push(input);
}
fn add_inputs(&mut self, inputs: &[SpendableOutput]) {
self.inputs.extend(inputs.iter().cloned());
}
fn add_payment(&mut self, dest: MoneroAddress, amount: u64) {
self.payments.push((dest, amount));
}
fn add_payments(&mut self, payments: &[(MoneroAddress, u64)]) {
self.payments.extend(payments);
}
fn add_data(&mut self, data: Vec<u8>) {
self.data.push(data);
}
}
/// A Transaction Builder for Monero transactions.
/// All methods provided will modify self while also returning a shallow copy, enabling efficient
/// chaining with a clean API.
/// In order to fork the builder at some point, clone will still return a deep copy.
#[derive(Debug)]
pub struct SignableTransactionBuilder(Arc<RwLock<SignableTransactionBuilderInternal>>);
impl Clone for SignableTransactionBuilder {
fn clone(&self) -> Self {
Self(Arc::new(RwLock::new((*self.0.read().unwrap()).clone())))
}
}
impl PartialEq for SignableTransactionBuilder {
fn eq(&self, other: &Self) -> bool {
*self.0.read().unwrap() == *other.0.read().unwrap()
}
}
impl Eq for SignableTransactionBuilder {}
impl Zeroize for SignableTransactionBuilder {
fn zeroize(&mut self) {
self.0.write().unwrap().zeroize();
}
}
#[allow(clippy::return_self_not_must_use)]
impl SignableTransactionBuilder {
fn shallow_copy(&self) -> Self {
Self(self.0.clone())
}
pub fn new(protocol: Protocol, fee: Fee, change_address: Option<Change>) -> Self {
Self(Arc::new(RwLock::new(SignableTransactionBuilderInternal::new(
protocol,
fee,
change_address,
))))
}
pub fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) -> Self {
self.0.write().unwrap().set_r_seed(r_seed);
self.shallow_copy()
}
pub fn add_input(&mut self, input: SpendableOutput) -> Self {
self.0.write().unwrap().add_input(input);
self.shallow_copy()
}
pub fn add_inputs(&mut self, inputs: &[SpendableOutput]) -> Self {
self.0.write().unwrap().add_inputs(inputs);
self.shallow_copy()
}
pub fn add_payment(&mut self, dest: MoneroAddress, amount: u64) -> Self {
self.0.write().unwrap().add_payment(dest, amount);
self.shallow_copy()
}
pub fn add_payments(&mut self, payments: &[(MoneroAddress, u64)]) -> Self {
self.0.write().unwrap().add_payments(payments);
self.shallow_copy()
}
pub fn add_data(&mut self, data: Vec<u8>) -> Result<Self, TransactionError> {
if data.len() > MAX_ARBITRARY_DATA_SIZE {
Err(TransactionError::TooMuchData)?;
}
self.0.write().unwrap().add_data(data);
Ok(self.shallow_copy())
}
pub fn build(self) -> Result<SignableTransaction, TransactionError> {
let read = self.0.read().unwrap();
SignableTransaction::new(
read.protocol,
read.r_seed.clone(),
read.inputs.clone(),
read.payments.clone(),
read.change_address.clone(),
read.data.clone(),
read.fee,
)
}
}
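// A hedged usage sketch (not part of the original file), assuming `protocol`, `fee`, `change`,
// `input`, and `dest` were obtained elsewhere:
//
//   let signable = SignableTransactionBuilder::new(protocol, fee, Some(change))
//     .add_input(input)
//     .add_payment(dest, 1_000_000)
//     .build()?;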


@@ -1,854 +0,0 @@
use core::{ops::Deref, fmt};
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use rand::seq::SliceRandom;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use group::Group;
use curve25519_dalek::{
constants::{ED25519_BASEPOINT_POINT, ED25519_BASEPOINT_TABLE},
scalar::Scalar,
edwards::EdwardsPoint,
};
use dalek_ff_group as dfg;
#[cfg(feature = "multisig")]
use frost::FrostError;
use crate::{
Protocol, Commitment, hash, random_scalar,
serialize::{
read_byte, read_bytes, read_u64, read_scalar, read_point, read_vec, write_byte, write_scalar,
write_point, write_raw_vec, write_vec,
},
ringct::{
generate_key_image,
clsag::{ClsagError, ClsagInput, Clsag},
bulletproofs::{MAX_OUTPUTS, Bulletproofs},
RctBase, RctPrunable, RctSignatures,
},
transaction::{Input, Output, Timelock, TransactionPrefix, Transaction},
rpc::{RpcError, RpcConnection, Rpc},
wallet::{
address::{Network, AddressSpec, MoneroAddress},
ViewPair, SpendableOutput, Decoys, PaymentId, ExtraField, Extra, key_image_sort, uniqueness,
shared_key, commitment_mask, amount_encryption,
extra::{ARBITRARY_DATA_MARKER, MAX_ARBITRARY_DATA_SIZE},
},
};
#[cfg(feature = "std")]
mod builder;
#[cfg(feature = "std")]
pub use builder::SignableTransactionBuilder;
#[cfg(feature = "multisig")]
mod multisig;
#[cfg(feature = "multisig")]
pub use multisig::TransactionMachine;
use crate::ringct::EncryptedAmount;
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
struct SendOutput {
R: EdwardsPoint,
view_tag: u8,
dest: EdwardsPoint,
commitment: Commitment,
amount: [u8; 8],
}
impl SendOutput {
#[allow(non_snake_case)]
fn internal(
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
ecdh: EdwardsPoint,
R: EdwardsPoint,
) -> (Self, Option<[u8; 8]>) {
let o = output.0;
let output = output.1;
let (view_tag, shared_key, payment_id_xor) =
shared_key(Some(unique).filter(|_| output.0.is_guaranteed()), ecdh, o);
(
Self {
R,
view_tag,
dest: ((&shared_key * &ED25519_BASEPOINT_TABLE) + output.0.spend),
commitment: Commitment::new(commitment_mask(shared_key), output.1),
amount: amount_encryption(output.1, shared_key),
},
output
.0
.payment_id()
.map(|id| (u64::from_le_bytes(id) ^ u64::from_le_bytes(payment_id_xor)).to_le_bytes()),
)
}
fn new(
r: &Zeroizing<Scalar>,
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
) -> (Self, Option<[u8; 8]>) {
let address = output.1 .0;
Self::internal(
unique,
output,
r.deref() * address.view,
if address.is_subaddress() {
r.deref() * address.spend
} else {
r.deref() * &ED25519_BASEPOINT_TABLE
},
)
}
fn change(
ecdh: EdwardsPoint,
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
) -> (Self, Option<[u8; 8]>) {
Self::internal(unique, output, ecdh, ED25519_BASEPOINT_POINT)
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum TransactionError {
#[cfg_attr(feature = "std", error("multiple addresses with payment IDs"))]
MultiplePaymentIds,
#[cfg_attr(feature = "std", error("no inputs"))]
NoInputs,
#[cfg_attr(feature = "std", error("no outputs"))]
NoOutputs,
#[cfg_attr(feature = "std", error("only one output and no change address"))]
NoChange,
#[cfg_attr(feature = "std", error("too many outputs"))]
TooManyOutputs,
#[cfg_attr(feature = "std", error("too much data"))]
TooMuchData,
#[cfg_attr(feature = "std", error("too many inputs/too much arbitrary data"))]
TooLargeTransaction,
#[cfg_attr(feature = "std", error("not enough funds (in {0}, out {1})"))]
NotEnoughFunds(u64, u64),
#[cfg_attr(feature = "std", error("wrong spend private key"))]
WrongPrivateKey,
#[cfg_attr(feature = "std", error("rpc error ({0})"))]
RpcError(RpcError),
#[cfg_attr(feature = "std", error("clsag error ({0})"))]
ClsagError(ClsagError),
#[cfg_attr(feature = "std", error("invalid transaction ({0})"))]
InvalidTransaction(RpcError),
#[cfg(feature = "multisig")]
#[cfg_attr(feature = "std", error("frost error {0}"))]
FrostError(FrostError),
}
async fn prepare_inputs<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
ring_len: usize,
inputs: &[SpendableOutput],
spend: &Zeroizing<Scalar>,
tx: &mut Transaction,
) -> Result<Vec<(Zeroizing<Scalar>, EdwardsPoint, ClsagInput)>, TransactionError> {
let mut signable = Vec::with_capacity(inputs.len());
// Select decoys
let decoys = Decoys::select(
rng,
rpc,
ring_len,
rpc.get_height().await.map_err(TransactionError::RpcError)? - 1,
inputs,
)
.await
.map_err(TransactionError::RpcError)?;
for (i, input) in inputs.iter().enumerate() {
let input_spend = Zeroizing::new(input.key_offset() + spend.deref());
let image = generate_key_image(&input_spend);
signable.push((
input_spend,
image,
ClsagInput::new(input.commitment().clone(), decoys[i].clone())
.map_err(TransactionError::ClsagError)?,
));
tx.prefix.inputs.push(Input::ToKey {
amount: None,
key_offsets: decoys[i].offsets.clone(),
key_image: signable[i].1,
});
}
signable.sort_by(|x, y| x.1.compress().to_bytes().cmp(&y.1.compress().to_bytes()).reverse());
tx.prefix.inputs.sort_by(|x, y| {
if let (Input::ToKey { key_image: x, .. }, Input::ToKey { key_image: y, .. }) = (x, y) {
x.compress().to_bytes().cmp(&y.compress().to_bytes()).reverse()
} else {
panic!("Input wasn't ToKey")
}
});
Ok(signable)
}
/// Fee struct, defined as a per-unit cost and a mask for rounding purposes.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct Fee {
pub per_weight: u64,
pub mask: u64,
}
impl Fee {
pub fn calculate(&self, weight: usize) -> u64 {
((((self.per_weight * u64::try_from(weight).unwrap()) - 1) / self.mask) + 1) * self.mask
}
}
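// A hedged worked example (not in the original): with per_weight = 20 and mask = 10_000, a
// weight of 1_234 gives a raw fee of 24_680, and
// ((((20 * 1_234) - 1) / 10_000) + 1) * 10_000 = 30_000,
// i.e. the fee rounded up to the next multiple of the mask.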
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum InternalPayment {
Payment((MoneroAddress, u64)),
Change(Change, u64),
}
/// The eventual output of a SignableTransaction.
///
/// If the SignableTransaction has a Change with a view key, this will also have the view key.
/// Accordingly, it must be treated securely.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Eventuality {
protocol: Protocol,
r_seed: Zeroizing<[u8; 32]>,
inputs: Vec<EdwardsPoint>,
payments: Vec<InternalPayment>,
extra: Vec<u8>,
}
/// A signable transaction, either in a single-signer or multisig context.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct SignableTransaction {
protocol: Protocol,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
payments: Vec<InternalPayment>,
data: Vec<Vec<u8>>,
fee: u64,
}
/// Specification for a change output.
#[derive(Clone, PartialEq, Eq, Zeroize)]
pub struct Change {
address: MoneroAddress,
view: Option<Zeroizing<Scalar>>,
}
impl fmt::Debug for Change {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Change").field("address", &self.address).finish_non_exhaustive()
}
}
impl Change {
/// Create a change output specification from a ViewPair, as needed to maintain privacy.
pub fn new(view: &ViewPair, guaranteed: bool) -> Self {
Self {
address: view.address(
Network::Mainnet,
if !guaranteed {
AddressSpec::Standard
} else {
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: true }
},
),
view: Some(view.view.clone()),
}
}
/// Create a fingerprintable change output specification which will harm privacy.
///
/// Only use this if you know what you're doing.
pub const fn fingerprintable(address: MoneroAddress) -> Self {
Self { address, view: None }
}
}
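// A hedged sketch (not in the original): pairing a guaranteed change output with a Scanner
// created via `Scanner::from_view(pair, None)`:
//
//   let change = Change::new(&view_pair, true);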
impl SignableTransaction {
/// Create a signable transaction.
///
/// `r_seed` refers to a seed used to derive the transaction's ephemeral keys (colloquially
/// called Rs). If None is provided, one will be automatically generated.
///
/// Up to 16 outputs may be present, including the change output. If the change address is
/// specified, leftover funds will be sent to it.
///
/// Each chunk of data must not exceed MAX_ARBITRARY_DATA_SIZE and will be embedded in TX extra.
pub fn new(
protocol: Protocol,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
payments: Vec<(MoneroAddress, u64)>,
change_address: Option<Change>,
data: Vec<Vec<u8>>,
fee_rate: Fee,
) -> Result<Self, TransactionError> {
// Make sure there's only one payment ID
let mut has_payment_id = {
let mut payment_ids = 0;
let mut count = |addr: MoneroAddress| {
if addr.payment_id().is_some() {
payment_ids += 1;
}
};
for payment in &payments {
count(payment.0);
}
if let Some(change) = change_address.as_ref() {
count(change.address);
}
if payment_ids > 1 {
Err(TransactionError::MultiplePaymentIds)?;
}
payment_ids == 1
};
if inputs.is_empty() {
Err(TransactionError::NoInputs)?;
}
if payments.is_empty() {
Err(TransactionError::NoOutputs)?;
}
for part in &data {
if part.len() > MAX_ARBITRARY_DATA_SIZE {
Err(TransactionError::TooMuchData)?;
}
}
// If we don't have two outputs, as required by Monero, error
if (payments.len() == 1) && change_address.is_none() {
Err(TransactionError::NoChange)?;
}
let outputs = payments.len() + usize::from(change_address.is_some());
// Add a dummy payment ID if there's only 2 payments
has_payment_id |= outputs == 2;
// Calculate the extra length
// Assume additional keys are needed in order to cause a worst-case estimation
let extra = Extra::fee_weight(outputs, true, has_payment_id, data.as_ref());
// https://github.com/monero-project/monero/pull/8733
const MAX_EXTRA_SIZE: usize = 1060;
if extra > MAX_EXTRA_SIZE {
Err(TransactionError::TooMuchData)?;
}
// This is an extremely heavy fee weight estimation which can only be trusted for two things
// 1) Ensuring we have enough for whatever fee we end up using
// 2) Ensuring we aren't over the max size
let estimated_tx_size = Transaction::fee_weight(protocol, inputs.len(), outputs, extra);
// The actual limit is half the block size, and for the minimum block size of 300k, that'd be
// 150k
// wallet2 will only create transactions up to 100k bytes however
const MAX_TX_SIZE: usize = 100_000;
// This uses the weight (estimated_tx_size) despite the BP clawback
// The clawback *increases* the weight, so this will over-estimate, yet it's still safe
if estimated_tx_size >= MAX_TX_SIZE {
Err(TransactionError::TooLargeTransaction)?;
}
// Calculate the fee.
let fee = fee_rate.calculate(estimated_tx_size);
// Make sure we have enough funds
let in_amount = inputs.iter().map(|input| input.commitment().amount).sum::<u64>();
let out_amount = payments.iter().map(|payment| payment.1).sum::<u64>() + fee;
if in_amount < out_amount {
Err(TransactionError::NotEnoughFunds(in_amount, out_amount))?;
}
if outputs > MAX_OUTPUTS {
Err(TransactionError::TooManyOutputs)?;
}
let mut payments = payments.into_iter().map(InternalPayment::Payment).collect::<Vec<_>>();
if let Some(change) = change_address {
payments.push(InternalPayment::Change(change, in_amount - out_amount));
}
Ok(Self { protocol, r_seed, inputs, payments, data, fee })
}
pub fn fee(&self) -> u64 {
self.fee
}
#[allow(clippy::type_complexity)]
fn prepare_payments(
seed: &Zeroizing<[u8; 32]>,
inputs: &[EdwardsPoint],
payments: &mut Vec<InternalPayment>,
uniqueness: [u8; 32],
) -> (EdwardsPoint, Vec<Zeroizing<Scalar>>, Vec<SendOutput>, Option<[u8; 8]>) {
let mut rng = {
// Hash the inputs into the seed so we don't re-use Rs
// Doesn't re-use uniqueness as that's based on key images, which require interactivity
// to generate. The output keys do not
// This remains private so long as the seed is private
let mut r_uniqueness = vec![];
for input in inputs {
r_uniqueness.extend(input.compress().to_bytes());
}
ChaCha20Rng::from_seed(hash(
&[b"monero-serai_outputs".as_ref(), seed.as_ref(), &r_uniqueness].concat(),
))
};
// Shuffle the payments
payments.shuffle(&mut rng);
// Used for all non-subaddress outputs, or if there's only one subaddress output and a change
let tx_key = Zeroizing::new(random_scalar(&mut rng));
let mut tx_public_key = tx_key.deref() * &ED25519_BASEPOINT_TABLE;
// If any of these outputs are to a subaddress, we need keys distinct to them
// The only time this *does not* force having additional keys is when the only other output
// is a change output we have the view key for, enabling rewriting rA to aR
let mut has_change_view = false;
let subaddresses = payments
.iter()
.filter(|payment| match *payment {
InternalPayment::Payment(payment) => payment.0.is_subaddress(),
InternalPayment::Change(change, _) => {
if change.view.is_some() {
has_change_view = true;
// It should not be possible to construct a change specification to a subaddress with a
// view key
debug_assert!(!change.address.is_subaddress());
}
change.address.is_subaddress()
}
})
.count() !=
0;
// We need additional keys if we have any subaddresses
// UNLESS there's only two payments and we have the view-key for the change output
let additional = if (payments.len() == 2) && has_change_view { false } else { subaddresses };
let modified_change_ecdh = subaddresses && (!additional);
// If we're using the aR rewrite, update tx_public_key from rG to rB
if modified_change_ecdh {
for payment in &*payments {
match payment {
InternalPayment::Payment(payment) => {
// This should be the only payment and it should be a subaddress
debug_assert!(payment.0.is_subaddress());
tx_public_key = tx_key.deref() * payment.0.spend;
}
InternalPayment::Change(_, _) => {}
}
}
debug_assert!(tx_public_key != (tx_key.deref() * &ED25519_BASEPOINT_TABLE));
}
// Actually create the outputs
let mut additional_keys = vec![];
let mut outputs = Vec::with_capacity(payments.len());
let mut id = None;
for (o, mut payment) in payments.drain(..).enumerate() {
// Downcast the change output to a payment output if it doesn't require special handling
// regarding its view key
payment = if !modified_change_ecdh {
if let InternalPayment::Change(change, amount) = &payment {
InternalPayment::Payment((change.address, *amount))
} else {
payment
}
} else {
payment
};
let (output, payment_id) = match payment {
InternalPayment::Payment(payment) => {
// If this is a subaddress, generate a dedicated r. Else, reuse the TX key
let dedicated = Zeroizing::new(random_scalar(&mut rng));
let use_dedicated = additional && payment.0.is_subaddress();
let r = if use_dedicated { &dedicated } else { &tx_key };
let (mut output, payment_id) = SendOutput::new(r, uniqueness, (o, payment));
if modified_change_ecdh {
debug_assert_eq!(tx_public_key, output.R);
}
if use_dedicated {
additional_keys.push(dedicated);
} else {
// If this used tx_key, randomize its R
// This is so when extra is created, there's a distinct R for it to use
output.R = dfg::EdwardsPoint::random(&mut rng).0;
}
(output, payment_id)
}
InternalPayment::Change(change, amount) => {
// Instead of rA, use Ra, where R is r * subaddress_spend_key
// change.view must be Some as if it's None, this payment would've been downcast
let ecdh = tx_public_key * change.view.unwrap().deref();
SendOutput::change(ecdh, uniqueness, (o, (change.address, amount)))
}
};
outputs.push(output);
id = id.or(payment_id);
}
// Include a random payment ID if we don't actually have one
// It prevents transactions from leaking whether they're sending to integrated addresses or not
// Only do this if we only have two outputs though, as Monero won't add a dummy if there's
// more than two outputs
if outputs.len() <= 2 {
let mut rand = [0; 8];
rng.fill_bytes(&mut rand);
id = id.or(Some(rand));
}
(tx_public_key, additional_keys, outputs, id)
}
#[allow(non_snake_case)]
fn extra(
tx_key: EdwardsPoint,
additional: bool,
Rs: Vec<EdwardsPoint>,
id: Option<[u8; 8]>,
data: &mut Vec<Vec<u8>>,
) -> Vec<u8> {
#[allow(non_snake_case)]
let Rs_len = Rs.len();
let mut extra = Extra::new(tx_key, if additional { Rs } else { vec![] });
if let Some(id) = id {
let mut id_vec = Vec::with_capacity(1 + 8);
PaymentId::Encrypted(id).write(&mut id_vec).unwrap();
extra.push(ExtraField::Nonce(id_vec));
}
// Include data if present
let extra_len = Extra::fee_weight(Rs_len, additional, id.is_some(), data.as_ref());
for part in data.drain(..) {
let mut arb = vec![ARBITRARY_DATA_MARKER];
arb.extend(part);
extra.push(ExtraField::Nonce(arb));
}
let mut serialized = Vec::with_capacity(extra_len);
extra.write(&mut serialized).unwrap();
debug_assert_eq!(extra_len, serialized.len());
serialized
}
/// Returns the eventuality of this transaction.
///
/// The eventuality is defined as the TX extra/outputs this transaction will create, if signed
/// with the specified seed. This eventuality can be compared to on-chain transactions to see
/// if the transaction has already been signed and published.
pub fn eventuality(&self) -> Option<Eventuality> {
let inputs = self.inputs.iter().map(SpendableOutput::key).collect::<Vec<_>>();
let (tx_key, additional, outputs, id) = Self::prepare_payments(
self.r_seed.as_ref()?,
&inputs,
&mut self.payments.clone(),
// Lie about the uniqueness; it's used when determining output keys/commitments, yet not the
// ephemeral keys, which is what we want here
// While we do still grab the outputs variable, it's only so we can get its Rs
[0; 32],
);
#[allow(non_snake_case)]
let Rs = outputs.iter().map(|output| output.R).collect();
drop(outputs);
let additional = !additional.is_empty();
let extra = Self::extra(tx_key, additional, Rs, id, &mut self.data.clone());
Some(Eventuality {
protocol: self.protocol,
r_seed: self.r_seed.clone()?,
inputs,
payments: self.payments.clone(),
extra,
})
}
fn prepare_transaction<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
uniqueness: [u8; 32],
) -> (Transaction, Scalar) {
// If no seed for the ephemeral keys was provided, make one
let r_seed = self.r_seed.clone().unwrap_or_else(|| {
let mut res = Zeroizing::new([0; 32]);
rng.fill_bytes(res.as_mut());
res
});
let (tx_key, additional, outputs, id) = Self::prepare_payments(
&r_seed,
&self.inputs.iter().map(SpendableOutput::key).collect::<Vec<_>>(),
&mut self.payments,
uniqueness,
);
// This function only cares if additional keys were necessary, not what they were
let additional = !additional.is_empty();
let commitments = outputs.iter().map(|output| output.commitment.clone()).collect::<Vec<_>>();
let sum = commitments.iter().map(|commitment| commitment.mask).sum();
// Safe due to the constructor checking MAX_OUTPUTS
let bp = Bulletproofs::prove(rng, &commitments, self.protocol.bp_plus()).unwrap();
// Create the TX extra
let extra = Self::extra(
tx_key,
additional,
outputs.iter().map(|output| output.R).collect(),
id,
&mut self.data,
);
let mut fee = self.inputs.iter().map(|input| input.commitment().amount).sum::<u64>();
let mut tx_outputs = Vec::with_capacity(outputs.len());
let mut encrypted_amounts = Vec::with_capacity(outputs.len());
for output in &outputs {
fee -= output.commitment.amount;
tx_outputs.push(Output {
amount: None,
key: output.dest.compress(),
view_tag: Some(output.view_tag).filter(|_| matches!(self.protocol, Protocol::v16)),
});
encrypted_amounts.push(EncryptedAmount::Compact { amount: output.amount });
}
(
Transaction {
prefix: TransactionPrefix {
version: 2,
timelock: Timelock::None,
inputs: vec![],
outputs: tx_outputs,
extra,
},
signatures: vec![],
rct_signatures: RctSignatures {
base: RctBase {
fee,
encrypted_amounts,
pseudo_outs: vec![],
commitments: commitments.iter().map(Commitment::calculate).collect(),
},
prunable: RctPrunable::Clsag { bulletproofs: bp, clsags: vec![], pseudo_outs: vec![] },
},
},
sum,
)
}
/// Sign this transaction.
pub async fn sign<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
mut self,
rng: &mut R,
rpc: &Rpc<RPC>,
spend: &Zeroizing<Scalar>,
) -> Result<Transaction, TransactionError> {
let mut images = Vec::with_capacity(self.inputs.len());
for input in &self.inputs {
let mut offset = Zeroizing::new(spend.deref() + input.key_offset());
if (offset.deref() * &ED25519_BASEPOINT_TABLE) != input.key() {
Err(TransactionError::WrongPrivateKey)?;
}
images.push(generate_key_image(&offset));
offset.zeroize();
}
images.sort_by(key_image_sort);
let (mut tx, mask_sum) = self.prepare_transaction(
rng,
uniqueness(
&images
.iter()
.map(|image| Input::ToKey { amount: None, key_offsets: vec![], key_image: *image })
.collect::<Vec<_>>(),
),
);
let signable =
prepare_inputs(rng, rpc, self.protocol.ring_len(), &self.inputs, spend, &mut tx).await?;
let clsag_pairs = Clsag::sign(rng, signable, mask_sum, tx.signature_hash());
match tx.rct_signatures.prunable {
RctPrunable::Null => panic!("Signing for RctPrunable::Null"),
RctPrunable::Clsag { ref mut clsags, ref mut pseudo_outs, .. } => {
clsags.append(&mut clsag_pairs.iter().map(|clsag| clsag.0.clone()).collect::<Vec<_>>());
pseudo_outs.append(&mut clsag_pairs.iter().map(|clsag| clsag.1).collect::<Vec<_>>());
}
RctPrunable::MlsagBorromean { .. } | RctPrunable::MlsagBulletproofs { .. } => {
unreachable!("attempted to sign a TX which wasn't CLSAG")
}
}
Ok(tx)
}
}
impl Eventuality {
/// Enables building a HashMap of Extra -> Eventuality for efficiently checking if an on-chain
/// transaction may match this eventuality.
///
/// This extra is cryptographically bound to:
/// 1) A specific set of inputs (via their output key)
/// 2) A specific seed for the ephemeral keys
///
/// This extra may be used with a transaction with a distinct set of inputs, yet no honest
/// transaction which doesn't satisfy this Eventuality will contain it.
pub fn extra(&self) -> &[u8] {
&self.extra
}
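// A hedged illustration (not in the original) of the HashMap pattern the doc comment above
// describes, assuming std's HashMap is in scope and `eventuality` and `tx` exist elsewhere:
//
//   let mut eventualities: HashMap<Vec<u8>, Eventuality> = HashMap::new();
//   eventualities.insert(eventuality.extra().to_vec(), eventuality);
//   if let Some(e) = eventualities.get(tx.prefix.extra.as_slice()) {
//     let fulfilled = e.matches(&tx);
//   }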
#[must_use]
pub fn matches(&self, tx: &Transaction) -> bool {
if self.payments.len() != tx.prefix.outputs.len() {
return false;
}
// Verify extra.
// Even if all the outputs were correct, a malicious extra could still cause a recipient to
// fail to receive their funds.
// This is the cheapest check available to perform as it does not require TX-specific ECC ops.
if self.extra != tx.prefix.extra {
return false;
}
// Also ensure no timelock was set.
if tx.prefix.timelock != Timelock::None {
return false;
}
// Generate the outputs. This is TX-specific due to uniqueness.
let (_, _, outputs, _) = SignableTransaction::prepare_payments(
&self.r_seed,
&self.inputs,
&mut self.payments.clone(),
uniqueness(&tx.prefix.inputs),
);
let rct_type = tx.rct_signatures.rct_type();
if rct_type != self.protocol.optimal_rct_type() {
return false;
}
// TODO: Remove this when the following for loop is updated
assert!(
rct_type.compact_encrypted_amounts(),
"created an Eventuality for a very old RctType we don't support proving for"
);
for (o, (expected, actual)) in outputs.iter().zip(tx.prefix.outputs.iter()).enumerate() {
// Verify the output, commitment, and encrypted amount.
if (&Output {
amount: None,
key: expected.dest.compress(),
view_tag: Some(expected.view_tag).filter(|_| matches!(self.protocol, Protocol::v16)),
} != actual) ||
(Some(&expected.commitment.calculate()) != tx.rct_signatures.base.commitments.get(o)) ||
(Some(&EncryptedAmount::Compact { amount: expected.amount }) !=
tx.rct_signatures.base.encrypted_amounts.get(o))
{
return false;
}
}
true
}
pub fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
self.protocol.write(w)?;
write_raw_vec(write_byte, self.r_seed.as_ref(), w)?;
write_vec(write_point, &self.inputs, w)?;
fn write_payment<W: io::Write>(payment: &InternalPayment, w: &mut W) -> io::Result<()> {
match payment {
InternalPayment::Payment(payment) => {
w.write_all(&[0])?;
write_vec(write_byte, payment.0.to_string().as_bytes(), w)?;
w.write_all(&payment.1.to_le_bytes())
}
InternalPayment::Change(change, amount) => {
w.write_all(&[1])?;
write_vec(write_byte, change.address.to_string().as_bytes(), w)?;
if let Some(view) = change.view.as_ref() {
w.write_all(&[1])?;
write_scalar(view, w)?;
} else {
w.write_all(&[0])?;
}
w.write_all(&amount.to_le_bytes())
}
}
}
write_vec(write_payment, &self.payments, w)?;
write_vec(write_byte, &self.extra, w)
}
pub fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(128);
self.write(&mut buf).unwrap();
buf
}
pub fn read<R: io::Read>(r: &mut R) -> io::Result<Self> {
fn read_address<R: io::Read>(r: &mut R) -> io::Result<MoneroAddress> {
String::from_utf8(read_vec(read_byte, r)?)
.ok()
.and_then(|str| MoneroAddress::from_str_raw(&str).ok())
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid address"))
}
fn read_payment<R: io::Read>(r: &mut R) -> io::Result<InternalPayment> {
Ok(match read_byte(r)? {
0 => InternalPayment::Payment((read_address(r)?, read_u64(r)?)),
1 => InternalPayment::Change(
Change {
address: read_address(r)?,
view: match read_byte(r)? {
0 => None,
1 => Some(Zeroizing::new(read_scalar(r)?)),
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid change payment"))?,
},
},
read_u64(r)?,
),
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid payment"))?,
})
}
Ok(Self {
protocol: Protocol::read(r)?,
r_seed: Zeroizing::new(read_bytes::<_, 32>(r)?),
inputs: read_vec(read_point, r)?,
payments: read_vec(read_payment, r)?,
extra: read_vec(read_byte, r)?,
})
}
}


@@ -1,442 +0,0 @@
use std_shims::{
sync::Arc,
vec::Vec,
io::{self, Read},
collections::HashMap,
};
use std::sync::RwLock;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use group::ff::Field;
use curve25519_dalek::{traits::Identity, scalar::Scalar, edwards::EdwardsPoint};
use dalek_ff_group as dfg;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Ed25519,
Participant, FrostError, ThresholdKeys,
sign::{
Writable, Preprocess, CachedPreprocess, SignatureShare, PreprocessMachine, SignMachine,
SignatureMachine, AlgorithmMachine, AlgorithmSignMachine, AlgorithmSignatureMachine,
},
};
use crate::{
random_scalar,
ringct::{
clsag::{ClsagInput, ClsagDetails, ClsagAddendum, ClsagMultisig, add_key_image_share},
RctPrunable,
},
transaction::{Input, Transaction},
rpc::{RpcConnection, Rpc},
wallet::{
TransactionError, InternalPayment, SignableTransaction, Decoys, key_image_sort, uniqueness,
},
};
/// FROST signing machine to produce a signed transaction.
pub struct TransactionMachine {
signable: SignableTransaction,
i: Participant,
transcript: RecommendedTranscript,
decoys: Vec<Decoys>,
// Hashed key and scalar offset
key_images: Vec<(EdwardsPoint, Scalar)>,
inputs: Vec<Arc<RwLock<Option<ClsagDetails>>>>,
clsags: Vec<AlgorithmMachine<Ed25519, ClsagMultisig>>,
}
pub struct TransactionSignMachine {
signable: SignableTransaction,
i: Participant,
transcript: RecommendedTranscript,
decoys: Vec<Decoys>,
key_images: Vec<(EdwardsPoint, Scalar)>,
inputs: Vec<Arc<RwLock<Option<ClsagDetails>>>>,
clsags: Vec<AlgorithmSignMachine<Ed25519, ClsagMultisig>>,
our_preprocess: Vec<Preprocess<Ed25519, ClsagAddendum>>,
}
pub struct TransactionSignatureMachine {
tx: Transaction,
clsags: Vec<AlgorithmSignatureMachine<Ed25519, ClsagMultisig>>,
}
impl SignableTransaction {
/// Create a FROST signing machine out of this signable transaction.
/// The height is the Monero blockchain height to synchronize around.
pub async fn multisig<RPC: RpcConnection>(
self,
rpc: &Rpc<RPC>,
keys: ThresholdKeys<Ed25519>,
mut transcript: RecommendedTranscript,
height: usize,
) -> Result<TransactionMachine, TransactionError> {
let mut inputs = vec![];
for _ in 0 .. self.inputs.len() {
// Doesn't resize as that will use a single Rc for the entire Vec
inputs.push(Arc::new(RwLock::new(None)));
}
let mut clsags = vec![];
// Create a RNG out of the input shared keys, which either requires the view key or being every
// sender, and the payments (address and amount), which a passive adversary may be able to know
// depending on how these transactions are coordinated
// Being every sender would already let you note rings which happen to use your transactions
// multiple times, already breaking privacy there
transcript.domain_separate(b"monero_transaction");
// Include the height we're using for our data
// The data itself will be included, making this unnecessary, yet a lot of this is technically
// unnecessary. Anything which further increases security at almost no cost should be followed
transcript.append_message(b"height", u64::try_from(height).unwrap().to_le_bytes());
// Also include the spend_key as below only the key offset is included, so this transcripts the
// sum product
// Useful as transcripting the sum product effectively transcripts the key image, further
// guaranteeing the one time properties noted below
transcript.append_message(b"spend_key", keys.group_key().0.compress().to_bytes());
if let Some(r_seed) = &self.r_seed {
transcript.append_message(b"r_seed", r_seed);
}
for input in &self.inputs {
// These outputs can only be spent once. Therefore, it forces all RNGs derived from this
// transcript (such as the one used to create one time keys) to be unique
transcript.append_message(b"input_hash", input.output.absolute.tx);
transcript.append_message(b"input_output_index", [input.output.absolute.o]);
// Not including this, with a doxxed list of payments, would allow brute forcing the inputs
// to determine RNG seeds and therefore the true spends
transcript.append_message(b"input_shared_key", input.key_offset().to_bytes());
}
for payment in &self.payments {
match payment {
InternalPayment::Payment(payment) => {
transcript.append_message(b"payment_address", payment.0.to_string().as_bytes());
transcript.append_message(b"payment_amount", payment.1.to_le_bytes());
}
InternalPayment::Change(change, amount) => {
transcript.append_message(b"change_address", change.address.to_string().as_bytes());
if let Some(view) = change.view.as_ref() {
transcript.append_message(b"change_view_key", Zeroizing::new(view.to_bytes()));
}
transcript.append_message(b"change_amount", amount.to_le_bytes());
}
}
}
let mut key_images = vec![];
for (i, input) in self.inputs.iter().enumerate() {
// Check this is the right set of keys
let offset = keys.offset(dfg::Scalar(input.key_offset()));
if offset.group_key().0 != input.key() {
Err(TransactionError::WrongPrivateKey)?;
}
let clsag = ClsagMultisig::new(transcript.clone(), input.key(), inputs[i].clone());
key_images.push((
clsag.H,
keys.current_offset().unwrap_or(dfg::Scalar::ZERO).0 + self.inputs[i].key_offset(),
));
clsags.push(AlgorithmMachine::new(clsag, offset));
}
// Select decoys
// Ideally, this would be done post entropy, instead of now, yet doing so would require sign
// to be async which isn't preferable. This should be suitably competent though
// While this inability means we can immediately create the input, moving it out of the
// Arc RwLock, keeping it within an Arc RwLock keeps our options flexible
let decoys = Decoys::select(
// Using a seeded RNG with a specific height, committed to above, should make these decoys
// committed to. They'll also be committed to later via the TX message as a whole
&mut ChaCha20Rng::from_seed(transcript.rng_seed(b"decoys")),
rpc,
self.protocol.ring_len(),
height,
&self.inputs,
)
.await
.map_err(TransactionError::RpcError)?;
Ok(TransactionMachine {
signable: self,
i: keys.params().i(),
transcript,
decoys,
key_images,
inputs,
clsags,
})
}
}
impl PreprocessMachine for TransactionMachine {
type Preprocess = Vec<Preprocess<Ed25519, ClsagAddendum>>;
type Signature = Transaction;
type SignMachine = TransactionSignMachine;
fn preprocess<R: RngCore + CryptoRng>(
mut self,
rng: &mut R,
) -> (TransactionSignMachine, Self::Preprocess) {
// Iterate over each CLSAG calling preprocess
let mut preprocesses = Vec::with_capacity(self.clsags.len());
let clsags = self
.clsags
.drain(..)
.map(|clsag| {
let (clsag, preprocess) = clsag.preprocess(rng);
preprocesses.push(preprocess);
clsag
})
.collect();
let our_preprocess = preprocesses.clone();
// We could add further entropy here, and previous versions of this library did so
// As of right now, the multisig's key, the inputs being spent, and the FROST data itself
// will be used for RNG seeds. In order to recreate these RNG seeds, breaking privacy,
// counterparties must have knowledge of the multisig, either the view key or access to the
// coordination layer, and then access to the actual FROST signing process
// If the commitments are sent in plain text, then entropy here also would be, making it not
// increase privacy. If they're not sent in plain text, or are otherwise inaccessible, they
// already offer sufficient entropy. That's why further entropy is not included
(
TransactionSignMachine {
signable: self.signable,
i: self.i,
transcript: self.transcript,
decoys: self.decoys,
key_images: self.key_images,
inputs: self.inputs,
clsags,
our_preprocess,
},
preprocesses,
)
}
}
impl SignMachine<Transaction> for TransactionSignMachine {
type Params = ();
type Keys = ThresholdKeys<Ed25519>;
type Preprocess = Vec<Preprocess<Ed25519, ClsagAddendum>>;
type SignatureShare = Vec<SignatureShare<Ed25519>>;
type SignatureMachine = TransactionSignatureMachine;
fn cache(self) -> CachedPreprocess {
unimplemented!(
"Monero transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn from_cache(_: (), _: ThresholdKeys<Ed25519>, _: CachedPreprocess) -> Result<Self, FrostError> {
unimplemented!(
"Monero transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
self.clsags.iter().map(|clsag| clsag.read_preprocess(reader)).collect()
}
fn sign(
mut self,
mut commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
assert!(
msg.is_empty(),
"message was passed to the TransactionMachine when it generates its own"
);
// Find out who's included
// This may not be a valid set of signers yet the algorithm machine will error if it's not
commitments.remove(&self.i); // Remove, if it was included for some reason
let mut included = commitments.keys().copied().collect::<Vec<_>>();
included.push(self.i);
included.sort_unstable();
// Convert the unified commitments to a Vec of the individual commitments
let mut images = vec![EdwardsPoint::identity(); self.clsags.len()];
let mut commitments = (0 .. self.clsags.len())
.map(|c| {
included
.iter()
.map(|l| {
// Add all commitments to the transcript for their entropy
// While each CLSAG will do this as they need to for security, they have their own
// transcripts cloned from this TX's initial premise's transcript. For our TX
// transcript to have the CLSAG data for entropy, it'll have to be added ourselves here
self.transcript.append_message(b"participant", (*l).to_bytes());
let preprocess = if *l == self.i {
self.our_preprocess[c].clone()
} else {
commitments.get_mut(l).ok_or(FrostError::MissingParticipant(*l))?[c].clone()
};
{
let mut buf = vec![];
preprocess.write(&mut buf).unwrap();
self.transcript.append_message(b"preprocess", buf);
}
// While here, calculate the key image
// Clsag will parse/calculate/validate this as needed, yet doing so here as well
// provides the easiest API overall, as this is where the TX is (which needs the key
// images in its message), along with where the outputs are determined (where our
// outputs may need these in order to guarantee uniqueness)
add_key_image_share(
&mut images[c],
self.key_images[c].0,
self.key_images[c].1,
&included,
*l,
preprocess.addendum.key_image.0,
);
Ok((*l, preprocess))
})
.collect::<Result<HashMap<_, _>, _>>()
})
.collect::<Result<Vec<_>, _>>()?;
// Remove our preprocess which shouldn't be here. It was just the easiest way to implement the
// above
for map in &mut commitments {
map.remove(&self.i);
}
// Create the actual transaction
let (mut tx, output_masks) = {
let mut sorted_images = images.clone();
sorted_images.sort_by(key_image_sort);
self.signable.prepare_transaction(
// Technically, r_seed is used for the transaction keys if it's provided
&mut ChaCha20Rng::from_seed(self.transcript.rng_seed(b"transaction_keys_bulletproofs")),
uniqueness(
&sorted_images
.iter()
.map(|image| Input::ToKey { amount: None, key_offsets: vec![], key_image: *image })
.collect::<Vec<_>>(),
),
)
};
// Sort the inputs, as expected
let mut sorted = Vec::with_capacity(self.clsags.len());
while !self.clsags.is_empty() {
sorted.push((
images.swap_remove(0),
self.signable.inputs.swap_remove(0),
self.decoys.swap_remove(0),
self.inputs.swap_remove(0),
self.clsags.swap_remove(0),
commitments.swap_remove(0),
));
}
sorted.sort_by(|x, y| key_image_sort(&x.0, &y.0));
let mut rng = ChaCha20Rng::from_seed(self.transcript.rng_seed(b"pseudo_out_masks"));
let mut sum_pseudo_outs = Scalar::zero();
while !sorted.is_empty() {
let value = sorted.remove(0);
let mask = if sorted.is_empty() {
output_masks - sum_pseudo_outs
} else {
let mask = random_scalar(&mut rng);
sum_pseudo_outs += mask;
mask
};
tx.prefix.inputs.push(Input::ToKey {
amount: None,
key_offsets: value.2.offsets.clone(),
key_image: value.0,
});
*value.3.write().unwrap() = Some(ClsagDetails::new(
ClsagInput::new(value.1.commitment().clone(), value.2).map_err(|_| {
panic!("Signing an input which isn't present in the ring we created for it")
})?,
mask,
));
self.clsags.push(value.4);
commitments.push(value.5);
}
let msg = tx.signature_hash();
// Iterate over each CLSAG calling sign
let mut shares = Vec::with_capacity(self.clsags.len());
let clsags = self
.clsags
.drain(..)
.map(|clsag| {
let (clsag, share) = clsag.sign(commitments.remove(0), &msg)?;
shares.push(share);
Ok(clsag)
})
.collect::<Result<_, _>>()?;
Ok((TransactionSignatureMachine { tx, clsags }, shares))
}
}
impl SignatureMachine<Transaction> for TransactionSignatureMachine {
type SignatureShare = Vec<SignatureShare<Ed25519>>;
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
self.clsags.iter().map(|clsag| clsag.read_share(reader)).collect()
}
fn complete(
mut self,
shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
let mut tx = self.tx;
match tx.rct_signatures.prunable {
RctPrunable::Null => panic!("Signing for RctPrunable::Null"),
RctPrunable::Clsag { ref mut clsags, ref mut pseudo_outs, .. } => {
for (c, clsag) in self.clsags.drain(..).enumerate() {
let (clsag, pseudo_out) = clsag.complete(
shares.iter().map(|(l, shares)| (*l, shares[c].clone())).collect::<HashMap<_, _>>(),
)?;
clsags.push(clsag);
pseudo_outs.push(pseudo_out);
}
}
RctPrunable::MlsagBorromean { .. } | RctPrunable::MlsagBulletproofs { .. } => {
unreachable!("attempted to sign a multisig TX which wasn't CLSAG")
}
}
Ok(tx)
}
}


@@ -1,287 +0,0 @@
use core::ops::Deref;
use std_shims::{sync::OnceLock, collections::HashSet};
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
use tokio::sync::Mutex;
use monero_serai::{
random_scalar,
rpc::{HttpRpc, Rpc},
wallet::{
ViewPair, Scanner,
address::{Network, AddressType, AddressSpec, AddressMeta, MoneroAddress},
SpendableOutput,
},
};
pub fn random_address() -> (Scalar, ViewPair, MoneroAddress) {
let spend = random_scalar(&mut OsRng);
let spend_pub = &spend * &ED25519_BASEPOINT_TABLE;
let view = Zeroizing::new(random_scalar(&mut OsRng));
(
spend,
ViewPair::new(spend_pub, view.clone()),
MoneroAddress {
meta: AddressMeta::new(Network::Mainnet, AddressType::Standard),
spend: spend_pub,
view: view.deref() * &ED25519_BASEPOINT_TABLE,
},
)
}
// TODO: Support transactions already on-chain
// TODO: Don't have a side effect of mining more blocks than needed under race conditions
// TODO: mine as much as needed instead of default 10 blocks
pub async fn mine_until_unlocked(rpc: &Rpc<HttpRpc>, addr: &str, tx_hash: [u8; 32]) {
// mine until tx is in a block
let mut height = rpc.get_height().await.unwrap();
let mut found = false;
while !found {
let block = rpc.get_block_by_number(height - 1).await.unwrap();
found = match block.txs.iter().find(|&&x| x == tx_hash) {
Some(_) => true,
None => {
rpc.generate_blocks(addr, 1).await.unwrap();
height += 1;
false
}
}
}
// mine 9 more blocks to unlock the tx
rpc.generate_blocks(addr, 9).await.unwrap();
}
// Mines 60 blocks and returns an unlocked miner TX output.
#[allow(dead_code)]
pub async fn get_miner_tx_output(rpc: &Rpc<HttpRpc>, view: &ViewPair) -> SpendableOutput {
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
// Mine 60 blocks to unlock a miner TX
let start = rpc.get_height().await.unwrap();
rpc
.generate_blocks(&view.address(Network::Mainnet, AddressSpec::Standard).to_string(), 60)
.await
.unwrap();
let block = rpc.get_block_by_number(start).await.unwrap();
scanner.scan(rpc, &block).await.unwrap().swap_remove(0).ignore_timelock().swap_remove(0)
}
pub async fn rpc() -> Rpc<HttpRpc> {
let rpc = HttpRpc::new("http://127.0.0.1:18081".to_string()).unwrap();
// Only run once
if rpc.get_height().await.unwrap() != 1 {
return rpc;
}
let addr = MoneroAddress {
meta: AddressMeta::new(Network::Mainnet, AddressType::Standard),
spend: &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
view: &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
}
.to_string();
// Mine 40 blocks to ensure decoy availability
rpc.generate_blocks(&addr, 40).await.unwrap();
// Make sure we recognize the protocol
rpc.get_protocol().await.unwrap();
rpc
}
pub static SEQUENTIAL: OnceLock<Mutex<()>> = OnceLock::new();
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[allow(clippy::tests_outside_test_module)]
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL.get_or_init(|| tokio::sync::Mutex::new(())).lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
drop(guard);
Err(err).unwrap()
}
}).await;
}
)*
}
}
#[macro_export]
macro_rules! test {
(
$name: ident,
(
$first_tx: expr,
$first_checks: expr,
),
$((
$tx: expr,
$checks: expr,
)$(,)?),*
) => {
async_sequential! {
async fn $name() {
use core::{ops::Deref, any::Any};
use std::collections::HashSet;
#[cfg(feature = "multisig")]
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::constants::ED25519_BASEPOINT_TABLE;
#[cfg(feature = "multisig")]
use transcript::{Transcript, RecommendedTranscript};
#[cfg(feature = "multisig")]
use frost::{
curve::Ed25519,
Participant,
tests::{THRESHOLD, key_gen},
};
use monero_serai::{
random_scalar,
wallet::{
address::{Network, AddressSpec}, ViewPair, Scanner, Change, SignableTransaction,
SignableTransactionBuilder,
},
};
use runner::{random_address, rpc, mine_until_unlocked, get_miner_tx_output};
type Builder = SignableTransactionBuilder;
// Run each function as both a single signer and as a multisig
#[allow(clippy::redundant_closure_call)]
for multisig in [false, true] {
// Only run the multisig variant if multisig is enabled
if multisig {
#[cfg(not(feature = "multisig"))]
continue;
}
let spend = Zeroizing::new(random_scalar(&mut OsRng));
#[cfg(feature = "multisig")]
let keys = key_gen::<_, Ed25519>(&mut OsRng);
let spend_pub = if !multisig {
spend.deref() * &ED25519_BASEPOINT_TABLE
} else {
#[cfg(not(feature = "multisig"))]
panic!("Multisig branch called without the multisig feature");
#[cfg(feature = "multisig")]
keys[&Participant::new(1).unwrap()].group_key().0
};
let rpc = rpc().await;
let view = ViewPair::new(spend_pub, Zeroizing::new(random_scalar(&mut OsRng)));
let addr = view.address(Network::Mainnet, AddressSpec::Standard);
let miner_tx = get_miner_tx_output(&rpc, &view).await;
let builder = SignableTransactionBuilder::new(
rpc.get_protocol().await.unwrap(),
rpc.get_fee().await.unwrap(),
Some(Change::new(
&ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng))
),
false
)),
);
let sign = |tx: SignableTransaction| {
let rpc = rpc.clone();
let spend = spend.clone();
#[cfg(feature = "multisig")]
let keys = keys.clone();
async move {
if !multisig {
tx.sign(&mut OsRng, &rpc, &spend).await.unwrap()
} else {
#[cfg(not(feature = "multisig"))]
panic!("Multisig branch called without the multisig feature");
#[cfg(feature = "multisig")]
{
let mut machines = HashMap::new();
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx
.clone()
.multisig(
&rpc,
keys[&i].clone(),
RecommendedTranscript::new(b"Monero Serai Test Transaction"),
rpc.get_height().await.unwrap() - 10,
)
.await
.unwrap(),
);
}
frost::tests::sign_without_caching(&mut OsRng, machines, &[])
}
}
}
};
// TODO: Generate a distinct wallet for each transaction to prevent overlap
let next_addr = addr;
let temp = Box::new({
let mut builder = builder.clone();
builder.add_input(miner_tx);
let (tx, state) = ($first_tx)(rpc.clone(), builder, next_addr).await;
let signed = sign(tx).await;
rpc.publish_transaction(&signed).await.unwrap();
mine_until_unlocked(&rpc, &random_address().2.to_string(), signed.hash()).await;
let tx = rpc.get_transaction(signed.hash()).await.unwrap();
let scanner =
Scanner::from_view(view.clone(), Some(HashSet::new()));
($first_checks)(rpc.clone(), tx, scanner, state).await
});
#[allow(unused_variables, unused_mut, unused_assignments)]
let mut carried_state: Box<dyn Any> = temp;
$(
let (tx, state) = ($tx)(
rpc.clone(),
builder.clone(),
next_addr,
*carried_state.downcast().unwrap()
).await;
let signed = sign(tx).await;
rpc.publish_transaction(&signed).await.unwrap();
mine_until_unlocked(&rpc, &random_address().2.to_string(), signed.hash()).await;
let tx = rpc.get_transaction(signed.hash()).await.unwrap();
#[allow(unused_assignments)]
{
let scanner =
Scanner::from_view(view.clone(), Some(HashSet::new()));
carried_state =
Box::new(($checks)(rpc.clone(), tx, scanner, state).await);
}
)*
}
}
}
}
}


@@ -1,300 +0,0 @@
use rand::RngCore;
use monero_serai::{transaction::Transaction, wallet::address::SubaddressIndex};
mod runner;
test!(
scan_standard_address,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Standard), 5);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
scan_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 1).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Subaddress(subaddress)), 5);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_integrated_address,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Integrated(payment_id)), 5);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, state.1);
},
),
);
test!(
scan_featured_standard,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: false },
),
5,
);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
scan_featured_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 2).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: None,
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_featured_integrated,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: None,
payment_id: Some(payment_id),
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, state.1);
},
),
);
test!(
scan_featured_integrated_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 3).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: Some(payment_id),
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8], SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, state.1);
assert_eq!(output.metadata.subaddress, Some(state.2));
},
),
);
test!(
scan_guaranteed_standard,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), None);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: true },
),
5,
);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
scan_guaranteed_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(1, 0).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), None);
scanner.register_subaddress(subaddress);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: None,
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_guaranteed_integrated,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), None);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: None,
payment_id: Some(payment_id),
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, state.1);
},
),
);
test!(
scan_guaranteed_integrated_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(1, 1).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), None);
scanner.register_subaddress(subaddress);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: Some(payment_id),
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8], SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, state.1);
assert_eq!(output.metadata.subaddress, Some(state.2));
},
),
);


@@ -1,118 +0,0 @@
use monero_serai::{
transaction::Transaction,
wallet::{extra::Extra, address::SubaddressIndex, ReceivedOutput, SpendableOutput},
rpc::Rpc,
};
mod runner;
test!(
spend_miner_output,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 5);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, _| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
spend_multiple_outputs,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
builder.add_payment(addr, 2000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, _| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
assert_eq!(outputs[1].commitment().amount, 2000000000000);
outputs
},
),
(
|rpc, mut builder: Builder, addr, outputs: Vec<ReceivedOutput>| async move {
for output in outputs {
builder.add_input(SpendableOutput::from(&rpc, output).await.unwrap());
}
builder.add_payment(addr, 6);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, _| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 6);
},
),
);
test!(
// Ideally, this would be single_R, yet it isn't feasible to apply allow(non_snake_case) here
single_r_subaddress_send,
(
// Consume this builder for an output we can use in the future
// This is needed because we can't get the input from the passed in builder
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, _| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|rpc: Rpc<_>, _, _, mut outputs: Vec<ReceivedOutput>| async move {
let change_view = ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
let mut builder = SignableTransactionBuilder::new(
rpc.get_protocol().await.unwrap(),
rpc.get_fee().await.unwrap(),
Some(Change::new(&change_view, false)),
);
builder.add_input(SpendableOutput::from(&rpc, outputs.swap_remove(0)).await.unwrap());
// Send to a subaddress
let sub_view = ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
builder.add_payment(
sub_view
.address(Network::Mainnet, AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap())),
1,
);
(builder.build().unwrap(), (change_view, sub_view))
},
|_, tx: Transaction, _, views: (ViewPair, ViewPair)| async move {
// Make sure the change can pick up its output
let mut change_scanner = Scanner::from_view(views.0, Some(HashSet::new()));
assert!(change_scanner.scan_transaction(&tx).not_locked().len() == 1);
// Make sure the subaddress can pick up its output
let mut sub_scanner = Scanner::from_view(views.1, Some(HashSet::new()));
sub_scanner.register_subaddress(SubaddressIndex::new(0, 1).unwrap());
let sub_outputs = sub_scanner.scan_transaction(&tx).not_locked();
assert!(sub_outputs.len() == 1);
assert_eq!(sub_outputs[0].commitment().amount, 1);
// Make sure only one R was included in TX extra
assert!(Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref())
.unwrap()
.keys()
.unwrap()
.1
.is_none());
},
),
);


@@ -1,247 +0,0 @@
use std::{
collections::{HashSet, HashMap},
str::FromStr,
};
use rand_core::{OsRng, RngCore};
use serde::Deserialize;
use serde_json::json;
use monero_rpc::{
monero::{
Amount, Address,
cryptonote::{hash::Hash, subaddress::Index},
util::address::PaymentId,
},
TransferOptions, WalletClient,
};
use monero_serai::{
transaction::Transaction,
rpc::{HttpRpc, Rpc},
wallet::{
address::{Network, AddressSpec, SubaddressIndex, MoneroAddress},
extra::{MAX_TX_EXTRA_NONCE_SIZE, Extra},
Scanner,
},
};
mod runner;
async fn make_integrated_address(payment_id: [u8; 8]) -> String {
#[derive(Deserialize, Debug)]
struct IntegratedAddressResponse {
integrated_address: String,
}
let rpc = HttpRpc::new("http://127.0.0.1:6061".to_string()).unwrap();
let res = rpc
.json_rpc_call::<IntegratedAddressResponse>(
"make_integrated_address",
Some(json!({ "payment_id": hex::encode(payment_id) })),
)
.await
.unwrap();
res.integrated_address
}
async fn initialize_rpcs() -> (WalletClient, Rpc<HttpRpc>, monero_rpc::monero::Address) {
let wallet_rpc =
monero_rpc::RpcClientBuilder::new().build("http://127.0.0.1:6061").unwrap().wallet();
let daemon_rpc = runner::rpc().await;
let address_resp = wallet_rpc.get_address(0, None).await;
let wallet_rpc_addr = if address_resp.is_ok() {
address_resp.unwrap().address
} else {
wallet_rpc.create_wallet("wallet".to_string(), None, "English".to_string()).await.unwrap();
let addr = wallet_rpc.get_address(0, None).await.unwrap().address;
daemon_rpc.generate_blocks(&addr.to_string(), 70).await.unwrap();
addr
};
(wallet_rpc, daemon_rpc, wallet_rpc_addr)
}
async fn from_wallet_rpc_to_self(spec: AddressSpec) {
// initialize rpc
let (wallet_rpc, daemon_rpc, wallet_rpc_addr) = initialize_rpcs().await;
// make an addr
let (_, view_pair, _) = runner::random_address();
let addr = Address::from_str(&view_pair.address(Network::Mainnet, spec).to_string()).unwrap();
// refresh & make a tx
wallet_rpc.refresh(None).await.unwrap();
let tx = wallet_rpc
.transfer(
HashMap::from([(addr, Amount::ONE_XMR)]),
monero_rpc::TransferPriority::Default,
TransferOptions::default(),
)
.await
.unwrap();
let tx_hash: [u8; 32] = tx.tx_hash.0.try_into().unwrap();
// unlock it
runner::mine_until_unlocked(&daemon_rpc, &wallet_rpc_addr.to_string(), tx_hash).await;
// create the scanner
let mut scanner = Scanner::from_view(view_pair, Some(HashSet::new()));
if let AddressSpec::Subaddress(index) = spec {
scanner.register_subaddress(index);
}
// retrieve it and confirm
let tx = daemon_rpc.get_transaction(tx_hash).await.unwrap();
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
match spec {
AddressSpec::Subaddress(index) => assert_eq!(output.metadata.subaddress, Some(index)),
AddressSpec::Integrated(payment_id) => {
assert_eq!(output.metadata.payment_id, payment_id);
assert_eq!(output.metadata.subaddress, None);
}
AddressSpec::Standard | AddressSpec::Featured { .. } => {
assert_eq!(output.metadata.subaddress, None)
}
}
assert_eq!(output.commitment().amount, 1000000000000);
}
async_sequential!(
async fn receipt_of_wallet_rpc_tx_standard() {
from_wallet_rpc_to_self(AddressSpec::Standard).await;
}
async fn receipt_of_wallet_rpc_tx_subaddress() {
from_wallet_rpc_to_self(AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap())).await;
}
async fn receipt_of_wallet_rpc_tx_integrated() {
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
from_wallet_rpc_to_self(AddressSpec::Integrated(payment_id)).await;
}
);
test!(
send_to_wallet_rpc_standard,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, wallet_rpc_addr) = initialize_rpcs().await;
// add destination
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &wallet_rpc_addr.to_string()).unwrap(),
1000000,
);
(builder.build().unwrap(), (wallet_rpc,))
},
|_, tx: Transaction, _, data: (WalletClient,)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
},
),
);
test!(
send_to_wallet_rpc_subaddress,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, _) = initialize_rpcs().await;
// make the addr
let (subaddress, index) = wallet_rpc.create_address(0, None).await.unwrap();
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &subaddress.to_string()).unwrap(),
1000000,
);
(builder.build().unwrap(), (wallet_rpc, index))
},
|_, tx: Transaction, _, data: (WalletClient, u32)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: data.1 });
// Make sure only one R was included in TX extra
assert!(Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref())
.unwrap()
.keys()
.unwrap()
.1
.is_none());
},
),
);
test!(
send_to_wallet_rpc_integrated,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, _) = initialize_rpcs().await;
// make the addr
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
let addr = make_integrated_address(payment_id).await;
builder.add_payment(MoneroAddress::from_str(Network::Mainnet, &addr).unwrap(), 1000000);
(builder.build().unwrap(), (wallet_rpc, payment_id))
},
|_, tx: Transaction, _, data: (WalletClient, [u8; 8])| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
assert_eq!(transfer.payment_id.0, PaymentId::from_slice(&data.1));
},
),
);
test!(
send_to_wallet_rpc_with_arb_data,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, wallet_rpc_addr) = initialize_rpcs().await;
// add destination
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &wallet_rpc_addr.to_string()).unwrap(),
1000000,
);
// Add 2 pieces of data, each filling the full 255 bytes
for _ in 0 .. 2 {
// Subtract 1 since we prefix data with 127
let data = vec![b'a'; MAX_TX_EXTRA_NONCE_SIZE - 1];
builder.add_data(data).unwrap();
}
(builder.build().unwrap(), (wallet_rpc,))
},
|_, tx: Transaction, _, data: (WalletClient,)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
},
),
);


@@ -1,13 +1,25 @@
 [package]
 name = "serai-db"
-version = "0.1.0"
+version = "0.1.1"
 description = "A simple database trait and backends for it"
 license = "MIT"
 repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
 authors = ["Luke Parker <lukeparker5132@gmail.com>"]
 keywords = []
 edition = "2021"
+rust-version = "1.71"
 [package.metadata.docs.rs]
 all-features = true
 rustdoc-args = ["--cfg", "docsrs"]
+[lints]
+workspace = true
+[dependencies]
+parity-db = { version = "0.4", default-features = false, optional = true }
+rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true }
+[features]
+parity-db = ["dep:parity-db"]
+rocksdb = ["dep:rocksdb"]

common/db/README.md Normal file

@@ -0,0 +1,8 @@
# Serai DB
An inefficient, minimal abstraction around databases.
The abstraction offers `get`, `put`, and `del` with helper functions and macros
built on top. Database iteration is not offered, forcing the caller to manually
implement indexing schemes. This ensures wide compatibility across abstracted
databases.
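
As a minimal usage sketch (using the crate's own `MemDb` backend; the key and value bytes are arbitrary):

```rust
use serai_db::{Get, DbTxn, Db, MemDb};

let mut db = MemDb::new();
let mut txn = db.txn();
txn.put(b"key", b"value");
assert_eq!(txn.get(b"key"), Some(b"value".to_vec()));
txn.del(b"key");
txn.commit();
```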

common/db/src/create_db.rs Normal file

@@ -0,0 +1,180 @@
#[doc(hidden)]
pub fn serai_db_key(
db_dst: &'static [u8],
item_dst: &'static [u8],
key: impl AsRef<[u8]>,
) -> Vec<u8> {
let db_len = u8::try_from(db_dst.len()).unwrap();
let dst_len = u8::try_from(item_dst.len()).unwrap();
[[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
}
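// For example, `serai_db_key(b"Db", b"Item", [0xff])` evaluates to
// `[2, b'D', b'b', 4, b'I', b't', b'e', b'm', 0xff]`:
// a length-prefixed database destination, a length-prefixed item destination, then the raw key.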
/// Creates a series of structs which provide namespacing for keys
///
/// # Description
///
/// Creates a unit struct and a default implementation for the `key`, `get`, and `set` functions.
/// The macro uses a syntax similar to defining a function. Parameters are concatenated to
/// produce a key; they must be `scale`-encodable. The return type is used to automatically
/// encode and decode the database value bytes using `borsh`.
///
/// # Arguments
///
/// * `db_name` - A database name
/// * `field_name` - An item name
/// * `args` - Comma separated list of key arguments
/// * `field_type` - The return type
///
/// # Example
///
/// ```ignore
/// create_db!(
/// TributariesDb {
/// AttemptsDb: (key_bytes: &[u8], attempt_id: u32) -> u64,
/// ExpiredDb: (genesis: [u8; 32]) -> Vec<u8>
/// }
/// )
/// ```
#[macro_export]
macro_rules! create_db {
($db_name: ident {
$(
$field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
#[derive(Clone, Debug)]
pub(crate) struct $field_name$(
<$($generic_name: $generic_type),+>
)?$(
(core::marker::PhantomData<($($generic_name),+)>)
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key(
stringify!($db_name).as_bytes(),
stringify!($field_name).as_bytes(),
($($arg),*).encode()
)
}
pub(crate) fn set(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*,
data: &$field_type
) {
let key = Self::key($($arg),*);
txn.put(&key, borsh::to_vec(data).unwrap());
}
pub(crate) fn get(
getter: &impl Get,
$($arg: $arg_type),*
) -> Option<$field_type> {
getter.get(Self::key($($arg),*)).map(|data| {
borsh::from_slice(data.as_ref()).unwrap()
})
}
// Returns a PhantomData of all generic types so if the generic was only used in the value,
// not the keys, this doesn't have unused generic types
#[allow(dead_code)]
pub(crate) fn del(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> core::marker::PhantomData<($($($generic_name),+)?)> {
txn.del(&Self::key($($arg),*));
core::marker::PhantomData
}
pub(crate) fn take(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let key = Self::key($($arg),*);
let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap());
if res.is_some() {
txn.del(key);
}
res
}
}
)*
};
}
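// A hypothetical usage sketch of the structs this macro generates (`ExampleDb`, `AttemptsDb`,
// `db`, and `key` are illustrative names, not items from this crate):
//
//   create_db!(ExampleDb { AttemptsDb: (key: [u8; 32]) -> u64 });
//
//   let mut txn = db.txn();
//   AttemptsDb::set(&mut txn, key, &5u64);
//   assert_eq!(AttemptsDb::get(&txn, key), Some(5));
//   assert_eq!(AttemptsDb::take(&mut txn, key), Some(5));
//   assert_eq!(AttemptsDb::get(&txn, key), None);
//   txn.commit();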
#[macro_export]
macro_rules! db_channel {
($db_name: ident {
$($field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
create_db! {
$db_name {
$field_name: $(<$($generic_name: $generic_type),+>)?(
$($arg: $arg_type,)*
index: u32
) -> $field_type
}
}
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn send(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
, value: &$field_type
) {
// Use index 0 to store the amount of messages
let messages_sent_key = Self::key($($arg,)* 0);
let messages_sent = txn.get(&messages_sent_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
txn.put(&messages_sent_key, (messages_sent + 1).to_le_bytes());
// + 2 as index 1 is used for the amount of messages read
// Using distinct counters enables send to be called without mutating anything recv may read
// at the same time
let index_to_use = messages_sent + 2;
Self::set(txn, $($arg,)* index_to_use, value);
}
pub(crate) fn peek(
getter: &impl Get
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = getter.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
Self::get(getter, $($arg,)* index_to_read)
}
pub(crate) fn try_recv(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = txn.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
let res = Self::get(txn, $($arg,)* index_to_read);
if res.is_some() {
Self::del(txn, $($arg,)* index_to_read);
txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes());
}
res
}
}
)*
};
}
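// A hypothetical usage sketch, assuming a prior
// `db_channel!(ExampleDb { PendingDb: (genesis: [u8; 32]) -> Vec<u8> })` (illustrative names):
//
//   let mut txn = db.txn();
//   PendingDb::send(&mut txn, genesis, &vec![1]);
//   assert_eq!(PendingDb::peek(&txn, genesis), Some(vec![1]));
//   assert_eq!(PendingDb::try_recv(&mut txn, genesis), Some(vec![1]));
//   assert_eq!(PendingDb::try_recv(&mut txn, genesis), None);
//   txn.commit();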


@@ -1,102 +1,100 @@
-use core::fmt::Debug;
-extern crate alloc;
-use alloc::sync::Arc;
-use std::{
-  sync::RwLock,
-  collections::{HashSet, HashMap},
-};
+mod create_db;
+pub use create_db::*;
+mod mem;
+pub use mem::*;
+#[cfg(feature = "rocksdb")]
+mod rocks;
+#[cfg(feature = "rocksdb")]
+pub use rocks::{RocksDB, new_rocksdb};
+#[cfg(feature = "parity-db")]
+mod parity_db;
+#[cfg(feature = "parity-db")]
+pub use parity_db::{ParityDb, new_parity_db};
-/// An object implementing get.
-pub trait Get: Send + Sync + Debug {
+/// An object implementing `get`.
+pub trait Get {
+  /// Get a value from the database.
   fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
 }
-/// An atomic database operation.
+/// An atomic database transaction.
+///
+/// A transaction is only required to atomically commit. It is not required that two `Get` calls
+/// made with the same transaction return the same result, if another transaction wrote to that
+/// key.
+///
+/// If two transactions are created, and both write (including deletions) to the same key, behavior
+/// is undefined. The transaction may block, deadlock, panic, overwrite one of the two values
+/// randomly, or any other action, at time of write or at time of commit.
 #[must_use]
-pub trait DbTxn: Send + Sync + Debug + Get {
+pub trait DbTxn: Sized + Send + Get {
+  /// Write a value to this key.
   fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
+  /// Delete the value from this key.
   fn del(&mut self, key: impl AsRef<[u8]>);
+  /// Commit this transaction.
   fn commit(self);
+  /// Close this transaction.
+  ///
+  /// This is equivalent to `Drop` on transactions which can be dropped. This is explicit and works
+  /// with transactions which can't be dropped.
+  fn close(self) {
+    drop(self);
+  }
 }
+// Credit for the idea goes to https://jack.wrenn.fyi/blog/undroppable
+pub struct Undroppable<T>(Option<T>);
+impl<T> Drop for Undroppable<T> {
+  fn drop(&mut self) {
+    // Use an assertion at compile time to prevent this code from compiling if generated
+    #[allow(clippy::assertions_on_constants)]
+    const {
+      assert!(false, "Undroppable DbTxn was dropped. Ensure all code paths call commit or close");
+    }
+  }
+}
+impl<T: DbTxn> Get for Undroppable<T> {
+  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
+    self.0.as_ref().unwrap().get(key)
+  }
+}
+impl<T: DbTxn> DbTxn for Undroppable<T> {
+  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
+    self.0.as_mut().unwrap().put(key, value);
+  }
+  fn del(&mut self, key: impl AsRef<[u8]>) {
+    self.0.as_mut().unwrap().del(key);
+  }
+  fn commit(mut self) {
+    self.0.take().unwrap().commit();
+    let _ = core::mem::ManuallyDrop::new(self);
+  }
+  fn close(mut self) {
+    drop(self.0.take().unwrap());
+    let _ = core::mem::ManuallyDrop::new(self);
+  }
+}
-/// A database supporting atomic operations.
-pub trait Db: 'static + Send + Sync + Clone + Debug + Get {
+/// A database supporting atomic transaction.
+pub trait Db: 'static + Send + Sync + Clone + Get {
+  /// The type representing a database transaction.
   type Transaction<'a>: DbTxn;
+  /// Calculate a key for a database entry.
+  ///
+  /// Keys are separated by the database, the item within the database, and the item's key itself.
   fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
     let db_len = u8::try_from(db_dst.len()).unwrap();
     let dst_len = u8::try_from(item_dst.len()).unwrap();
     [[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
   }
-  fn txn(&mut self) -> Self::Transaction<'_>;
+  /// Open a new transaction which may be dropped.
+  fn unsafe_txn(&mut self) -> Self::Transaction<'_>;
+  /// Open a new transaction which must be committed or closed.
+  fn txn(&mut self) -> Undroppable<Self::Transaction<'_>> {
+    Undroppable(Some(self.unsafe_txn()))
+  }
 }
-/// An atomic operation for the in-memory databae.
-#[must_use]
-#[derive(PartialEq, Eq, Debug)]
-pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
-impl<'a> Get for MemDbTxn<'a> {
-  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
-    if self.2.contains(key.as_ref()) {
-      return None;
-    }
-    self.1.get(key.as_ref()).cloned().or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
-  }
-}
-impl<'a> DbTxn for MemDbTxn<'a> {
-  fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
-    self.2.remove(key.as_ref());
-    self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());
-  }
-  fn del(&mut self, key: impl AsRef<[u8]>) {
-    self.1.remove(key.as_ref());
-    self.2.insert(key.as_ref().to_vec());
-  }
-  fn commit(mut self) {
-    let mut db = self.0 .0.write().unwrap();
-    for (key, value) in self.1.drain() {
-      db.insert(key, value);
-    }
-    for key in self.2 {
-      db.remove(&key);
-    }
-  }
-}
-/// An in-memory database.
-#[derive(Clone, Debug)]
-pub struct MemDb(Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>);
-impl PartialEq for MemDb {
-  fn eq(&self, other: &Self) -> bool {
-    *self.0.read().unwrap() == *other.0.read().unwrap()
-  }
-}
-impl Eq for MemDb {}
-impl Default for MemDb {
-  fn default() -> Self {
-    Self(Arc::new(RwLock::new(HashMap::new())))
-  }
-}
-impl MemDb {
-  /// Create a new in-memory database.
-  pub fn new() -> Self {
-    Self::default()
-  }
-}
-impl Get for MemDb {
-  fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
-    self.0.read().unwrap().get(key.as_ref()).cloned()
-  }
-}
-impl Db for MemDb {
-  type Transaction<'a> = MemDbTxn<'a>;
-  fn txn(&mut self) -> MemDbTxn<'_> {
-    MemDbTxn(self, HashMap::new(), HashSet::new())
-  }
-}
-// TODO: Also bind RocksDB
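
A minimal sketch of the intended flow with the new `Undroppable` wrapper (assuming some `db: impl Db` is in scope): `Db::txn` now returns `Undroppable<Self::Transaction<'_>>`, whose `Drop` impl contains a const assertion which fails if drop glue for it is ever generated, so every code path must end in `commit` or `close`.

let mut txn = db.txn();
txn.put(b"key", b"value");
txn.commit(); // or txn.close(); merely letting `txn` fall out of scope is rejected at compile time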

common/db/src/mem.rs Normal file

@@ -0,0 +1,80 @@
use core::fmt::Debug;
use std::{
sync::{Arc, RwLock},
collections::{HashSet, HashMap},
};
use crate::*;
/// An atomic operation for the in-memory database.
#[must_use]
#[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
impl Get for MemDbTxn<'_> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
if self.2.contains(key.as_ref()) {
return None;
}
self
.1
.get(key.as_ref())
.cloned()
.or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
}
}
impl DbTxn for MemDbTxn<'_> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.2.remove(key.as_ref());
self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.1.remove(key.as_ref());
self.2.insert(key.as_ref().to_vec());
}
fn commit(mut self) {
let mut db = self.0 .0.write().unwrap();
for (key, value) in self.1.drain() {
db.insert(key, value);
}
for key in self.2 {
db.remove(&key);
}
}
}
/// An in-memory database.
#[derive(Clone, Debug)]
pub struct MemDb(Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>);
impl PartialEq for MemDb {
fn eq(&self, other: &MemDb) -> bool {
*self.0.read().unwrap() == *other.0.read().unwrap()
}
}
impl Eq for MemDb {}
impl Default for MemDb {
fn default() -> MemDb {
MemDb(Arc::new(RwLock::new(HashMap::new())))
}
}
impl MemDb {
/// Create a new in-memory database.
pub fn new() -> MemDb {
MemDb::default()
}
}
impl Get for MemDb {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
self.0.read().unwrap().get(key.as_ref()).cloned()
}
}
impl Db for MemDb {
type Transaction<'a> = MemDbTxn<'a>;
fn unsafe_txn(&mut self) -> MemDbTxn<'_> {
MemDbTxn(self, HashMap::new(), HashSet::new())
}
}

common/db/src/parity_db.rs Normal file

@@ -0,0 +1,47 @@
use std::sync::Arc;
pub use ::parity_db::{Options, Db as ParityDb};
use crate::*;
#[must_use]
pub struct Transaction<'a>(&'a Arc<ParityDb>, Vec<(u8, Vec<u8>, Option<Vec<u8>>)>);
impl Get for Transaction<'_> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
let mut res = self.0.get(&key);
for change in &self.1 {
if change.1 == key.as_ref() {
res.clone_from(&change.2);
}
}
res
}
}
impl DbTxn for Transaction<'_> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.1.push((0, key.as_ref().to_vec(), Some(value.as_ref().to_vec())))
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.1.push((0, key.as_ref().to_vec(), None))
}
fn commit(self) {
self.0.commit(self.1).unwrap()
}
}
impl Get for Arc<ParityDb> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
ParityDb::get(self, 0, key.as_ref()).unwrap()
}
}
impl Db for Arc<ParityDb> {
type Transaction<'a> = Transaction<'a>;
fn unsafe_txn(&mut self) -> Self::Transaction<'_> {
Transaction(self, vec![])
}
}
pub fn new_parity_db(path: &str) -> Arc<ParityDb> {
Arc::new(ParityDb::open_or_create(&Options::with_columns(std::path::Path::new(path), 1)).unwrap())
}

common/db/src/rocks.rs Normal file

@@ -0,0 +1,66 @@
use std::sync::Arc;
use rocksdb::{
DBCompressionType, ThreadMode, SingleThreaded, LogLevel, WriteOptions,
Transaction as RocksTransaction, Options, OptimisticTransactionDB,
};
use crate::*;
#[must_use]
pub struct Transaction<'a, T: ThreadMode>(
RocksTransaction<'a, OptimisticTransactionDB<T>>,
&'a OptimisticTransactionDB<T>,
);
impl<T: ThreadMode> Get for Transaction<'_, T> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
self.0.get(key).expect("couldn't read from RocksDB via transaction")
}
}
impl<T: ThreadMode> DbTxn for Transaction<'_, T> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.0.put(key, value).expect("couldn't write to RocksDB via transaction")
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.0.delete(key).expect("couldn't delete from RocksDB via transaction")
}
fn commit(self) {
self.0.commit().expect("couldn't commit to RocksDB via transaction");
self.1.flush_wal(true).expect("couldn't flush RocksDB WAL");
self.1.flush().expect("couldn't flush RocksDB");
}
}
impl<T: ThreadMode> Get for Arc<OptimisticTransactionDB<T>> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
OptimisticTransactionDB::get(self, key).expect("couldn't read from RocksDB")
}
}
impl<T: Send + ThreadMode + 'static> Db for Arc<OptimisticTransactionDB<T>> {
type Transaction<'a> = Transaction<'a, T>;
fn unsafe_txn(&mut self) -> Self::Transaction<'_> {
let mut opts = WriteOptions::default();
opts.set_sync(true);
Transaction(self.transaction_opt(&opts, &Default::default()), &**self)
}
}
pub type RocksDB = Arc<OptimisticTransactionDB<SingleThreaded>>;
pub fn new_rocksdb(path: &str) -> RocksDB {
let mut options = Options::default();
options.create_if_missing(true);
options.set_compression_type(DBCompressionType::Zstd);
options.set_wal_compression_type(DBCompressionType::Zstd);
// 10 MB
options.set_max_total_wal_size(10 * 1024 * 1024);
options.set_wal_size_limit_mb(10);
options.set_log_level(LogLevel::Warn);
// 1 MB
options.set_max_log_file_size(1024 * 1024);
options.set_recycle_log_file_num(1);
Arc::new(OptimisticTransactionDB::open(&options, path).unwrap())
}
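// A hypothetical usage sketch (the path is illustrative): the returned database is a
// drop-in `Db`, so transactions go through the same `txn`/`commit` flow as any backend.
//
//   let mut db = new_rocksdb("/path/to/db");
//   let mut txn = db.txn();
//   txn.put(b"key", b"value");
//   txn.commit();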

common/env/Cargo.toml vendored Normal file

@@ -0,0 +1,17 @@
[package]
name = "serai-env"
version = "0.1.0"
description = "A common library for Serai apps to access environment variables"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true

common/env/src/lib.rs vendored Normal file

@@ -0,0 +1,9 @@
#![cfg_attr(docsrs, feature(doc_cfg))]
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
/// Obtain a variable from the Serai environment/secret store.
pub fn var(variable: &str) -> Option<String> {
// TODO: Move this to a proper secret store
// TODO: Unset this variable
std::env::var(variable).ok()
}

common/patchable-async-sleep/Cargo.toml Normal file

@@ -0,0 +1,20 @@
[package]
name = "patchable-async-sleep"
version = "0.1.0"
description = "An async sleep function, patchable to the preferred runtime"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-async-sleep"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
tokio = { version = "1", default-features = false, features = ["time"] }

Some files were not shown because too many files have changed in this diff