495 Commits

Luke Parker
ce3b90541e Make transactions undroppable
coordinator/cosign/src/delay.rs literally demonstrates how we'd need to rewrite
our handling of transactions with this change. It can be cleaned up a bit but
already identifies ergonomic issues. It also doesn't model passing an &mut txn
to an async function, which would also require using the droppable wrapper
struct.

To locally see this build, run

RUSTFLAGS="-Zpanic_abort_tests -C panic=abort" cargo +nightly build -p serai-cosign --all-targets

To locally see this fail to build, run

cargo build -p serai-cosign --all-targets

While it doesn't say which line causes it to fail to build, the only
distinction is panic=unwind.

For more context, please see #578.
2025-01-15 03:56:59 -05:00
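For illustration, a minimal sketch of one pattern for hard-to-drop values: a wrapper whose Drop panics unless the value was explicitly consumed (under panic=abort, such a panic becomes an abort). This is a related pattern, not necessarily the mechanism this commit uses, which hinges on the panic strategy at build time; names are illustrative.

    // Hypothetical sketch; `Txn` and `commit` are illustrative names.
    struct Txn;

    struct UndroppableTxn(Option<Txn>);

    impl UndroppableTxn {
      // Consuming the wrapper is the only way to avoid the panic in Drop.
      fn commit(mut self) {
        let _txn = self.0.take().expect("already consumed");
        // ... actually commit the transaction here ...
      }
    }

    impl Drop for UndroppableTxn {
      fn drop(&mut self) {
        if self.0.is_some() {
          panic!("transaction dropped without being committed");
        }
      }
    }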
Luke Parker
cb410cc4e0 Correct how we handle rounding errors within the penalty fn
We explicitly no longer slash stakes but we still set the maximum slash to the
allocated stake + the rewards. Now, the reward slash is bound to the rewards
and the stake slash is bound to the stake. This prevents an improperly rounded
reward slash from causing a stake slash.
2025-01-15 02:46:31 -05:00
Luke Parker
6c145a5ec3 Disable offline, disruptive slashes
Reasoning commented in codebase
2025-01-14 11:44:13 -05:00
Luke Parker
a7fef2ba7a Redesign Slash/SlashReport types with a function to calculate the penalty 2025-01-14 07:51:39 -05:00
Luke Parker
291ebf5e24 Have serai-task warnings print with the name of the task 2025-01-14 02:52:26 -05:00
Luke Parker
5e0e91c85d Add tasks to publish data onto Serai 2025-01-14 01:58:26 -05:00
Luke Parker
b5a6b0693e Add a proper error type to ContinuallyRan
This isn't strictly necessary. Because we just log the error and never match
on it, we don't need any structure beyond String (or now Debug, which still
gives us a way to print the error). This is for the ergonomics of not having
to constantly write `.map_err(|e| format!("{e:?}"))`.
2025-01-12 18:29:08 -05:00
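A minimal sketch of the idea (the actual serai-task trait is async and richer than this; names here are illustrative):

    use core::fmt::Debug;

    trait ContinuallyRan {
      // Each task declares its own error type, bounded only by Debug so the
      // runner can still print it.
      type Error: Debug;

      // Returns whether any work was done this iteration.
      fn run_iteration(&mut self) -> Result<bool, Self::Error>;
    }

    fn log_on_error<T: ContinuallyRan>(task: &mut T) {
      if let Err(e) = task.run_iteration() {
        // Debug gives us a way to print the error without
        // `.map_err(|e| format!("{e:?}"))` at every call site.
        println!("task errored: {e:?}");
      }
    }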
Luke Parker
3cc2abfedc Add a task to publish slash reports 2025-01-12 17:47:48 -05:00
Luke Parker
0ce9aad9b2 Add flow to add transactions onto Tributaries 2025-01-12 07:32:45 -05:00
Luke Parker
e35aa04afb Start handling messages from the processor
Does route ProcessorMessage::CosignedBlock. Rest are stubbed with TODO.
2025-01-12 06:07:55 -05:00
Luke Parker
e7de5125a2 Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format
Has yet to update the processor accordingly.
2025-01-12 05:52:33 -05:00
Luke Parker
158140c3a7 Add a proper error for intake_cosign 2025-01-12 05:49:17 -05:00
Luke Parker
df9a9adaa8 Remove direct dependencies of void, async-trait 2025-01-12 03:48:43 -05:00
Luke Parker
d854807edd Make message_queue::client::Client::send fallible
Allows tasks to report the errors themselves and handle retry in our
standardized way.
2025-01-11 21:57:58 -05:00
Luke Parker
f501d46d44 Correct disabling of Nagle's algorithm 2025-01-11 06:54:43 -05:00
Luke Parker
74106b025f Publish SlashReport onto the Tributary 2025-01-11 06:51:55 -05:00
Luke Parker
e731b546ab Update documentation 2025-01-11 05:13:43 -05:00
Luke Parker
77d60660d2 Move spawn_cosign from main.rs into tributary.rs
Also refines the tasks within tributary.rs a good bit.
2025-01-11 05:12:56 -05:00
Luke Parker
3c664ff05f Re-arrange coordinator/
coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary, the Transaction
type and scan task. This has been moved to coordinator/tributary.

The main reason for this was due to coordinator/main.rs becoming untidy. There
is now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs is to compose them. Sometimes, these compositions are a bit
silly (reading from a channel just to forward the message to a distinct
channel). That's more than fine as the code is still readable and the value
from the cleanliness of the APIs composed far exceeds the nits from having
these odd compositions.

This breaks down a bit as we now define a global database, and have some APIs
interact with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of the
Coordinator, wasn't.

Now, coordinator/src is solely the API compositions, and all self-contained
APIs are their own crates.
2025-01-11 04:14:21 -05:00
Luke Parker
c05b0c9eba Handle Canonical, NewSet from serai-coordinator-substrate 2025-01-11 03:07:15 -05:00
Luke Parker
6d5049cab2 Move the task providing transactions onto the Tributary to the Tributary module
Slims down the main file a bit
2025-01-11 02:13:23 -05:00
Luke Parker
1419ba570a Route from tributary scanner to message-queue 2025-01-11 01:55:36 -05:00
Luke Parker
542bf2170a Provide Cosign/CosignIntent for Tributaries 2025-01-11 01:31:28 -05:00
Luke Parker
378d6b90cf Delete old Tributaries on reboot 2025-01-10 20:10:05 -05:00
Luke Parker
cbe83956aa Flesh out Coordinator main
Lot of TODOs as the APIs are all being routed together.
2025-01-10 02:24:24 -05:00
Luke Parker
091d485fd8 Have the Tributary scanner DB be distinct from the cosign DB
Allows deleting the entire Tributary scanner DB upon retirement.
2025-01-10 02:22:58 -05:00
Luke Parker
2a3eaf4d7e Wrap the entire Libp2p object in an Arc
Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed). Also wraps uses of Serai in an Arc as we
shouldn't actually need/want multiple caller connection pools.
2025-01-10 01:26:07 -05:00
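A minimal sketch of the resulting shape (field contents are illustrative):

    use std::sync::Arc;

    struct Libp2pInner {
      // Swarm handles, channels to the swarm task, the Serai connection, etc.
    }

    // Clone is now a single reference-count bump on the outer Arc, rather
    // than cloning several inner Arcs.
    #[derive(Clone)]
    struct Libp2p(Arc<Libp2pInner>);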
Luke Parker
23122712cb Document validator jailing upon participation failures and slash report determination
These are TODOs. I just wanted to ensure this was written down and each seemed
too small for GH issues.
2025-01-09 19:50:39 -05:00
Luke Parker
47eb793ce9 Slash upon Tendermint evidence
Decoding slash evidence requires specifying the instantiated generic
`TendermintNetwork`. While irrelevant, that generic includes a type satisfying
`tributary::P2p`. It was only possible to route now that we've redone the P2P
API.
2025-01-09 06:58:00 -05:00
Luke Parker
9b0b5fd1e2 Have serai-cosign index finalized blocks' numbers 2025-01-09 06:57:26 -05:00
Luke Parker
893a24a1cc Better document bounds in serai-coordinator-p2p 2025-01-09 06:57:12 -05:00
Luke Parker
b101e2211a Complete serai-coordinator-p2p 2025-01-09 06:23:14 -05:00
Luke Parker
201a444e89 Remove tokio dependency from serai-coordinator-p2p
Re-implements tokio::sync::oneshot with a thin wrapper around async-channel.

Also replaces futures-util with futures-lite.
2025-01-09 02:16:05 -05:00
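A sketch of how a oneshot can be built atop async-channel, assuming roughly this approach (the crate's actual wrapper may differ):

    use async_channel::{bounded, Receiver, Sender};

    pub struct OneshotSender<T>(Sender<T>);
    pub struct OneshotReceiver<T>(Receiver<T>);

    // A bounded(1) channel, with both halves consumed on use, behaves as a
    // oneshot.
    pub fn oneshot<T>() -> (OneshotSender<T>, OneshotReceiver<T>) {
      let (send, recv) = bounded(1);
      (OneshotSender(send), OneshotReceiver(recv))
    }

    impl<T> OneshotSender<T> {
      pub fn send(self, value: T) {
        // Capacity is 1 and the sender is consumed, so this only fails if
        // the receiver was dropped, which we treat as a no-op.
        let _ = self.0.try_send(value);
      }
    }

    impl<T> OneshotReceiver<T> {
      pub async fn recv(self) -> Option<T> {
        self.0.recv().await.ok()
      }
    }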
Luke Parker
9833911e06 Promote Request::Heartbeat from an enum variant to a struct 2025-01-09 01:41:42 -05:00
Luke Parker
465e8498c4 Make the coordinator's P2P modules their own crates 2025-01-09 01:26:25 -05:00
Luke Parker
adf20773ac Add libp2p module documentation 2025-01-09 00:40:07 -05:00
Luke Parker
295c1bd044 Document improper handling of session rotation in P2P allow list 2025-01-09 00:16:45 -05:00
Luke Parker
dda6e3e899 Limit each peer to one connection
Prevents dialing the same peer multiple times (successfully).
2025-01-09 00:06:51 -05:00
Luke Parker
75a00f2a1a Add allow_block_list to libp2p
The check in validators prevented connections from non-validators.
Non-validators could still participate in the network if they laundered their
connection through a malicious validator. allow_block_list ensures that peers,
not connections, are explicitly limited to validators.
2025-01-08 23:54:27 -05:00
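A hedged sketch of wiring allow_block_list into a composed behaviour (the behaviour shown is illustrative, not the coordinator's actual one):

    use libp2p::{allow_block_list, ping, swarm::NetworkBehaviour, PeerId};

    #[derive(NetworkBehaviour)]
    struct Behaviour {
      // Only peers explicitly allowed here may maintain connections.
      allow_list: allow_block_list::Behaviour<allow_block_list::AllowedPeers>,
      ping: ping::Behaviour,
    }

    fn allow_validator(behaviour: &mut Behaviour, validator: PeerId) {
      behaviour.allow_list.allow_peer(validator);
    }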
Luke Parker
6cde2bb6ef Correct and document topic subscription 2025-01-08 23:16:04 -05:00
Luke Parker
20326bba73 Replace KeepAlive with ping
This is more standard and allows measuring latency.
2025-01-08 23:01:36 -05:00
Luke Parker
ce83b41712 Finish mapping Libp2p to the P2p trait API 2025-01-08 19:39:09 -05:00
Luke Parker
b2bd5d3a44 Remove Debug bound on tributary::P2p 2025-01-08 17:40:32 -05:00
Luke Parker
de2d6568a4 Actually implement the Peer abstraction for Libp2p 2025-01-08 17:40:08 -05:00
Luke Parker
fd9b464b35 Add a trait for the P2p network used in the coordinator
Moves all of the Libp2p code to a dedicated directory. Makes the Heartbeat task
abstract over any P2p network.
2025-01-08 17:01:37 -05:00
Luke Parker
376a66b000 Remove async-trait from tendermint-machine, tributary-chain 2025-01-08 16:41:11 -05:00
Luke Parker
2121a9b131 Spawn the task to select validators to dial 2025-01-07 18:17:36 -05:00
Luke Parker
419223c54e Build the swarm
Moves UpdateSharedValidatorsTask to validators.rs. While it was previously planned to
re-use a validators object across connecting and peer state management, the
current plan is to use an independent validators object for each to minimize
any contention. They should be built infrequently enough, and cheap enough to
update in the majority case (due to quickly checking if an update is needed),
that this is fine.
2025-01-07 18:09:25 -05:00
Luke Parker
a731c0005d Finish routing our own channel abstraction around the Swarm event stream 2025-01-07 16:51:56 -05:00
Luke Parker
f27e4e3202 Move the WIP SwarmTask to its own file 2025-01-07 16:34:19 -05:00
Luke Parker
f55165e016 Add channels to send requests/recv responses 2025-01-07 15:51:15 -05:00
Luke Parker
d9e9887d34 Run the dial task whenever we have a peer disconnect 2025-01-07 15:36:42 -05:00
Luke Parker
82e753db30 Document risk of eclipse in the dial task 2025-01-07 15:35:34 -05:00
Luke Parker
052388285b Remove TaskHandle::close
TaskHandle::close meant run_now may panic if the task was closed. Now, tasks
are only closed when all handles are dropped, causing all handles to point to
running tasks (ensuring run_now won't panic).
2025-01-07 15:26:41 -05:00
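A minimal sketch of the described lifecycle, using a plain std channel in place of the real async implementation:

    use std::{sync::mpsc, thread};

    #[derive(Clone)]
    struct TaskHandle(mpsc::Sender<()>);

    impl TaskHandle {
      // Infallible: the task only closes once every handle (Sender) is
      // dropped, so a live handle always points to a running task.
      fn run_now(&self) {
        let _ = self.0.send(());
      }
    }

    fn spawn_task(mut iteration: impl FnMut() + Send + 'static) -> TaskHandle {
      let (send, recv) = mpsc::channel();
      thread::spawn(move || {
        // recv() errors once all handles are dropped, closing the task.
        while recv.recv().is_ok() {
          iteration();
        }
      });
      TaskHandle(send)
    }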
Luke Parker
47a4e534ef Update serai-processor-signers to VariantSignid::Batch([u8; 32]) 2025-01-07 15:26:23 -05:00
Luke Parker
257f691277 Start filling out message handling in SwarmTask 2025-01-05 01:23:28 -05:00
Luke Parker
c6d0fb477c Inline noise into OnlyValidators
libp2p does support (noise, OnlyValidators), but it'll interpret the tuple as
either one, not as a chain. Inlining noise makes this act as the desired chain.
2025-01-05 00:55:25 -05:00
Luke Parker
96518500b1 Don't hold the shared Validators write lock while making requests to Serai 2025-01-05 00:29:11 -05:00
Luke Parker
2b8f481364 Parallelize requests within Validators::update 2025-01-05 00:17:05 -05:00
Luke Parker
479ca0410a Add commentary on the use of FuturesOrdered 2025-01-04 23:28:54 -05:00
Luke Parker
9a5a661d04 Start on the task to manage the swarm 2025-01-04 23:28:29 -05:00
Luke Parker
3daeea09e6 Only let active Serai validators connect over P2P 2025-01-04 22:21:23 -05:00
Luke Parker
a64e2004ab Dial new peers when we don't have the target amount 2025-01-04 18:04:24 -05:00
Luke Parker
f9f6d40695 Use Serai validator keys as PeerIds 2025-01-04 18:03:37 -05:00
Luke Parker
4836c1676b Don't consider the Serai set in the cosigning protocol
The Serai set SHOULD be banned from setting keys so this SHOULD be unreachable.
It's now explicitly unreachable.
2025-01-04 13:52:17 -05:00
Luke Parker
985261574c Add gossip behavior back to the coordinator 2025-01-03 14:00:20 -05:00
Luke Parker
3f3b0255f8 Tweak heartbeat task to run less often if there's no progress to be made 2025-01-03 13:59:14 -05:00
Luke Parker
5fc8500f8d Add task to heartbeat a tributary to the P2P code 2025-01-03 13:04:27 -05:00
Luke Parker
49c221cca2 Restore request-response code to the coordinator 2025-01-03 13:02:50 -05:00
Luke Parker
906e2fb669 Start cosigning on Cosign or Cosigned, not just on Cosigned 2025-01-03 10:30:39 -05:00
Luke Parker
ce676efb1f cargo update 2025-01-03 07:01:06 -05:00
Luke Parker
0a611cb155 Further flesh out tributary scanning
Renames `label` to `round` since `Label` was renamed to `SigningProtocolRound`.

Adds some more context-less validation to transactions which used to be done
within the custom decode function which was simplified via the usage of borsh.

Documents in processor-messages where the Coordinator sends each of its
messages.
2025-01-03 06:57:28 -05:00
Luke Parker
bcd3f14f4f Start work on cleaning up the coordinator's tributary handling 2025-01-02 09:11:04 -05:00
Luke Parker
6272c40561 Restore block_hash to Batch
It's not only helpful (to easily check where Serai's view of the external
network is) but it's necessary in case of a non-trivial chain fork to determine
which blockchain Serai considers canonical.
2024-12-31 18:10:47 -05:00
Luke Parker
2240a50a0c Rebroadcast cosigns for the currently evaluated session, not the latest intended
If Substrate has a block 500 with a key gen, and a block 600 with a key gen,
and the session starting on 500 never cosigns everything, everyone up-to-date
will want the cosigns for the session starting on block 500. Everyone
up-to-date will also be rebroadcasting the non-existent cosigns for the session
which has yet to start. This wouldn't cause a stall as eventually, each
individual set would cosign the latest notable block, and then that would be
explicitly synced, but it's still not the intended behavior.

We also won't even intake the cosigns for the latest intended session if it
exceeds the session we're currently evaluating. This does mean those behind on
the cosigning protocol wouldn't have rebroadcasted their historical cosigns,
and now will, but that's valuable as we don't actually know if we're behind or
up-to-date (per above posited issue).
2024-12-31 17:17:12 -05:00
Luke Parker
7e2b31e5da Clean the transaction definitions in the coordinator
Moves to borsh for serialization. No longer includes nonces anywhere in the TX.
2024-12-31 12:14:32 -05:00
Luke Parker
8c9441a1a5 Redo coordinator's Substrate scanner 2024-12-31 10:37:19 -05:00
Luke Parker
5a42f66dc2 alloy 0.9 2024-12-30 11:09:09 -05:00
Luke Parker
b584a2beab Remove old DB entry from the scanner
We read from it but never wrote to it.

It was used to check we didn't flag a block as notable after reporting it, but
it was called by the scan task as it scanned a block. We only batch/report
blocks after the scan task has scanned them, so it was very redundant.
2024-12-30 11:07:05 -05:00
Luke Parker
26ccff25a1 Split reporting Batches to the signer from the Batch test 2024-12-30 11:03:52 -05:00
Luke Parker
f0094b3c7c Rename Report task to Batch task 2024-12-30 10:49:35 -05:00
Luke Parker
458f4fe170 Move where we check if we should delay reporting of Batches 2024-12-30 10:18:38 -05:00
Luke Parker
1de8136739 Remove Session from VariantSignId::SlashReport
It's only there to make the VariantSignId unique across Sessions. By localizing
the VariantSignId to a Session, we avoid this, and can better ensure we don't
queue work for historic sessions.
2024-12-30 06:16:03 -05:00
Luke Parker
445c49f030 Have the scanner's report task ensure handovers only occur if Batches are valid
This is incomplete at this time. The logic is fine, but needs to be moved to a
distinct location to handle singular blocks which produce multiple Batches.
2024-12-30 06:11:47 -05:00
Luke Parker
5b74fc8ac1 Merge ExternalKeyForSessionToSignBatch into InfoForBatch 2024-12-30 05:34:13 -05:00
Luke Parker
e67e301fc2 Have the processor verify the published Batches match expectations 2024-12-30 05:21:26 -05:00
Luke Parker
1d50792eed Document serai-db with bounds and intent 2024-12-26 02:35:32 -05:00
Luke Parker
9c92709e62 Delay cosign acknowledgments 2024-12-26 01:04:20 -05:00
Luke Parker
3d15710a43 Only check the cosign is after its start block if faulty
We don't have consensus on the session's last block, so we shouldn't check if
the cosign is before the session ends. What matters is that the network, within its
set, claims it's still active at that block (on its view of the blockchain).
2024-12-26 00:26:48 -05:00
Luke Parker
df06da5552 Only check if the cosign is stale if it isn't faulty
If it is faulty, we want to archive it regardless.
2024-12-26 00:24:48 -05:00
Luke Parker
cef5bc95b0 Revert prior commit
An archive of all GlobalSessions is necessary to check for faults. The storage
cost is also minimal. While it should be avoided if it can be, it can't be
here.
2024-12-26 00:15:49 -05:00
Luke Parker
f336ab1ece Remove GlobalSessions DB entry
If we read the currently-being-evaluated session from the evaluator, we can
avoid paying the storage costs on all sessions ad-infinitum.
2024-12-25 23:57:51 -05:00
Luke Parker
2aebfb21af Remove serai from the cosign evaluator 2024-12-25 23:47:21 -05:00
Luke Parker
56af6c44eb Remove usage of serai from intake_cosign 2024-12-25 21:19:04 -05:00
Luke Parker
4b34be05bf rocksdb 0.23 2024-12-25 19:48:48 -05:00
Luke Parker
5b337c3ce8 Prevent a malicious validator set from overwriting a notable cosign
Also prevents panics from an invalid Serai node (removing the assumption of an
honest Serai node).
2024-12-25 02:11:05 -05:00
Luke Parker
e119fb4c16 Replace Cosigns by extending NetworksLatestCosignedBlock
Cosigns was an archive of every single cosign ever received. By scoping
NetworksLatestCosignedBlock to be by the global session, we have the latest
cosign for each network in a session (valid to replace all prior cosigns by
that network within that session, even for the purposes of fault) and
automatically have the notable cosigns indexed (as they are the latest ones
within their session). This not only saves space but also allows optimizing
evaluation a bit.
2024-12-25 01:45:37 -05:00
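A sketch of the resulting storage shape (types are illustrative, not the actual DB schema):

    use std::collections::{hash_map::Entry, HashMap};

    type GlobalSession = [u8; 32];
    type NetworkId = u16;

    struct Cosign {
      block_number: u64,
      signature: Vec<u8>,
    }

    #[derive(Default)]
    struct LatestCosigns {
      // Only the latest cosign per (global session, network) is kept. The
      // last cosign within a session is implicitly the notable one.
      latest: HashMap<(GlobalSession, NetworkId), Cosign>,
    }

    impl LatestCosigns {
      fn intake(&mut self, session: GlobalSession, network: NetworkId, cosign: Cosign) {
        match self.latest.entry((session, network)) {
          Entry::Occupied(mut existing) => {
            // A newer cosign by this network replaces the prior one, even
            // for the purposes of fault evaluation.
            if cosign.block_number > existing.get().block_number {
              existing.insert(cosign);
            }
          }
          Entry::Vacant(slot) => {
            slot.insert(cosign);
          }
        }
      }
    }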
Luke Parker
ef972b2658 Add cosign signature verification 2024-12-25 00:06:46 -05:00
Luke Parker
4de1a5804d Dedicated library for intending and evaluating cosigns
Not only cleans the existing cosign code but enables non-Serai-coordinators to
evaluate cosigns if they gain access to a feed of them (such as over an RPC).
This would let centralized services track not only the finalized chain but
also the cosigned chain, without directly running a coordinator.

Still being wrapped up.
2024-12-22 06:41:55 -05:00
Luke Parker
147a6e43d0 Split task from serai-processor-primitives into serai-task 2024-12-19 10:08:13 -05:00
Luke Parker
066aa9eda4 cargo update
Resolves RUSTSEC-2024-0421
2024-12-12 00:45:19 -05:00
Luke Parker
9593a428e3 alloy 0.8 2024-12-11 01:02:58 -05:00
Luke Parker
5b3c5ec02b Basic Ethereum escapeHatch test 2024-12-09 02:00:17 -05:00
Luke Parker
9ccfa8a9f5 Fix deny 2024-12-08 22:01:43 -05:00
Luke Parker
18897978d0 thiserror 2.0, cargo update 2024-12-08 21:55:37 -05:00
Luke Parker
3192370484 Add Serai key confirmation to prevent rotating to an unusable key
Also updates alloy to the latest version
2024-12-08 20:42:37 -05:00
Luke Parker
8013c56195 Add/correct msrv labels 2024-12-08 18:27:15 -05:00
Luke Parker
834c16930b Add a bitmask of OutInstruction events to Executed
Allows explorers to provide clarity on what occurred.
2024-11-02 21:00:01 -04:00
Luke Parker
2920987173 Add a re-entrancy guard to Router.execute 2024-11-02 20:12:48 -04:00
Luke Parker
26230377b0 Define IRouterWithoutCollisions which Router inherits from
This ensures Router implements most of IRouterWithoutCollisions. It solely
leaves us to confirm Router implements the extensions defined in IRouter.
2024-11-02 19:10:39 -04:00
Luke Parker
2f5c0c68d0 Add selector collisions to Router to make it IRouter compatible 2024-11-02 18:13:02 -04:00
Luke Parker
8de42cc2d4 Add IRouter 2024-11-02 13:19:07 -04:00
Luke Parker
cf4123b0f8 Update how signatures are handled by the Router 2024-11-02 10:47:09 -04:00
Luke Parker
6a520a7412 Work on testing the Router 2024-10-31 02:23:59 -04:00
Luke Parker
b2ec58a445 Update serai-ethereum-processor to compile 2024-10-30 21:48:40 -04:00
Luke Parker
8e800885fb Simplify deterministic signing process in serai-processor-ethereum-primitives
This should be easier to specify/do an alternative implementation of.
2024-10-30 21:36:31 -04:00
Luke Parker
2a427382f1 Natspec, slither Deployer, Router 2024-10-30 21:35:43 -04:00
Luke Parker
ce1689b325 Expand tests for ethereum-schnorr-contract 2024-10-28 18:08:31 -04:00
Luke Parker
0b61a75afc Add lint against string slicing
String slicing is tricky, as it panics if the slice doesn't land on a UTF-8
code point boundary.
2024-10-02 21:58:48 -04:00
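A quick demonstration of the panic in question:

    fn main() {
      let s = "héllo";
      // Fine: index 1 is a character boundary ('h' is one byte).
      assert_eq!(&s[0 .. 1], "h");
      // Would panic: 'é' spans bytes 1 .. 3, so index 2 isn't a boundary.
      // let _ = &s[1 .. 2];
    }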
Luke Parker
2aee21e507 Fix decomposition -> divisor points vartime due to branch prediction/cache rules 2024-09-29 04:19:16 -04:00
Luke Parker
b3e003bd5d cargo +nightly fmt 2024-09-25 10:22:49 -04:00
Luke Parker
251a6e96e8 Constant-time divisors (#617)
* WIP constant-time implementation of the ec-divisors library

* Fix misc logic errors in poly.rs

* Remove accidentally committed test statements

* Fix ConstantTimeEq for CoefficientIndex

* Correct the iterations formula

x**3 / (0 y + x**1) would previously be considered indivisible with iterations = 0.
It is divisible, however. The amount of iterations should be the amount of
coefficients within the numerator, *excluding the coefficient for y**0 x**0*.

* Poly PartialEq, conditional_select_poly which checks poly structure equivalence

If the first passed argument is smaller than the latter, it's padded to the
necessary length.

Also adds code to trim the remainder as the remainder is the value modulo, so
it's very important it remains concise and workable.

* Fix the line function

It selected the case if both were identity before selecting the case if either
were identity, the latter overwriting the former.

* Final fixes re: ct_get

1) Our quotient structure does need to be of size equal to the numerator
   entirely to prevent out-of-bounds reads on it
2) We need to get from yx_coefficients if of length >=, so if the length is 1
   we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no
   inner Vecs, we need to fall back with the guard y_pow != 0.

* Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic

Our Poly algorithm is incredibly leaky. While it presumably should be improved,
we can take advantage of our known structure while constructing divisors (and
the small modulus) to simply trim out the zero coefficients leaked. This
maintains Polys in a manageable size.

* Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors

Anyone creating a divisor for the scalar mul gadget should use constant-time
code, so this code should at least be in the EC gadgets crate. It's of
non-trivial complexity to deal with otherwise.

* Remove unsafe, cache timing attacks from ec-divisors
2024-09-24 17:27:05 -04:00
Luke Parker
2c8af04781 machete, drain > mem::swap for clarity reasons 2024-09-19 23:36:32 -07:00
Luke Parker
a0ed043372 Move old processor/src directory to processor/TODO 2024-09-19 23:36:32 -07:00
Luke Parker
2984d2f8cf Misc comments 2024-09-19 23:36:32 -07:00
Luke Parker
554c5778e4 Don't track deployment block in the Router
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
2024-09-19 23:36:32 -07:00
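A sketch of the JoinSet usage mentioned (the fetch function and window are illustrative):

    use tokio::task::JoinSet;

    // Stand-in for the actual RPC call fetching a block.
    async fn fetch_block(number: u64) -> u64 {
      number
    }

    async fn sync_window(start: u64) {
      let mut set = JoinSet::new();
      // Fetch the 32 blocks of the window in parallel.
      for number in start .. (start + 32) {
        set.spawn(fetch_block(number));
      }
      while let Some(res) = set.join_next().await {
        let _block = res.expect("block fetch task panicked");
        // ... handle the fetched block ...
      }
    }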
Luke Parker
7e4c59a0a3 Have the Router track its deployment block
Prevents a consensus split where some nodes would drop transfers if their node
didn't think the Router was deployed, and some would handle them.
2024-09-19 23:36:32 -07:00
Luke Parker
294462641e Don't have the ERC20 collapse the top-level transfer ID to the transaction ID
Uses the ID of the transfer event associated with the top-level transfer.
2024-09-19 23:36:32 -07:00
Luke Parker
ae76749513 Transfer ETH with CREATE, not prior to CREATE
Saves a few thousand gas.
2024-09-19 23:36:32 -07:00
Luke Parker
1e1b821d34 Report a Change Output with every Eventuality to ensure we don't fall out of synchrony 2024-09-19 23:36:32 -07:00
Luke Parker
702b4c860c Add dummy fee values to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bc1bbf9951 Set a fixed fee transferred to the caller for publication
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
2024-09-19 23:36:32 -07:00
Luke Parker
ec9211fd84 Remove accidentally included bitcoin feature from processor-bin 2024-09-19 23:36:32 -07:00
Luke Parker
4292660eda Have the Ethereum scheduler create Batches as necessary
Also introduces the fee logic, despite it being stubbed.
2024-09-19 23:36:32 -07:00
Luke Parker
8ea5acbacb Update the Router smart contract to pay fees to the caller
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't affect the user's UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if we ever do have any costs slip through
the cracks, it ideally is sufficient to handle those.
2024-09-19 23:36:32 -07:00
Luke Parker
1b1aa74770 Correct forge fmt config 2024-09-19 23:36:32 -07:00
Luke Parker
861a8352e5 Update to the latest bitcoin-serai 2024-09-19 23:36:32 -07:00
Luke Parker
e64827b6d7 Mark files in TODO/ with "TODO" to ensure it pops up on search 2024-09-19 23:36:32 -07:00
Luke Parker
c27aaf8658 Merge BlockWithAcknowledgedBatch and BatchWithoutAcknowledgeBatch
Offers a simpler API to the coordinator.
2024-09-19 23:36:32 -07:00
Luke Parker
53567e91c8 Read NetworkId from ScannerFeed trait, not env 2024-09-19 23:36:32 -07:00
Luke Parker
1a08d50e16 Remove unused code in the Ethereum processor 2024-09-19 23:36:32 -07:00
Luke Parker
855e53164e Finish Ethereum ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
1367e41510 Add hooks to the main loop
Lets the Ethereum processor track the first key set as soon as it's set.
2024-09-19 23:36:32 -07:00
Luke Parker
a691be21c8 Call tidy_keys upon queue_key
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
2024-09-19 23:36:32 -07:00
Luke Parker
673cf8fd47 Pass the latest active key to the Block's scan function
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
2024-09-19 23:36:32 -07:00
Luke Parker
118d81bc90 Finish the Ethereum TX publishing code 2024-09-19 23:36:32 -07:00
Luke Parker
e75c4ec6ed Explicitly add an unspendable script path to the processor's generated keys 2024-09-19 23:36:32 -07:00
Luke Parker
9e628d217f cargo fmt, move ScannerFeed from String to the RPC error 2024-09-19 23:36:32 -07:00
Luke Parker
a717ae9ea7 Have the TransactionPublisher build a TxLegacy from Transaction 2024-09-19 23:36:32 -07:00
Luke Parker
98c3f75fa2 Move the Ethereum Action machine to its own file 2024-09-19 23:36:32 -07:00
Luke Parker
18178f3764 Add note on the returned top-level transfers being unordered 2024-09-19 23:36:32 -07:00
Luke Parker
bdc3bda04a Remove ethereum-serai/serai-processor-ethereum-contracts
contracts was smashed out of ethereum-serai. Both have now been smashed into
individual crates.

Creates a TODO directory with left-over test code yet to be moved.
2024-09-19 23:36:32 -07:00
Luke Parker
433beac93a Ethereum SignableTransaction, Eventuality 2024-09-19 23:36:32 -07:00
Luke Parker
8f2a9301cf Don't have the router drop transactions which may have top-level transfers
The router will now match the top-level transfer so it isn't used as the
justification for the InInstruction it's handling. This allows the theoretical
case where a top-level transfer occurs (to any entity) and an internal call
performs a transfer to Serai.

Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20
crate. This does add a dependency on tokio yet improves performance, and it's
scoped under serai-processor (which is always presumed to be tokio-based).
While we could instead import futures for join_all,
https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't
be a good idea. While we could prefer async-executor over tokio's JoinSet,
JoinSet doesn't share the same issues as FuturesUnordered. That means our
question is solely if we want the async-executor executor or the tokio
executor, when we've already established the Serai processor is always presumed
to be tokio-based.
2024-09-19 23:36:32 -07:00
Luke Parker
d21034c349 Add calls to get the messages to sign for the router 2024-09-19 23:36:32 -07:00
Luke Parker
381495618c Trim dead code 2024-09-19 23:36:32 -07:00
Luke Parker
ee0efe7cde Don't have the Deployer store the deployment block
Also updates how re-entrancy is handled to a more efficient and portable
mechanism.
2024-09-19 23:36:32 -07:00
Luke Parker
7feb7aed22 Hash the message before the challenge function in the Schnorr contract
Slightly more efficient.
2024-09-19 23:36:32 -07:00
Luke Parker
cc75a92641 Smash out the router library 2024-09-19 23:36:32 -07:00
Luke Parker
a7d5640642 Smash ERC20 into its own library 2024-09-19 23:36:32 -07:00
Luke Parker
ae61f3d359 forge fmt 2024-09-19 23:36:32 -07:00
Luke Parker
4bcea31c2a Break Ethereum Deployer into crate 2024-09-19 23:36:32 -07:00
Luke Parker
eb9bce6862 Remove OutInstruction's data field
It makes sense for networks which support arbitrary data to include it as part of their
address. This reduces the ability to perform DoSs, achieves better performance,
and better uses the type system (as now networks we don't support data on don't
have a data field).

Updates the Ethereum address definition in serai-client accordingly.
2024-09-19 23:36:32 -07:00
Luke Parker
39be23d807 Remove artifacts for serai-processor-ethereum-contracts 2024-09-19 23:36:32 -07:00
Luke Parker
3f0f4d520d Remove the Sandbox contract
If instead of intaking calls, we intake code, we can deploy a fresh contract
which makes arbitrary calls *without* attempting to build our abstraction
layer over the concept.

This should have the same gas costs, as we still have one contract deployment.
The new contract only has a constructor, so it should have no actual code and
beat the Sandbox in that regard? We do have to call into ourselves to meter the
gas, yet we already had to call into the deployed Sandbox to achieve that.

Also re-defines the OutInstruction to include tokens, implements
OutInstruction-specified gas amounts, bumps the Solidity version, and other
such misc changes.
2024-09-19 23:36:32 -07:00
Luke Parker
80ca2b780a Add tests for the premise of the Schnorr contract to the Schnorr crate 2024-09-19 23:36:32 -07:00
Luke Parker
0813351f1f OUT_DIR > artifacts 2024-09-19 23:36:32 -07:00
Luke Parker
a38d135059 rust-toolchain 1.81 2024-09-19 23:36:32 -07:00
Luke Parker
67f9f76fdf Remove publish = false 2024-09-19 23:36:32 -07:00
Luke Parker
1c5bc2259e Dedicated crate for the Schnorr contract 2024-09-19 23:36:32 -07:00
Luke Parker
bdf89f5350 Add dedicated crate for building Solidity contracts 2024-09-19 23:36:32 -07:00
Luke Parker
239127aae5 Add crate for the Ethereum contracts 2024-09-19 23:36:32 -07:00
Luke Parker
d9543bee40 Move ethereum-serai under the processor
It isn't generally usable and should be directly integrated at this point.
2024-09-19 23:36:32 -07:00
Luke Parker
8746b54a43 Don't use a different address for DAI in test
anvil will let us deploy to the existing address.
2024-09-19 23:36:32 -07:00
Luke Parker
7761798a78 Outline the Ethereum processor
This was only half-finished to begin with, unfortunately...
2024-09-19 23:36:32 -07:00
Luke Parker
72a18bf8bb Smart Contract Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0616085109 Monero Planner
Finishes the Monero processor.
2024-09-19 23:36:32 -07:00
Luke Parker
e23176deeb Change dummy payment ID behavior on 2-output, no change
This reduces the ability to fingerprint from any observer of the blockchain to
just one of the two recipients.
2024-09-19 23:36:32 -07:00
Luke Parker
5551521e58 Tighten documentation on Block::number 2024-09-19 23:36:32 -07:00
Luke Parker
a2d9aeaed7 Stub out Scheduler in the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
e1ad897f7e Allow scheduler's creation of transactions to be async and error
I don't love this, but it's the only way to select decoys without using a local
database. While the prior commit added such a database, its performance
presumably wasn't viable, and while TODOs marked the needed improvements, it
was still messy, with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self` are
expected to be completely deterministic (as usual).
2024-09-19 23:36:32 -07:00
Luke Parker
2edc2f3612 Add a database of all Monero outs into the processor
Enables synchronous transaction creation (which requires synchronous decoy
selection).
2024-09-19 23:36:32 -07:00
Luke Parker
e56af7fc51 Monero time_for_block, dust 2024-09-19 23:36:32 -07:00
Luke Parker
947e1067d9 Monero Processor scan, check_for_eventuality_resolutions 2024-09-19 23:36:32 -07:00
Luke Parker
b4e94f3d51 cargo fmt signers/scanner 2024-09-19 23:36:32 -07:00
Luke Parker
1b39138472 Define subaddress indexes to use
(1, 0) is the external address. (2, *) are the internal addresses.
2024-09-19 23:36:32 -07:00
Luke Parker
e78236276a Remove async-trait from processor/
Part of https://github.com/serai-dex/issues/607.
2024-09-19 23:36:32 -07:00
Luke Parker
2c4c33e632 Misc continuances on the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
02409c5735 Correct Multisig Rotation to use WINDOW_LENGTH where proper 2024-09-19 23:36:32 -07:00
Luke Parker
f2cf03cedf Monero processor primitives 2024-09-19 23:36:32 -07:00
Luke Parker
0d4c8cf032 Use a local DB channel for sending to the message-queue
The provided message-queue send function runs until it succeeds. This means
sending to the message-queue will no longer potentially block for an arbitrary
amount of time, as sending messages is now just writing them to a DB.
2024-09-19 23:36:32 -07:00
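A minimal sketch of the pattern, with an in-memory queue standing in for the DB:

    use std::collections::VecDeque;

    struct DbChannel {
      // Stand-in for DB-persisted, ordered entries.
      queue: VecDeque<Vec<u8>>,
    }

    impl DbChannel {
      // "Sending" is just a local write; it never blocks on the network.
      fn send(&mut self, msg: Vec<u8>) {
        self.queue.push_back(msg);
      }

      // A background task drains the queue in order, retrying each actual
      // network send until it succeeds before removing the entry.
      fn drain(&mut self, mut publish: impl FnMut(&[u8]) -> bool) {
        while let Some(msg) = self.queue.front() {
          if !publish(msg) {
            break; // retry on the next run
          }
          self.queue.pop_front();
        }
      }
    }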
Luke Parker
b6811f9015 serai-processor-bin
Moves the coordinator loop out of serai-bitcoin-processor, completing it.

Fixes a potential race condition in the message-queue regarding multiple
sockets sending messages at once.
2024-09-19 23:36:32 -07:00
Luke Parker
fcd5fb85df Add binary search to find the block to start scanning from 2024-09-19 23:36:32 -07:00
Luke Parker
3ac0265f07 Add section documenting the safety of txindex upon reorganizations 2024-09-19 23:36:32 -07:00
Luke Parker
9b8c8f8231 Misc tidying of serai-db calls 2024-09-19 23:36:32 -07:00
Luke Parker
59fa49f750 Continue filling out main loop
Adds generics to the db_channel macro, fixes the bug where it needed at least
one key.
2024-09-19 23:36:32 -07:00
Luke Parker
723f529659 Note better message structure in messages 2024-09-19 23:36:32 -07:00
Luke Parker
73af09effb Add note to signers on reducing disk IO 2024-09-19 23:36:32 -07:00
Luke Parker
4054e44471 Start on the new processor main loop 2024-09-19 23:36:32 -07:00
Luke Parker
a8159e9070 Bitcoin Key Gen 2024-09-19 23:36:32 -07:00
Luke Parker
b61ba9d1bb Adjust Bitcoin processor layout 2024-09-19 23:36:32 -07:00
Luke Parker
776cbbb9a4 Misc changes in response to prior two commits 2024-09-19 23:36:32 -07:00
Luke Parker
76a3f3ec4b Add an anyone-can-pay output to every Bitcoin transaction
Resolves #284.
2024-09-19 23:36:32 -07:00
Luke Parker
93c7d06684 Implement presumed_origin
Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
2024-09-19 23:36:32 -07:00
Luke Parker
4cb838e248 Bitcoin processor lib.rs -> main.rs 2024-09-19 23:36:32 -07:00
Luke Parker
c988b7cdb0 Bitcoin TransactionPublisher 2024-09-19 23:36:32 -07:00
Luke Parker
017aab2258 Satisfy Scheduler for Bitcoin 2024-09-19 23:36:32 -07:00
Luke Parker
ba3a6f9e91 Bitcoin ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
e36b671f37 Remove bound that WINDOW_LENGTH < CONFIRMATIONS
It's unnecessary and not valuable.
2024-09-19 23:36:32 -07:00
Luke Parker
2d4b775b6e Add bitcoin Block trait impl 2024-09-19 23:36:32 -07:00
Luke Parker
247cc8f0cc Bitcoin Output/Transaction definitions 2024-09-19 23:36:32 -07:00
Luke Parker
0ccf71df1e Remove old signer impls 2024-09-19 23:36:32 -07:00
Luke Parker
8aba71b9c4 Add CosignerTask to signers, completing it 2024-09-19 23:36:32 -07:00
Luke Parker
46c12c0e66 SlashReport signing and signature publication 2024-09-19 23:36:32 -07:00
Luke Parker
3cc7b49492 Strongly type SlashReport, populate cosign/slash report tasks with work 2024-09-19 23:36:32 -07:00
Luke Parker
0078858c1c Tidy messages, publish all Batches to the coordinator
Prior, we published SignedBatches, yet Batches are necessary for auditing
purposes.
2024-09-19 23:36:32 -07:00
Luke Parker
a3cb514400 Have the coordinator task publish Batches 2024-09-19 23:36:32 -07:00
Luke Parker
ed0221d804 Add BatchSignerTask
Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[].
2024-09-19 23:36:32 -07:00
Luke Parker
4152bcacb2 Replace scanner's BatchPublisher with a pair of DB channels 2024-09-19 23:36:32 -07:00
Luke Parker
f07ec7bee0 Route the coordinator, fix race conditions in the signers library 2024-09-19 23:36:32 -07:00
Luke Parker
7484eadbbb Expand task management
These extensions are necessary for the signers task management.
2024-09-19 23:36:32 -07:00
Luke Parker
59ff944152 Work on the higher-level signers API 2024-09-19 23:36:32 -07:00
Luke Parker
8f848b1abc Tidy transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
100c80be9f Finish transaction signing task with TX rebroadcast code 2024-09-19 23:36:32 -07:00
Luke Parker
a353f9e2da Further work on transaction signing 2024-09-19 23:36:32 -07:00
Luke Parker
b62fc3a1fa Minor work on the transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
8380653855 Add empty serai-processor-signers library
This will replace the signers still in the monolithic Processor binary.
2024-09-19 23:36:32 -07:00
Luke Parker
b50b889918 Split processor into bitcoin-processor, ethereum-processor, monero-processor 2024-09-19 23:36:32 -07:00
Luke Parker
d570c1d277 Move additional_key.rs to serai-processor-view-keys
I don't love this. I wanted to simply add this function to `processor/key-gen`,
but then anyone who wants a view key needs to pull in Bulletproofs which is a
mess of code. They'd also be subject to an AGPL-licensed library.

This is so small it should be a primitive elsewhere, yet there is no primitives
library eligible. Maybe serai-client since that has the code to make
transactions to Serai (and will have this as a dependency)? Except then the
processor has to import serai-client when this rewrite removed it as a
dependency.
2024-09-19 23:36:32 -07:00
Luke Parker
2da24506a2 Remove vast swaths of legacy code in the processor 2024-09-19 23:36:32 -07:00
Luke Parker
6e9cb74022 Add non-transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0c1aec29bb Finish routing output flushing
Completes the transaction-chaining scheduler.
2024-09-19 23:36:32 -07:00
Luke Parker
653ead1e8c Finish the tree logic in the transaction-chaining scheduler
Also completes the DB functions, makes Scheduler never instantiated, and
ensures tree roots have change outputs.
2024-09-19 23:36:32 -07:00
Luke Parker
8ff019265f Near-complete version of the tree algorithm in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0601d47789 Work on the tree logic in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
ebef38d93b Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times 2024-09-19 23:36:32 -07:00
Luke Parker
75b4707002 Add input aggregation in the transaction-chaining scheduler
Also handles some other misc in it.
2024-09-19 23:36:32 -07:00
Luke Parker
3c787e005f Fix bug in the scanner regarding forwarded output amounts
We'd report the amount originally received, minus 2x the cost to aggregate,
regardless of the amount successfully forwarded. We should've reduced it to
the amount successfully forwarded, if that was smaller, in case the cost to
forward exceeded the aggregation cost.
2024-09-19 23:36:32 -07:00
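The corrected accounting, roughly (names are illustrative):

    // Report the lesser of the received amount (less 2x the aggregation
    // cost) and the amount actually forwarded, in case forwarding cost more
    // than aggregation did.
    fn reported_amount(received: u64, cost_to_aggregate: u64, forwarded: u64) -> u64 {
      received.saturating_sub(2 * cost_to_aggregate).min(forwarded)
    }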
Luke Parker
f11a6b4ff1 Better document the forwarded output flow 2024-09-19 23:36:32 -07:00
Luke Parker
fadc88d2ad Add scheduler-primitives
The main benefit is that, whatever scheduler is in use, we now have a single API to
receive TXs to sign (which is of value to the TX signer crate we'll inevitably
build).
2024-09-19 23:36:32 -07:00
Luke Parker
c88ebe985e Outline of the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
6deb60513c Expand primitives/scanner with niceties needed for the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bd277e7032 Add processor/scheduler/utxo/primitives
Includes the necessary signing functions and the fee amortization logic.

Moves transaction-chaining to utxo/transaction-chaining.
2024-09-19 23:36:32 -07:00
Luke Parker
fc765bb9e0 Add crate for the transaction-chaining Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
13b74195f7 Don't have acknowledge_batch immediately run
`acknowledge_batch` can only be run if we know what the Batch should be. If we
don't know what the Batch should be, we have to block until we do.
Specifically, we need the block number associated with the Batch.

Instead of blocking over the Scanner API, the Scanner API now solely queues
actions. A new task intakes those actions once we can. This ensures we can
intake the entire Substrate chain, even if our daemon for the external network
is stalled at its genesis block.

All of this for the block number alone seems ridiculous. To go from the block
hash in the Batch to the block number without this task, we'd at least need the
index task to be up to date (still requiring blocking or an API returning
ephemeral errors).
2024-09-19 23:36:32 -07:00
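A sketch of the queue-then-intake flow (names are hypothetical, not the scanner's actual API):

    enum Action {
      AcknowledgeBatch { block_hash: [u8; 32] },
    }

    #[derive(Default)]
    struct ActionQueue {
      queued: Vec<Action>,
    }

    impl ActionQueue {
      // Called while intaking the Substrate chain; never blocks.
      fn acknowledge_batch(&mut self, block_hash: [u8; 32]) {
        self.queued.push(Action::AcknowledgeBatch { block_hash });
      }

      // Run by a dedicated task, handling each action once the information
      // it needs (here, the block number for a hash) is available.
      fn intake(&mut self, block_number: impl Fn(&[u8; 32]) -> Option<u64>) {
        self.queued.retain(|action| match action {
          Action::AcknowledgeBatch { block_hash } => match block_number(block_hash) {
            // Acknowledge the Batch at this block number, then dequeue.
            Some(_number) => false,
            // Keep queued until the index task knows this block.
            None => true,
          },
        });
      }
    }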
Luke Parker
f21838e0d5 Replace acknowledge_block with acknowledge_batch 2024-09-19 23:36:32 -07:00
Luke Parker
76cbe6cf1e Have acknowledge_block take in the results of the InInstructions executed
If any failed, the scanner now creates a Burn for the return.
2024-09-19 23:36:32 -07:00
Luke Parker
5999f5d65a Route the DB w.r.t. forwarded outputs' information 2024-09-19 23:36:32 -07:00
Luke Parker
d429a0bae6 Remove unused ID -> number lookup 2024-09-19 23:36:32 -07:00
Luke Parker
775824f373 Impl ScanData serialization in the DB 2024-09-19 23:36:32 -07:00
Luke Parker
41a74cb513 Check a queued key has never been queued before
Re-queueing should only happen with a malicious supermajority and breaks
indexing by the key.
2024-09-19 23:36:32 -07:00
Luke Parker
e26da1ec34 Have the Eventuality task drop outputs which aren't ours and aren't worth it to aggregate
We could drop these entirely, yet there's some degree of utility to be able to
add coins to Serai in this manner.
2024-09-19 23:36:32 -07:00
Luke Parker
7266e7f7ea Add note on why LifetimeStage is monotonic 2024-09-19 23:36:32 -07:00
Luke Parker
a8b9b7bad3 Add sanity checks we haven't prior reported an InInstruction for/accumulated an output 2024-09-19 23:36:32 -07:00
Luke Parker
2ca7fccb08 Pass the lifetime information to the scheduler
Enables it to decide which keys to use for fulfillment/change.
2024-09-19 23:36:32 -07:00
Luke Parker
4f6d91037e Call flush_key 2024-09-19 23:36:32 -07:00
Luke Parker
8db76ed67c Add key management to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
920303e1b4 Add helper to intake Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
9f4b28e5ae Clarify output-to-self to output-to-Serai
There's only the requirement it's to an active key which is being reported for.
2024-09-19 23:36:32 -07:00
Luke Parker
f9d02d43c2 Route burns through the scanner 2024-09-19 23:36:32 -07:00
Luke Parker
8ac501028d Add API to publish Batches with
This doesn't have to be abstract, we can generate the message and use the
message-queue API, yet this should help with testing.
2024-09-19 23:36:32 -07:00
Luke Parker
612c67c537 Cache the cost to aggregate 2024-09-19 23:36:32 -07:00
Luke Parker
04a971a024 Fill in various DB functions 2024-09-19 23:36:32 -07:00
Luke Parker
738636c238 Have Scanner::new spawn tasks 2024-09-19 23:36:32 -07:00
Luke Parker
65f3f48517 Add ReportDb 2024-09-19 23:36:32 -07:00
Luke Parker
7cc07d64d1 Make report.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
fdfe520f9d Add ScanDb 2024-09-19 23:36:32 -07:00
Luke Parker
77ef25416b Make scan.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7c1025dbcb Implement key retirement 2024-09-19 23:36:32 -07:00
Luke Parker
a771fbe1c6 Logs, documentation, misc 2024-09-19 23:36:32 -07:00
Luke Parker
9cebdf7c68 Add sorts for safety even upon non-determinism 2024-09-19 23:36:32 -07:00
Luke Parker
75251f04b4 Use a channel for the InInstructions
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably, extending the InInstruction channel with the associated
output ID?
2024-09-19 23:36:32 -07:00
Luke Parker
6196642beb Add a DbChannel between scan and eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
2bddf00222 Don't expose IndexDb throughout the crate 2024-09-19 23:36:32 -07:00
Luke Parker
9ab8ba0215 Add dedicated Eventuality DB and stub missing fns 2024-09-19 23:36:32 -07:00
Luke Parker
33e0c85f34 Make Eventuality a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
1e8f4e6156 Make a dedicated IndexDb 2024-09-19 23:36:32 -07:00
Luke Parker
66f3428051 Make index a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7e71840822 Add helper methods
Checks that fetched blocks are the indexed blocks. Sorts scanned outputs so
they aren't subject to an implicit order which may be non-deterministic
(such as if handled by a threadpool).
2024-09-19 23:36:32 -07:00
Luke Parker
b65dbacd6a Move ContinuallyRan into primitives
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
2024-09-19 23:36:32 -07:00
Luke Parker
2fcd9530dd Add a callback to accumulate outputs and return the new Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
379780a3c9 Flesh out eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
945f31dfc7 Have the scan flag blocks with change/branch/forwarded as notable 2024-09-19 23:36:32 -07:00
Luke Parker
d5d1fc3eea Flesh out report task 2024-09-19 23:36:32 -07:00
Luke Parker
fd12cc0213 Finish scan task 2024-09-19 23:36:32 -07:00
Luke Parker
ce805c8cc8 Correct compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
bc0cc5a754 Decide flow between scan/eventuality/report
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (prior planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
2024-09-19 23:36:32 -07:00
Luke Parker
f2ee4daf43 Add Eventuality back to processor primitives
Also splits crate into modules.
2024-09-19 23:36:32 -07:00
Luke Parker
4e29678799 Add bounds for the eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
74d3075dae Document expectations on Eventuality task and correct code determining the block safe to scan/report 2024-09-19 23:36:32 -07:00
Luke Parker
155ad48f4c Handle dust 2024-09-19 23:36:32 -07:00
Luke Parker
951872b026 Differentiate BlockHeader from Block 2024-09-19 23:36:32 -07:00
Luke Parker
2b47feafed Correct misc compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
a2717d73f0 Flesh out new scanner a bit more
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
2024-09-19 23:36:32 -07:00
Luke Parker
8763ef23ed Definition and delineation of tasks within the scanner
Also defines primitives for the processor.
2024-09-19 23:36:32 -07:00
Luke Parker
57a0ba966b Extend serai-db with support for generic keys/values 2024-09-19 23:36:32 -07:00
Luke Parker
e843b4a2a0 Move scanner.rs to scanner/lib.rs 2024-09-19 23:36:32 -07:00
Luke Parker
2f3bd7a02a Cleanup DB handling a bit in key-gen/attempt-manager 2024-09-19 23:36:32 -07:00
Luke Parker
1e8a9ec5bd Smash out the signer
Abstract, to be done for the transactions, the batches, the cosigns, the slash
reports, everything. It has a minimal API itself, intending to be as clear as
possible.
2024-09-19 23:36:32 -07:00
Luke Parker
2f29c91d30 Smash key-gen out of processor
Resolves some bad assumptions made regarding keys being unique or not.
2024-09-19 23:36:32 -07:00
Luke Parker
f3b91bd44f Smash key-gen into independent crate 2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
lagrange interpolation factor.

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on  functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Updating existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we wouldn't reuse preprocesses across messages.
That raises the question: were we previously reusing preprocesses (reusing
keys)? Except that'd have caused a variety of signing failures (suggesting we
had some staggered timing avoiding it in practice, but yes, this was possible
in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values already decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
Luke Parker
669b2fef72 Remove test_tweak_keys
What it tests no longer applies since tweak_keys now introduces an unspendable
script path.
2024-09-19 21:43:00 -04:00
Luke Parker
3af430d8de Use the IETF transcript in bitcoin-serai, not RecommendedTranscript
This is more likely to be interoperable in the long term.
2024-09-19 21:13:08 -04:00
Luke Parker
dfb5a053ae Resolve #611 2024-09-19 20:58:33 -04:00
Luke Parker
bdcc061bb4 Add ScannableBlock abstraction in the RPC
Makes scanning synchronous, only erroring upon a malicious node or an
unplanned-for hard fork.
2024-09-13 04:38:49 -04:00
Luke Parker
2c7148d636 Add machete exception for monero-clsag to monero-wallet 2024-09-13 02:39:43 -04:00
Luke Parker
6b270bc6aa Remove async-trait from monero-rpc 2024-09-13 02:36:53 -04:00
Luke Parker
875c669a7a Remove monero-serai multisig for just monero-[clsag, wallet] multisig 2024-09-12 18:41:35 -04:00
Luke Parker
0d399ecb28 Remove unused error in monero-address 2024-09-12 18:41:35 -04:00
Luke Parker
88440807e1 Monero v0.18.3.4 (#605)
* Monero v0.18.3.4

* Correct `check_weight_and_fee` call

* Restore empty test files so CI isn't borked
2024-09-06 01:43:31 -04:00
Luke Parker
c1a9256cc5 dockertest 0.5, correct errors from prior update commit 2024-09-05 23:31:45 -04:00
Luke Parker
0d5756ffcf cargo update, upgrade alloy
Removes a dated proc-macro-crate patch.
2024-09-05 17:03:23 -04:00
Luke Parker
ac7b98daac Remove tokio dependency from tendermint-machine
Indirects it via a minimal wrapper which can be trivially patched.
2024-09-05 16:30:27 -04:00
Luke Parker
efc7d70ab1 Clarify when wallet2 will decrypt payment IDs with citations 2024-09-05 15:50:36 -04:00
Luke Parker
4e834873d3 Lints from latest nightly
We can't adopt it due to some issue with building the runtime, but these are
good to have.
2024-09-01 16:33:44 -04:00
akildemir
a506d74d69 move economic security into its own pallet (#596)
* move economic security into its own pallet

* fix deny

* Update Cargo.toml, .github for the new crates

* Remove unused import

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-31 18:55:42 -04:00
Boog900
394db44b30 Monero: fix signature hash for V1 txs (#598)
* fix signature hash for V1 txs

* fix CI
2024-08-23 20:34:54 -04:00
akildemir
a2df54dd6a merge genesis complete block with genesis ended 2024-08-15 08:15:40 -07:00
akildemir
efc45c391b update emissions pallet author email 2024-08-15 08:12:47 -07:00
akildemir
cccc1fc7e6 Implement block emissions (#551)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* implement block emissions

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to latest develop

* fix rotation test

* fix licensing

* add fast-epoch feature

* only create the pool when adding liquidity first time

* add initial reward era test

* test whole pre ec security emissions

* fix clippy

* add swap-to-staked-sri feature

* rebase changes

* fix tests

* Remove accidentally commited ETH ABI files

* fix some pr comments

* Finish up fixing pr comments

* exclude SRI from is_allowed check

* Misc changes

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-14 23:12:04 -04:00
akildemir
bf1c493d9a add missing prevotes (#590)
* add missing prevotes

* remove the TODO

* add missing current step checks

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-08-14 15:00:48 -04:00
Luke Parker
3de1e4dee2 Remove stray file in docs/ 2024-08-05 06:52:15 -04:00
Luke Parker
2591b5ade9 Update Gemfile.lock to silence a rexml disclosure 2024-08-03 02:57:56 -04:00
akildemir
e6620963c7 update author email 2024-08-02 01:50:33 -07:00
Luke Parker
d5205ce231 Update dependencies
Resolves a yanked version of bytemuck.
2024-08-01 04:06:09 -04:00
Luke Parker
0f6878567f Remove a pair of unused structs/deps
Caught by the most recent nightly.
2024-08-01 01:36:10 -04:00
Luke Parker
880565cb81 Rust 1.80
Preserves the fn accessors within the Monero crates so that we can use statics
in some cfgs yet not all (in order to provide support for more low-memory
devices) with the exception of `H` (which truly should be cached).
2024-07-26 19:28:10 -07:00
Luke Parker
6f34c2ff77 Remove unused git allowance for monero-rs 2024-07-19 23:51:05 -04:00
akildemir
1493f49416 Implement genesis liquidity protocol (#545)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to latest develop

* fix rotation test

* Finish merging develop

* Remove accidentally committed ETH files

* fix pr comments

* further bug fixes

* fix last pr comments

* tidy up

* Misc

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-07-18 19:30:19 -04:00
Luke Parker
2ccb0cd90d Correct version of ruby update is run with
Hopefully finally resolves the site build failures.
2024-07-18 16:47:59 -04:00
Luke Parker
b33a6487aa Rename DKG specified in FROST from FROST to PedPoP 2024-07-18 16:41:31 -04:00
Luke Parker
491500057b Update Ruby version used in GH workflow 2024-07-18 16:09:01 -04:00
Luke Parker
d9f85fab26 Update lockfiles
Resolves a dependabot alert about the Ruby used to generate the docs site.
2024-07-18 15:18:08 -04:00
Luke Parker
7d2d739042 Rename the coins folder to networks (#583)
* Rename the coins folder to networks

Ethereum isn't a coin. It's a network.

Resolves #357.

* More renames of coins -> networks in orchestration

* Correct paths in tests/

* cargo fmt
2024-07-18 15:16:45 -04:00
akildemir
40cc180853 add transaction and crypto unit tests 2024-07-17 16:26:31 -07:00
Luke Parker
2aac6f6998 Improve usage of constants in coordinator p2p 2024-07-17 06:54:54 -04:00
Luke Parker
149c2a4437 Use non-pruned nodes in verify-chain 2024-07-17 06:54:26 -04:00
Luke Parker
e772b8a5f7 #560 take two, now that #560 has been reverted (#561)
* Clear upons upon round, not block

* Cache the proposal for a round

* Rebase onto develop, which reverted this PR, and re-apply this PR

* Set participation upon participation instead of constantly recalculating

* Cache message instances

* Add missing txn commit

Identified by @akildemir.

* Correct clippy lint identified upon rebase

* Fix tendermint chain sync (#581)

* fix p2p Reqres protocol

* stabilize tributary chain sync

* fix pr comments

---------

Co-authored-by: akildemir <34187742+akildemir@users.noreply.github.com>
2024-07-16 19:42:15 -04:00
Luke Parker
c0200df75a Add missing feature flag to dalek-ff-group 2024-07-15 21:50:43 -04:00
Luke Parker
9955ef54a5 Apply bitcoin fee per vsize, not per weight unit
This enables more precision.
2024-07-15 17:37:04 -07:00
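
A minimal sketch of the relation this relies on (hypothetical helper, not the
actual bitcoin-serai code); per BIP 141, a transaction's virtual size is its
weight in weight units divided by four, rounded up:

fn fee_from_weight(weight: u64, fee_per_vsize: u64) -> u64 {
  // vsize = ceil(weight / 4), per BIP 141
  let vsize = weight.div_ceil(4);
  vsize * fee_per_vsize
}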
Luke Parker
8e7e61adbd Respect maximum amount of outs per request 2024-07-14 20:28:10 -04:00
Luke Parker
0cb24dde02 cargo update
Resolves failing deny.
2024-07-14 20:27:36 -04:00
Luke Parker
97bfb183e8 Correct typo in coordinator
Identified by akil a while ago.
2024-07-14 19:35:45 -04:00
Luke Parker
85fc31fd82 Have monero-wallet use Transaction<Pruned>, not Transaction 2024-07-14 19:30:50 -04:00
Luke Parker
7b8bcae396 Add support for pruned transactions to monero-serai 2024-07-13 00:29:02 -04:00
Luke Parker
70fe52437c Have RPC tests run sequentially
Also corrects links pointing to branches to point to commits.
2024-07-12 22:09:46 -04:00
Luke Parker
ba657e23d1 Have a public monero-rpc type be properly formatted
It was public as the raw RPC response. It's more polite to handle the
formatting in the RPC, which also allows us to return a better structure.
2024-07-12 04:14:05 -04:00
Luke Parker
32c24917c4 Correct tests which should've failed to expect failures now that they fail 2024-07-12 03:09:48 -04:00
Luke Parker
4ba961b2cb Cite source for obscure wallet protocol rules 2024-07-12 02:19:21 -04:00
Luke Parker
c59be46e2f Optimize Monero BPs 2024-07-12 02:18:57 -04:00
Luke Parker
2c165e19ae Bitcoin 27.1 2024-07-12 02:18:43 -04:00
Luke Parker
ee10692b23 Fix handling of output distribution
We previously didn't handle the fact that the output distribution only starts
at a specific block.
2024-07-11 18:06:51 -04:00
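
A minimal sketch of the fix described (hypothetical names): the distribution
the RPC returns has no entries before the block it starts at, so heights must
be offset before indexing into it.

fn cumulative_outputs_at(
  distribution: &[u64],
  start_height: usize,
  height: usize,
) -> Option<u64> {
  // Heights before the distribution's start simply have no entry
  height.checked_sub(start_height).and_then(|offset| distribution.get(offset)).copied()
}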
Luke Parker
7a68b065e0 Redo the Bulletproofs impl
Uses the IP-impl from the FCMP++ work.
2024-07-10 21:05:23 -04:00
Luke Parker
3ddf1eec0c Fix no-std builds for monero-wallet 2024-07-09 02:17:57 -04:00
Luke Parker
84f0e6c26e Add additional documentation 2024-07-08 20:33:00 -04:00
Luke Parker
5bb3256d1f Support subaddresses as change outputs 2024-07-08 20:00:21 -04:00
Luke Parker
774424b70b Differentiate Rpc from DecoyRpc
Enables using a locally backed decoy DB.
2024-07-08 18:14:56 -04:00
Luke Parker
ed662568e2 Clean decoy selection code 2024-07-08 02:51:06 -04:00
Luke Parker
b744ac9a76 Clean decoy selection 2024-07-08 02:38:01 -04:00
Luke Parker
d7f7f69738 Remove the DecoySelection trait 2024-07-08 00:30:42 -04:00
Luke Parker
a2c3aba82b Clean the Monero lib for auditing (#577)
* Remove unsafe creation of dalek_ff_group::EdwardsPoint in BP+

* Rename Bulletproofs to Bulletproof, since they are a single Bulletproof

Also bifurcates prove with prove_plus, and adds a few documentation items.

* Make CLSAG signing private

Also adds a bit more documentation and does a bit more tidying.

* Remove the distribution cache

It's a notable bandwidth/performance improvement, yet it's not ready. We need a
dedicated Distribution struct which is managed by the wallet and passed in.
While we can do that now, it's not currently worth the effort.

* Tidy Borromean/MLSAG a tad

* Remove experimental feature from monero-serai

* Move amount_decryption into EncryptedAmount::decrypt

* Various RingCT doc comments

* Begin crate smashing

* Further documentation, start shoring up API boundaries of existing crates

* Document and clean clsag

* Add a dedicated send/recv CLSAG mask struct

Abstracts the types used internally.

Also moves the tests from monero-serai to monero-clsag.

* Smash out monero-bulletproofs

Removes usage of dalek-ff-group/multiexp for curve25519-dalek.

Makes compiling in the generators an optional feature.

Adds a structured batch verifier which should be notably more performant.

Documentation and clean up still necessary.

* Correct no-std builds for monero-clsag and monero-bulletproofs

* Tidy and document monero-bulletproofs

I still don't like the impl of the original Bulletproofs...

* Error if missing documentation

* Smash out MLSAG

* Smash out Borromean

* Tidy up monero-serai as a meta crate

* Smash out RPC, wallet

* Document the RPC

* Improve docs a bit

* Move Protocol to monero-wallet

* Incomplete work on using Option to remove panic cases

* Finish documenting monero-serai

* Remove TODO on reading pseudo_outs for AggregateMlsagBorromean

* Only read transactions with one Input::Gen or all Input::ToKey

Also adds a helper to fetch a transaction's prefix.

* Smash out polyseed

* Smash out seed

* Get the repo to compile again

* Smash out Monero addresses

* Document cargo features

Credit to @hinto-janai for adding such sections to their work on documenting
monero-serai in #568.

* Fix deserializing v2 miner transactions

* Rewrite monero-wallet's send code

I have yet to redo the multisig code and the builder. This should be much
cleaner, albeit slower due to redoing work.

This compiles with clippy --all-features. I have to finish the multisig/builder
for --all-targets to work (and start updating the rest of Serai).

* Add SignableTransaction Read/Write

* Restore Monero multisig TX code

* Correct invalid RPC type def in monero-rpc

* Update monero-wallet tests to compile

Some are _consistently_ failing due to the inputs we attempt to spend being too
young. I'm unsure what's up with that. Most seem to pass _consistently_,
implying it's not a random issue but rather some configuration/env aspect.

* Clean and document monero-address

* Sync rest of repo with monero-serai changes

* Represent height/block number as a u32

* Diversify ViewPair/Scanner into ViewPair/GuaranteedViewPair and Scanner/GuaranteedScanner

Also cleans the Scanner impl.

* Remove non-small-order view key bound

Guaranteed addresses remain guaranteed even with this, as prefixing the key
images means a zeroed ECDH no longer zeroes the shared key.

* Finish documenting monero-serai

* Correct imports for no-std

* Remove possible panic in monero-serai on systems < 32 bits

This was done by requiring that the system's usize can represent a certain number.

* Restore the reserialize chain binary

* fmt, machete, GH CI

* Correct misc TODOs in monero-serai

* Have Monero test runner evaluate an Eventuality for all signed TXs

* Fix a pair of bugs in the decoy tests

Unfortunately, this test is still failing.

* Fix remaining bugs in monero-wallet tests

* Reject torsioned spend keys to ensure we can spend the outputs we scan

* Tidy inlined epee code in the RPC

* Correct the accidental swap of stagenet/testnet address bytes

* Remove unused dep from processor

* Handle Monero fee logic properly in the processor

* Document v2 TX/RCT output relation assumed when scanning

* Adjust how we mine the initial blocks due to some CI test failures

* Fix weight estimation for RctType::ClsagBulletproof TXs

* Again increase the amount of blocks we mine prior to running tests

* Correct the if check about when to mine blocks on start

Finally fixes the "lack of decoy candidates" failures in CI.

* Run Monero on Debian, even for internal testnets

Change made due to a segfault incurred when locally testing.

https://github.com/monero-project/monero/issues/9141 for the upstream.

* Don't attempt running tests on the verify-chain binary

Adds a minimum XMR fee to the processor and runs fmt.

* Increase minimum Monero fee in processor

I'm truly unsure why this is required right now.

* Distinguish fee from necessary_fee in monero-wallet

If there's no change, the fee is the difference between the inputs and the
outputs. The prior code wouldn't check that this amount is greater than or
equal to the necessary fee, and returning the would-be change amount as the
fee isn't necessarily helpful.

Now the fee is validated in such cases and the necessary fee is returned,
enabling operating off of that (see the sketch after this entry).

* Restore minimum Monero fee from develop
2024-07-07 06:57:18 -04:00
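
A sketch of the fee/necessary_fee distinction described in the bullet above
(stand-in function, not the monero-wallet API):

fn fee_when_no_change(
  inputs: u64,
  outputs: u64,
  necessary_fee: u64,
) -> Result<u64, &'static str> {
  // Without a change output, whatever the inputs don't pay out is implicitly
  // the fee
  let implicit_fee = inputs.checked_sub(outputs).ok_or("outputs exceed inputs")?;
  if implicit_fee < necessary_fee {
    return Err("inputs don't cover the outputs and the necessary fee");
  }
  // Return the necessary fee, not the implicit fee, as it's what callers can
  // meaningfully operate off of
  Ok(necessary_fee)
}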
Luke Parker
703c6a2358 Typo corrections meant to be added in the prior commit 2024-07-04 02:13:34 -04:00
Luke Parker
52bb918cc9 Ensure TotalAllocatedStake is set for the first set 2024-07-04 02:11:04 -04:00
GitHub Actions
ba244e8090 Update nightly 2024-07-02 00:43:14 -04:00
akildemir
3e99d68cfe fix total allocated stake update at the wrong time (#518)
* fix total allocated stake update at the wrong time

* Restore mid-set increases

* Correct typo I introduced

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-24 07:41:25 -04:00
akildemir
4d9c2df38c Add coordinator rotation test (#535)
* add node side unit test

* complete rotation test for all networks

* set up the fast-epoch docker file

* fix pr comments

* add coordinator side rotation test

* bug fixes

* Remove EPOCH_INTERVAL

* Minor nits

* Add note on origin of publish_tx function in tests/coordinator

* Correct ThresholdParams assert_eq

* fmt

* Correct detection of handover completion

* Restore key gen message match from develop

It was modified in response to the handover completion bug, which has now been
resolved.

* bug fixes

* Correct invalid constant

* Typo fixes

* remove selecting participant to remove at random

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-21 08:39:17 -04:00
Luke Parker
8ab6f9c36e alloy 0.1 2024-06-19 12:39:47 -04:00
Luke Parker
253cf3253d Correct hash for 1.79.0-slim-bookworm docker image 2024-06-13 19:00:01 -04:00
Luke Parker
03445b3020 Update httparse, as 1.9.2 was yanked 2024-06-13 16:49:58 -04:00
Luke Parker
9af111b4aa Rust 1.79, cargo update 2024-06-13 15:57:08 -04:00
Luke Parker
41ce5b1738 Use the serai_abi::Call in the actual Transaction type
We previously required they have the same encoding, yet this ensures they do by
making them one and the same. This does require a large, ugly From/TryInto
block, which is deemed preferable for moving this more and more into syntax
(from semantics).

Further improvements (notably re: Extra) are possible, and this already lets us
strip some members from the Call enum.
2024-06-03 23:38:22 -04:00
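
A minimal sketch of the pattern (stand-in types, not the actual serai_abi
definitions): the ABI's Call becomes the type embedded in the Transaction,
with one From/TryInto block bridging to the runtime's own call enum.

enum Call {
  Transfer { to: [u8; 32], amount: u64 },
}

enum RuntimeCall {
  Transfer { to: [u8; 32], amount: u64 },
  // Runtime-internal calls the ABI deliberately doesn't expose
  Internal,
}

impl From<Call> for RuntimeCall {
  fn from(call: Call) -> Self {
    match call {
      Call::Transfer { to, amount } => RuntimeCall::Transfer { to, amount },
    }
  }
}

impl TryFrom<RuntimeCall> for Call {
  type Error = ();
  fn try_from(call: RuntimeCall) -> Result<Self, ()> {
    match call {
      RuntimeCall::Transfer { to, amount } => Ok(Call::Transfer { to, amount }),
      RuntimeCall::Internal => Err(()),
    }
  }
}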
Luke Parker
2a05cf3225 June 2024 nightly update
Replaces #571.
2024-06-01 21:46:49 -04:00
Luke Parker
f4147c39b2 bitcoin 0.32.1 2024-05-31 01:02:43 -04:00
rlking
cd69f3b9d6 Check if wasm was built by container exit code and state instead of local mountpoint (#570)
* Check if the serai wasm was built successfully by verifying the build container's status code and state, instead of checking the volume mountpoint locally

* Use a log statement for which wasm is used

* Minor typo fix

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-05-25 20:33:23 -04:00
Luke Parker
1d2beb3ee4 Ethereum relayer server
Causes send test to pass for the processor.
2024-05-22 18:50:11 -04:00
Luke Parker
ac709b2945 Correct processor docker tests encoding of Bitcoin addresses in OutInstructions 2024-05-21 08:49:57 -04:00
Luke Parker
a473800c26 More aggressive cargo update
Adds a few deps which are fine. Patches an old parking_lot(_core) version.
2024-05-21 08:07:32 -04:00
Luke Parker
09aac20293 Set the BufReader capacity to 0
Fixes issues with bitcoin.

We only use a BufReader as it's the only way to use a std::io::Read generic as
a bitcoin::io::Read object.
2024-05-21 07:06:13 -04:00
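
A sketch of the workaround, assuming (per the message above) that
std::io::BufReader is what bridges a std::io::Read into a bitcoin::io::Read:

fn decode_tx<R: std::io::Read>(
  reader: R,
) -> Result<bitcoin::Transaction, bitcoin::consensus::encode::Error> {
  // Capacity 0 so the BufReader never reads ahead of what's actually consumed
  let mut reader = std::io::BufReader::with_capacity(0, reader);
  bitcoin::consensus::Decodable::consensus_decode(&mut reader)
}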
Luke Parker
f93214012d Use ScriptBuf over Address where possible 2024-05-21 06:44:59 -04:00
Luke Parker
400319cd29 cargo update
Also updates our gems
2024-05-21 06:09:04 -04:00
Luke Parker
a0a7d63dad bitcoin 0.32 2024-05-21 05:27:01 -04:00
Luke Parker
fb7d12ee6e Short-circuit test_no_deadlock_in_multisig_completed if preconditions not met 2024-05-21 03:20:44 -04:00
Luke Parker
11ec9e3535 Ethereum processor docker tests, barring send
We need the TX publication relay thingy for send to work (though that is the
point the test fails at).
2024-05-21 00:29:33 -04:00
Luke Parker
ae8a27b876 Add our own alloy meta module to deduplicate alloy prefixes 2024-05-14 01:42:18 -04:00
Luke Parker
af79586488 Fill out Ethereum functions in the processor Docker tests 2024-05-14 01:33:55 -04:00
Luke Parker
d27d93480a Get processor signer/wallet tests working for Ethereum
They are handicapped by the fact Ethereum self-sends don't show up as outputs,
yet that's fundamental (unless we add a *harmful* fallback function).
2024-05-11 00:11:14 -04:00
Luke Parker
02c4417a46 Update no_deadlock_in_multisig test to set the initial key in the DB 2024-05-10 15:57:05 -04:00
Luke Parker
79a79db399 Update dockertest specification 2024-05-10 15:50:07 -04:00
Luke Parker
0c9dd5048e Processor scanner tests for Ethereum 2024-05-10 14:06:43 -04:00
Luke Parker
5501de1f3a Update to the latest alloy
Also makes various tweaks as necessary.
2024-05-10 14:06:43 -04:00
GitHub Actions
21123590bb Update nightly 2024-05-01 01:10:58 -04:00
Luke Parker
bc1dec7991 Move TRANSACTION_MESSAGE to 1 2024-04-28 04:04:53 -04:00
Luke Parker
cef63a631a Add a dev ethereum Docker setup
Also adds untested Dockerfiles for reth, lighthouse, and nimbus.
2024-04-24 09:30:54 -04:00
Luke Parker
d57fef8999 Slight documentation tweaks 2024-04-24 03:55:23 -04:00
Luke Parker
d1474e9188 Route top-level transfers through to the processor 2024-04-24 03:38:31 -04:00
Luke Parker
b39c751403 Reduce target peers a bit 2024-04-23 12:59:45 -04:00
Luke Parker
cc7202e0bf Correct recv to try_recv when exhausting channel 2024-04-23 12:40:21 -04:00
Luke Parker
19e68f7f75 Correct selection of to-try peers to prevent infinite loops when to-try < target 2024-04-23 12:04:30 -04:00
Luke Parker
d94c9a4a5e Use a constant for the target amount of peers 2024-04-23 11:59:51 -04:00
Luke Parker
43dc036660 Use a HashSet for which networks to try peer finding for
Prevents a flood of retries from individually failed attempts within a batch of
peer connection attempts.
2024-04-23 10:55:56 -04:00
Luke Parker
95591218bb Remove cbor 2024-04-23 07:01:07 -04:00
Luke Parker
7dd587a864 Inline broadcast_raw now that it doesn't have multiple callers 2024-04-23 06:44:21 -04:00
Luke Parker
023275bcb6 Properly diversify ReqResMessageKind/GossipMessageKind 2024-04-23 06:37:41 -04:00
Luke Parker
8cef9eff6f Move keep alive, heartbeat, block to request/response 2024-04-23 05:44:58 -04:00
Luke Parker
b5e22dca8f Correct no-std Monero after moving from ToString to Display 2024-04-23 05:25:08 -04:00
Luke Parker
a41329c027 Update clippy now that redundant imports has been reverted 2024-04-23 04:31:27 -04:00
Luke Parker
a25e6330bd Remove DLEq proofs from CLSAG multisig
1) Removes the key image DLEq on the Monero side of things, as the produced
   signature share serves as a DLEq for it.
2) Removes the nonce DLEqs from modular-frost as they're unnecessary for
   monero-serai. Updates documentation accordingly.

Without the proof that the nonces are internally consistent, the produced
signatures from modular-frost can be argued as a batch-verifiable CP93 DLEq
(R0, R1, s), or as a GSP for the CP93 DLEq statement (which naturally produces
(R0, R1, s)).

The lack of proving the nonces consistent does make the process weaker, yet
it's also unnecessary for the class of protocols this is intended to service.
To provide DLEqs for the nonces would be to provide PoKs for the nonce
commitments (in the traditional Schnorr case).
2024-04-21 23:01:32 -04:00
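
For context, the CP93 (Chaum-Pedersen) relation alluded to: with generators
$G, H$, nonce commitments $R_0 = rG$, $R_1 = rH$, challenge $c$, and response
$s = r + cx$, a verifier checks

$$sG = R_0 + cX, \qquad sH = R_1 + cI,$$

which holds exactly when $X = xG$ and $I = xH$ share the discrete logarithm
$x$. This is why the produced signature share itself argues the key image is
well-formed.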
Luke Parker
558a2bfa46 Slight tweaks to BP+ 2024-04-21 21:51:44 -04:00
Luke Parker
c73acb3d62 Log on new tendermint message debug -> trace 2024-04-21 19:28:21 -04:00
Luke Parker
933b17aa91 Revert coordinator/tributary to fd4f247917
#560 is causing notable CI failures, with its logs including slashes at 10x
the prior rate.
2024-04-21 10:16:12 -04:00
Luke Parker
5fa7e3d450 Line for prior commit 2024-04-21 08:55:29 -04:00
Luke Parker
749d783b1e Comment the insanely aggressive timeout future trace log 2024-04-21 08:53:35 -04:00
Luke Parker
5a3ea80943 Add missing continue to prevent dialing a node we're connected to 2024-04-21 08:36:52 -04:00
Luke Parker
fddbebc7c0 Replace expect with debug log 2024-04-21 08:02:34 -04:00
Luke Parker
e01848aa9e Correct boolean NOT on is_fresh_dial 2024-04-21 07:30:31 -04:00
Luke Parker
320b5627b5 Retry if initial dials fail, not just upon disconnect 2024-04-21 07:26:16 -04:00
Luke Parker
be7780e69d Restart coordinator peer finding upon disconnections 2024-04-21 07:02:49 -04:00
Luke Parker
0ddbaefb38 Correct timing around when we verify precommit signatures 2024-04-21 06:12:01 -04:00
Luke Parker
0f0db14f05 Ethereum Integration (#557)
* Clean up Ethereum

* Consistent contract address for deployed contracts

* Flesh out Router a bit

* Add a Deployer for DoS-less deployment

* Implement Router-finding

* Use CREATE2 helper present in ethers

* Move from CREATE2 to CREATE

Bit more streamlined for our use case.

* Document ethereum-serai

* Tidy tests a bit

* Test updateSeraiKey

* Use encodePacked for updateSeraiKey

* Take in the block hash to read state during

* Add a Sandbox contract to the Ethereum integration

* Add retrieval of transfers from Ethereum

* Add inInstruction function to the Router

* Augment our handling of InInstructions events with a check the transfer event also exists

* Have the Deployer error upon failed deployments

* Add --via-ir

* Make get_transaction test-only

We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).

* Modularize Eventuality

Almost fully-deprecates the Transaction trait for Completion. Replaces
Transaction ID with Claim.

* Modularize the Scheduler behind a trait

* Add an extremely basic account Scheduler

* Add nonce uses, key rotation to the account scheduler

* Only report the account Scheduler empty after transferring keys

Also ban payments to the branch/change/forward addresses.

* Make fns reliant on state test-only

* Start of an Ethereum integration for the processor

* Add a session to the Router to prevent updateSeraiKey replaying

This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous and a valid fault attributable event). It
just clarifies the formal arguments.

* Add a RouterCommand + SignMachine for producing it to coins/ethereum

* Ethereum which compiles

* Have branch/change/forward return an option

Also defines a UtxoNetwork extension trait for MAX_INPUTS.

* Make external_address exclusively a test fn

* Move the "account" scheduler to "smart contract"

* Remove ABI artifact

* Move refund/forward Plan creation into the Processor

We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee purposes,
without expecting it to be the actual forward Plan (which may be distinct if
the Plan pulls from global state, such as with a nonce); see the sketch after
this entry.

Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.

* Flesh out the Ethereum integration more

* Two commits ago, into the **Scheduler, not Processor

* Remove misc TODOs in SC Scheduler

* Add constructor to RouterCommandMachine

* RouterCommand read, pairing with the prior added write

* Further add serialization methods

* Have the Router's key included with the InInstruction

This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each, checking when they interlace.

This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one. This allows pruning state,
only keeping the event tree. Ideally, we'd also introduce a cache to reduce the
cost of the filter (small in events yielded, long in blocks searched).

Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
being made under one key broken by it being received by another key.

* Add read/write to InInstruction

* Abstract the ABI for Call/OutInstruction in ethereum-serai

* Fill out signable_transaction for Ethereum

* Move ethereum-serai to alloy

Resolves #331.

* Use the opaque sol macro instead of generated files

* Move the processor over to the now-alloy-based ethereum-serai

* Use the ecrecover provided by alloy

* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)

* Always use the latest keys for SC scheduled plans

* get_eventuality_completions for Ethereum

* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests

This doesn't support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.

* Add alloy-simple-request-transport to the GH workflows

* cargo update

* Clarify a few comments and make one check more robust

* Use a string for 27.0 in .github

* Remove optional from no-longer-optional dependencies in processor

* Add alloy to git deny exception

* Fix no longer optional specification in processor's binaries feature

* Use a version of foundry from 2024

* Correct fetching Bitcoin TXs in the processor docker tests

* Update rustls to resolve RUSTSEC warnings

* Use the monthly nightly foundry, not the deleted daily nightly
2024-04-21 06:02:12 -04:00
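
A minimal sketch of the shim_forward_plan idea referenced above (all types are
stand-ins, not the actual processor API):

struct Output { amount: u64 }
struct Plan { payments: Vec<([u8; 32], u64)> }

// Builds a Plan shaped like the eventual forward Plan, purely so the scan
// path can price its fee. The real forward Plan may differ if it pulls from
// global state (such as a nonce), so this is only valid for fee estimation.
fn shim_forward_plan(output: &Output, forward_address: [u8; 32]) -> Plan {
  Plan { payments: vec![(forward_address, output.amount)] }
}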
Luke Parker
43083dfd49 Remove redundant log from tendermint lib 2024-04-21 05:32:41 -04:00
Luke Parker
523d2ac911 Rewrite tendermint's message handling loop to much more clearly match the paper (#560)
* Rewrite tendermint's message handling loop to much more clearly match the paper

No longer checks relevant branches upon messages, yet all branches upon any
state change. This is slower, yet easier to review and likely avoids one or
two rare edge cases.

When reviewing, please see page 5 of https://arxiv.org/pdf/1807.04938.pdf.
Lines from the specified algorithm can be found in the code by searching for
"// L".

* Sane rebroadcasting of consensus messages

Instead of broadcasting the last n messages on the Tributary side of things, we
now have the machine rebroadcast the message tape for the current block.

* Only rebroadcast messages which didn't error in some way

* Only rebroadcast our own messages for tendermint
2024-04-21 05:30:31 -04:00
Luke Parker
fd4f247917 Correct log which didn't work as intended 2024-04-20 19:54:16 -04:00
Luke Parker
ac9e356af4 Correct log targets in tendermint-machine 2024-04-20 19:15:15 -04:00
Luke Parker
bba7d2a356 Better logs in tendermint-machine 2024-04-20 18:13:44 -04:00
Luke Parker
4c349ae605 Redo how tendermint-machine checks if messages were prior sent
Instead of saving, for every sent message, if it was sent or not, we track the
latest block/round participated in. These two keys are comprehensive to all
prior block/rounds. We then use three keys for the latest round's
proposal/prevote/precommit, enabling tracking current state as necessary to
prevent equivocations with just 5 keys.

The storage of the latest three messages also enables proper rebroadcasting of
the current round (not implemented in this commit).
2024-04-20 18:10:51 -04:00
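
A sketch of the five-key layout described (hypothetical structure, not the
actual DB schema):

struct LatestParticipation {
  block: u64,                 // key 1: latest block participated in
  round: u32,                 // key 2: latest round participated in
  proposal: Option<Vec<u8>>,  // key 3: proposal sent this round, if any
  prevote: Option<Vec<u8>>,   // key 4: prevote sent this round, if any
  precommit: Option<Vec<u8>>, // key 5: precommit sent this round, if any
}

The first two keys are comprehensive to all prior blocks/rounds; the last
three pin the current round's messages, together preventing equivocation.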
Luke Parker
a4428761f7 Bitcoin 27.0 2024-04-19 08:00:17 -04:00
Luke Parker
940e9553fd Add missing crates to GH workflows 2024-04-19 06:12:33 -04:00
Luke Parker
593aefd229 Extend time in sync test 2024-04-18 02:51:38 -04:00
Luke Parker
5830c2463d fmt 2024-04-18 02:03:28 -04:00
Luke Parker
bcc88c3e86 Don't broadcast added blocks
Online validators should inherently have them. Offline validators will receive
from the sync protocol.

This does somewhat eliminate the class of nodes who would follow the blockchain
(without validating it), yet that's fine for the performance benefit.
2024-04-18 01:48:11 -04:00
Luke Parker
fea16df567 Only reply to heartbeats after a certain distance 2024-04-18 01:39:34 -04:00
Luke Parker
4960c3222e Ensure we don't reply to stale heartbeats 2024-04-18 01:24:38 -04:00
Luke Parker
6b4df4f2c0 Only have some nodes respond to latent heartbeats
Also only respond if they're more than 2 blocks behind to minimize redundant
sending of blocks.
2024-04-17 21:54:10 -04:00
Luke Parker
dac46c8d7d Correct comment in VS pallet 2024-04-12 20:38:31 -04:00
expiredhotdog
db2e8376df use multiscalar_mul for CLSAG (#553)
* use multiscalar_mul for CLSAG

* use multiscalar_mul for CLSAG signing

* use OnceLock for basepoint precomputation
2024-04-12 19:52:56 -04:00
Luke Parker
33dd412e67 Add bootnode code prior used in testnet-internal (#554)
* Add bootnode code prior used in testnet-internal

Also performs the devnet/testnet differentation done since the testnet branch.

* Fixes

* fmt
2024-04-12 00:38:40 -04:00
Luke Parker
fcad402186 cargo update
Resolves deny error caused by h2.
2024-04-10 06:34:01 -04:00
Boog900
ab4d79628d fix CLSAG verification.
We were not setting c1 to the last calculated c during verification, instead keeping it set to the one provided in the signature.
2024-04-10 05:59:06 -04:00
Luke Parker
93be7a3067 Latest hyper-rustls, remove async-recursion
I didn't remove async-recursion when I updated the repo to 1.77, as I forgot we
used it in the tests. I still had to add some Box::pins, which may have been a
valid option on the prior Rust version, yet this at least resolves everything
now.

Also updates everything which doesn't introduce further depends.
2024-03-27 00:17:04 -04:00
noot
63521f6a96 implement Router.sol and associated functions (#92)
* start Router contract

* use calldata for function args

* var name changes

* start testing router contract

* test with and without abi.encode

* cleanup

* why tf isn't tests/utils working

* cleanup tests

* remove unused files

* wip

* fix router contract and tests, add set/update public keys funcs

* impl some Froms

* make execute non-reentrant

* cleanup

* update Router to use ReentrancyGuard

* update contract to use errors, use bitfield in Executed event, minor other fixes

* wip

* fix build issues from merge, tests ok

* Router.sol cleanup

* cleanup, uncomment stuff

* bump ethers.rs version to latest

* make contract functions take generic middleware

* update build script to assert no compiler errors

* hardcode pubkey parity into contract, update tests

* Polish coins/ethereum in various ways

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-03-24 09:00:54 -04:00
Luke Parker
3d855c75be Create group before adding to it 2024-03-24 00:18:40 -04:00
Luke Parker
07df9aa035 Ensure user is in a group 2024-03-24 00:03:32 -04:00
Luke Parker
bc44fbdbac Add TODO to coordinator P2P 2024-03-23 23:32:21 -04:00
Luke Parker
4cacce5e55 Perform key share amortization on-chain to avoid discrepancies 2024-03-23 23:32:14 -04:00
Luke Parker
7408e26781 Don't regenerate infrastructure keys
Enables running setup without invalidating the message queue
2024-03-23 23:32:04 -04:00
Luke Parker
1f92e1cbda Fixes for prior commit 2024-03-23 23:31:55 -04:00
Luke Parker
333a9571b8 Use volumes for message-queue/processors/coordinator/serai 2024-03-23 23:31:44 -04:00
Luke Parker
b7d49af1d5 Track total peer count in the coordinator 2024-03-23 18:02:48 -04:00
Luke Parker
5ea3b1bf97 Use " " instead of "" for the empty key so sh doesn't interpret it as falsy 2024-03-23 17:38:50 -04:00
Luke Parker
2a31d8552e Add empty string for the KEY to serai-client to use the default keystore 2024-03-23 16:48:12 -04:00
Luke Parker
bca3728a10 Randomly select an addr from the authority discovery 2024-03-23 00:09:23 -04:00
Luke Parker
4914420a37 Don't add as an explicit peer if already connected 2024-03-22 23:51:51 -04:00
Luke Parker
f11a08c436 Peer finding which won't get stuck on one specific network 2024-03-22 23:47:43 -04:00
Luke Parker
35b58a45bd Split peer finding into a dedicated task 2024-03-22 23:40:15 -04:00
Luke Parker
af9b1ad5f9 Initial pruning of backlogged consensus messages 2024-03-22 23:18:53 -04:00
Luke Parker
e5afcda76b Explicitly use "" for KEY within the tests
Causes the provided keystore to be used over our keystore.
2024-03-22 23:05:40 -04:00
j-berman
08c7c1b413 monero: reference updated PR in fee test comment 2024-03-22 22:29:55 -04:00
Luke Parker
bdf5a66e95 Correct Serai key provision 2024-03-22 17:11:58 -04:00
Luke Parker
e861859dec Update EpochDuration in runtime 2024-03-22 16:18:01 -04:00
Luke Parker
6658d95c85 Extend orchestration as actually needed for testnet
Contains various bug fixes.
2024-03-22 16:15:26 -04:00
Luke Parker
2f07d04d88 Extend timeout for rebroadcast of consensus messages in coordinator 2024-03-22 16:06:31 -04:00
Luke Parker
e0259f2fe5 Add TODO re: Monero 2024-03-22 16:06:04 -04:00
Luke Parker
fab7a0a7cb Use the deterministically built wasm
Has the Dockerfile output to a volume. Has the node use the wasm from the
volume, if it exists.
2024-03-22 02:19:09 -04:00
Luke Parker
84cee06ac1 Rust 1.77 2024-03-21 20:09:33 -04:00
Luke Parker
c706d8664a Use OptimisticTransactionDb
Exposes flush calls.

Adds safety, at the cost of a panic risk, as multiple TXNs simultaneously
writing to a key will now cause a panic. This should be fine and the safety is
appreciated.
2024-03-20 23:42:40 -04:00
Luke Parker
1f2b9376f9 zstd 0.13 2024-03-20 21:53:57 -04:00
Luke Parker
13b147cbf6 Reduce coordinator tests contention re: cosign messages 2024-03-20 08:23:23 -04:00
Luke Parker
4a6496a90b Add slightly nicer formatting re: Protocol Changes doc 2024-03-12 00:59:51 -04:00
Luke Parker
9662d94bf9 Document the signals pallet in the user-facing docs 2024-03-12 00:56:06 -04:00
Luke Parker
233164cefd Flesh out docs more 2024-03-11 23:51:44 -04:00
Luke Parker
442d8c02fc Add docs, correct URL 2024-03-11 20:00:01 -04:00
Luke Parker
d1be9eaa2d Change baseurl to /docs 2024-03-11 18:02:54 -04:00
Luke Parker
c32d3413ba Add just-the-docs based user-facing documentation 2024-03-11 17:55:27 -04:00
Luke Parker
a3a009a7e9 Move docs to spec 2024-03-11 17:55:05 -04:00
Luke Parker
0889627e60 Typo fix for prior commit 2024-03-11 02:20:51 -04:00
Luke Parker
ace41c79fd Tidy the BlockHasEvents cache 2024-03-11 01:44:00 -04:00
Luke Parker
f7d16b3fc5 Fix 0 - 1 which caused a panic 2024-03-09 05:37:41 -05:00
Luke Parker
157acc47ca More aggressive WAL parameters 2024-03-09 05:05:43 -05:00
Luke Parker
ae0ecf9efe Disable jemalloc for rocksdb 0.22 to fix windows builds 2024-03-09 04:26:24 -05:00
Luke Parker
6374d9987e Correct how we save the block to scan from 2024-03-09 03:48:44 -05:00
Luke Parker
c93f6bf901 Replace yield_now with sleep 100 to prevent hammering a task, despite still being over-eager 2024-03-09 03:34:31 -05:00
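
The substitution, roughly (tokio assumed, as used throughout the repo):

async fn backoff() {
  // yield_now().await would be re-polled almost immediately, hammering the
  // task; a 100ms sleep yields meaningfully, though it's still over-eager
  tokio::time::sleep(std::time::Duration::from_millis(100)).await;
}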
Luke Parker
61a81e53e1 Further optimize cosign DB 2024-03-09 03:31:06 -05:00
Luke Parker
68dc872b88 sync every txn 2024-03-09 03:18:52 -05:00
Luke Parker
89b237af7e Correct the return value of block_has_events 2024-03-09 02:44:04 -05:00
Luke Parker
2347bf5fd3 Bound cosign work and ensure it progresses forward even when cosigns don't occur
Should resolve the DB load observed on testnet.
2024-03-09 02:20:23 -05:00
Luke Parker
97f433c694 Redo how WAL/logs are limited by the DB
Adds a patch to the latest rocksdb.
2024-03-09 02:20:14 -05:00
Luke Parker
10f5ec51ca Explicitly limit RocksDB logs 2024-03-08 09:19:34 -05:00
Luke Parker
454bebaa77 Have the TendermintMachine domain-separate by genesis
Enables support for multiple machines over the same DB.
2024-03-08 01:22:02 -05:00
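
A sketch of the domain separation (hypothetical key scheme): every DB key is
prefixed with the machine's genesis, so multiple machines can share one DB
without collision.

fn db_key(genesis: [u8; 32], key: &[u8]) -> Vec<u8> {
  let mut full = Vec::with_capacity(32 + key.len());
  full.extend_from_slice(&genesis); // domain-separate by genesis
  full.extend_from_slice(key);
  full
}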
Luke Parker
0d569ff7a3 cargo update
Resolves the current deny warning.
2024-03-07 23:23:48 -05:00
Luke Parker
480acfd430 Fix machete 2024-03-07 23:00:17 -05:00
Luke Parker
e266bc2e32 Stop validators from equivocating on reboot
Part of https://github.com/serai-dex/serai/issues/345.

The lack of full DB persistence does mean enough nodes rebooting at the same
time may cause a halt. This will prevent slashes.
2024-03-07 22:56:35 -05:00
Luke Parker
6c8a0bfda6 Limit docker logs to 300MB per container 2024-03-06 21:49:55 -05:00
Luke Parker
06c23368f2 Mitigate https://github.com/serai-dex/serai/issues/539 by making keystore deterministic 2024-03-06 21:37:40 -05:00
Luke Parker
5629c94b8b Reconcile the two copies of scalar_vector.rs in monero-serai 2024-03-02 17:15:16 -05:00
876 changed files with 62403 additions and 33230 deletions

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
- default: 24.0.1
+ default: "27.0"
runs:
using: "composite"
@@ -37,4 +37,4 @@ runs:
- name: Bitcoin Regtest Daemon
shell: bash
- run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/bitcoin/run.sh -daemon
+ run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon

View File

@@ -42,8 +42,8 @@ runs:
shell: bash
run: |
cargo install svm-rs
- svm install 0.8.16
- svm use 0.8.16
+ svm install 0.8.26
+ svm use 0.8.26
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
- default: v0.18.3.1
+ default: v0.18.3.4
runs:
using: "composite"

View File

@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
- default: v0.18.3.1
+ default: v0.18.3.4
runs:
using: "composite"
@@ -43,4 +43,4 @@ runs:
- name: Monero Regtest Daemon
shell: bash
- run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/monero/run.sh --detach
+ run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/monero/run.sh --detach

View File

@@ -5,12 +5,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
- default: v0.18.3.1
+ default: v0.18.3.4
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
- default: 24.0.1
+ default: "27.1"
runs:
using: "composite"
@@ -19,9 +19,9 @@ runs:
uses: ./.github/actions/build-dependencies
- name: Install Foundry
- uses: foundry-rs/foundry-toolchain@cb603ca0abb544f301eaed59ac0baf579aa6aecf
+ uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
with:
- version: nightly-09fe3e041369a816365a020f715ad6f94dbce9f2
+ version: nightly-f625d0fa7c51e65b4bf1e8f7931cd1c6e2e285e9
cache: false
- name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
- nightly-2024-02-07
+ nightly-2024-07-01

View File

@@ -1,35 +0,0 @@
name: coins/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
workflow_dispatch:
jobs:
test-coins:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \
-p ethereum-serai \
-p monero-generators \
-p monero-serai

View File

@@ -27,5 +27,8 @@ jobs:
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p std-shims \
-p zalloc \
+ -p patchable-async-sleep \
  -p serai-db \
- -p serai-env
+ -p serai-env \
+ -p serai-task \
+ -p simple-request

View File

@@ -7,7 +7,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
@@ -18,7 +18,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
@@ -37,4 +37,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run coordinator Docker tests
- run: cd tests/coordinator && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+ run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-coordinator-tests

View File

@@ -35,6 +35,10 @@ jobs:
-p multiexp \
-p schnorr-signatures \
-p dleq \
+ -p generalized-bulletproofs \
+ -p generalized-bulletproofs-circuit-abstraction \
+ -p ec-divisors \
+ -p generalized-bulletproofs-ec-gadgets \
-p dkg \
-p modular-frost \
-p frost-schnorrkel

View File

@@ -19,4 +19,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run Full Stack Docker tests
- run: cd tests/full-stack && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+ run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-full-stack-tests

View File

@@ -73,6 +73,15 @@ jobs:
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
+ - name: Install foundry
+ uses: foundry-rs/foundry-toolchain@8f1998e9878d786675189ef566a2e4bf24869773
+ with:
+ version: nightly-41d4e5437107f6f42c7711123890147bc736a609
+ cache: false
+ - name: Run forge fmt
+ run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
machete:
runs-on: ubuntu-latest
steps:
@@ -81,3 +90,25 @@ jobs:
run: |
cargo install cargo-machete
cargo machete
+ slither:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+ - name: Slither
+ run: |
+ python3 -m pip install solc-select
+ solc-select install 0.8.26
+ solc-select use 0.8.26
+ python3 -m pip install slither-analyzer
+ slither --include-paths ./networks/ethereum/schnorr/contracts/Schnorr.sol
+ slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
+ slither processor/ethereum/deployer/contracts/Deployer.sol
+ slither processor/ethereum/erc20/contracts/IERC20.sol
+ cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/
+ cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
+ cd processor/ethereum/router/contracts
+ slither Router.sol

View File

@@ -33,4 +33,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run message-queue Docker tests
- run: cd tests/message-queue && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+ run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-message-queue-tests

View File

@@ -5,12 +5,12 @@ on:
branches:
- develop
paths:
- "coins/monero/**"
- "networks/monero/**"
- "processor/**"
pull_request:
paths:
- "coins/monero/**"
- "networks/monero/**"
- "processor/**"
workflow_dispatch:
@@ -26,7 +26,22 @@ jobs:
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
- run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
+ run: |
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-io --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-generators --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-primitives --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-mlsag --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-clsag --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-borromean --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-bulletproofs --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-rpc --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-address --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-seed --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package polyseed --lib
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --lib
# Doesn't run unit tests with features as the tests workflow will
@@ -35,7 +50,7 @@ jobs:
# Test against all supported protocol versions
strategy:
matrix:
- version: [v0.17.3.2, v0.18.2.0]
+ version: [v0.17.3.2, v0.18.3.4]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
@@ -46,11 +61,17 @@ jobs:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
# Runs with the binaries feature so the binaries build
# https://github.com/rust-lang/cargo/issues/8396
- run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --features binaries --test '*'
+ run: |
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --test '*'
- name: Run Integration Tests
# Don't run if the the tests workflow also will
- if: ${{ matrix.version != 'v0.18.2.0' }}
- run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
+ if: ${{ matrix.version != 'v0.18.3.4' }}
+ run: |
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-simple-request-rpc --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet --all-features --test '*'
+ GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-wallet-util --all-features --test '*'

.github/workflows/msrv.yml (new file, 259 lines)
View File

@@ -0,0 +1,259 @@
name: Weekly MSRV Check
on:
schedule:
- cron: "0 0 * * 0"
workflow_dispatch:
jobs:
msrv-common:
name: Run cargo msrv on common
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on common
run: |
cargo msrv verify --manifest-path common/zalloc/Cargo.toml
cargo msrv verify --manifest-path common/std-shims/Cargo.toml
cargo msrv verify --manifest-path common/env/Cargo.toml
cargo msrv verify --manifest-path common/db/Cargo.toml
cargo msrv verify --manifest-path common/task/Cargo.toml
cargo msrv verify --manifest-path common/request/Cargo.toml
cargo msrv verify --manifest-path common/patchable-async-sleep/Cargo.toml
msrv-crypto:
name: Run cargo msrv on crypto
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on crypto
run: |
cargo msrv verify --manifest-path crypto/transcript/Cargo.toml
cargo msrv verify --manifest-path crypto/ff-group-tests/Cargo.toml
cargo msrv verify --manifest-path crypto/dalek-ff-group/Cargo.toml
cargo msrv verify --manifest-path crypto/ed448/Cargo.toml
cargo msrv verify --manifest-path crypto/multiexp/Cargo.toml
cargo msrv verify --manifest-path crypto/dleq/Cargo.toml
cargo msrv verify --manifest-path crypto/ciphersuite/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorr/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/generalized-bulletproofs/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/circuit-abstraction/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/divisors/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/ec-gadgets/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/embedwards25519/Cargo.toml
cargo msrv verify --manifest-path crypto/evrf/secq256k1/Cargo.toml
cargo msrv verify --manifest-path crypto/dkg/Cargo.toml
cargo msrv verify --manifest-path crypto/frost/Cargo.toml
cargo msrv verify --manifest-path crypto/schnorrkel/Cargo.toml
msrv-networks:
name: Run cargo msrv on networks
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on networks
run: |
cargo msrv verify --manifest-path networks/bitcoin/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/build-contracts/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/schnorr/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/alloy-simple-request-transport/Cargo.toml
cargo msrv verify --manifest-path networks/ethereum/relayer/Cargo.toml --features parity-db
cargo msrv verify --manifest-path networks/monero/io/Cargo.toml
cargo msrv verify --manifest-path networks/monero/generators/Cargo.toml
cargo msrv verify --manifest-path networks/monero/primitives/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/mlsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/clsag/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/borromean/Cargo.toml
cargo msrv verify --manifest-path networks/monero/ringct/bulletproofs/Cargo.toml
cargo msrv verify --manifest-path networks/monero/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/Cargo.toml
cargo msrv verify --manifest-path networks/monero/rpc/simple-request/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/address/Cargo.toml
cargo msrv verify --manifest-path networks/monero/wallet/Cargo.toml
cargo msrv verify --manifest-path networks/monero/verify-chain/Cargo.toml
msrv-message-queue:
name: Run cargo msrv on message-queue
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path message-queue/Cargo.toml --features parity-db
msrv-processor:
name: Run cargo msrv on processor
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on processor
run: |
cargo msrv verify --manifest-path processor/view-keys/Cargo.toml
cargo msrv verify --manifest-path processor/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/messages/Cargo.toml
cargo msrv verify --manifest-path processor/scanner/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/smart-contract/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/standard/Cargo.toml
cargo msrv verify --manifest-path processor/scheduler/utxo/transaction-chaining/Cargo.toml
cargo msrv verify --manifest-path processor/key-gen/Cargo.toml
cargo msrv verify --manifest-path processor/frost-attempt-manager/Cargo.toml
cargo msrv verify --manifest-path processor/signers/Cargo.toml
cargo msrv verify --manifest-path processor/bin/Cargo.toml --features parity-db
cargo msrv verify --manifest-path processor/bitcoin/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/test-primitives/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/erc20/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/deployer/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/router/Cargo.toml
cargo msrv verify --manifest-path processor/ethereum/Cargo.toml
cargo msrv verify --manifest-path processor/monero/Cargo.toml
msrv-coordinator:
name: Run cargo msrv on coordinator
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on coordinator
run: |
cargo msrv verify --manifest-path coordinator/tributary-sdk/tendermint/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary-sdk/Cargo.toml
cargo msrv verify --manifest-path coordinator/cosign/Cargo.toml
cargo msrv verify --manifest-path coordinator/substrate/Cargo.toml
cargo msrv verify --manifest-path coordinator/tributary/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/p2p/libp2p/Cargo.toml
cargo msrv verify --manifest-path coordinator/Cargo.toml
msrv-substrate:
name: Run cargo msrv on substrate
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on substrate
run: |
cargo msrv verify --manifest-path substrate/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/coins/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/dex/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/economic-security/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/genesis-liquidity/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/in-instructions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/validator-sets/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/emissions/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/primitives/Cargo.toml
cargo msrv verify --manifest-path substrate/signals/pallet/Cargo.toml
cargo msrv verify --manifest-path substrate/abi/Cargo.toml
cargo msrv verify --manifest-path substrate/client/Cargo.toml
cargo msrv verify --manifest-path substrate/runtime/Cargo.toml
cargo msrv verify --manifest-path substrate/node/Cargo.toml
msrv-orchestration:
name: Run cargo msrv on orchestration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on message-queue
run: |
cargo msrv verify --manifest-path orchestration/Cargo.toml
msrv-mini:
name: Run cargo msrv on mini
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install cargo msrv
run: cargo install --locked cargo-msrv
- name: Run cargo msrv on mini
run: |
cargo msrv verify --manifest-path mini/Cargo.toml

.github/workflows/networks-tests.yml (new file, 52 lines)
View File

@@ -0,0 +1,52 @@
name: networks/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "networks/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "networks/**"
workflow_dispatch:
jobs:
test-networks:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \
-p build-solidity-contracts \
-p ethereum-schnorr-contract \
-p alloy-simple-request-transport \
-p serai-ethereum-relayer \
-p monero-io \
-p monero-generators \
-p monero-primitives \
-p monero-mlsag \
-p monero-clsag \
-p monero-borromean \
-p monero-bulletproofs \
-p monero-serai \
-p monero-rpc \
-p monero-simple-request-rpc \
-p monero-address \
-p monero-wallet \
-p monero-seed \
-p polyseed \
-p monero-wallet-util \
-p monero-serai-verify-chain

View File

@@ -7,14 +7,14 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "tests/no-std/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "tests/no-std/**"
workflow_dispatch:
@@ -32,4 +32,4 @@ jobs:
run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
- name: Verify no-std builds
- run: cd tests/no-std && CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf
+ run: CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf -p serai-no-std-tests

.github/workflows/pages.yml (new file, 90 lines)
View File

@@ -0,0 +1,90 @@
# MIT License
#
# Copyright (c) 2022 just-the-docs
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll site to Pages
on:
push:
branches:
- "develop"
paths:
- "docs/**"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow one concurrent deployment
concurrency:
group: "pages"
cancel-in-progress: true
jobs:
# Build job
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Ruby
uses: ruby/setup-ruby@v1
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
- name: Build with Jekyll
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: "docs/_site/"
# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2


@@ -7,7 +7,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
@@ -18,7 +18,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
@@ -37,4 +37,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run processor Docker tests
run: cd tests/processor && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-processor-tests


@@ -33,4 +33,4 @@ jobs:
uses: ./.github/actions/build-dependencies
- name: Run Reproducible Runtime tests
run: cd tests/reproducible-runtime && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests


@@ -7,7 +7,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
@@ -17,7 +17,7 @@ on:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "networks/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
@@ -39,10 +39,35 @@ jobs:
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-message-queue \
-p serai-processor-messages \
-p serai-processor \
-p serai-processor-key-gen \
-p serai-processor-view-keys \
-p serai-processor-frost-attempt-manager \
-p serai-processor-primitives \
-p serai-processor-scanner \
-p serai-processor-scheduler-primitives \
-p serai-processor-utxo-scheduler-primitives \
-p serai-processor-utxo-scheduler \
-p serai-processor-transaction-chaining-scheduler \
-p serai-processor-smart-contract-scheduler \
-p serai-processor-signers \
-p serai-processor-bin \
-p serai-bitcoin-processor \
-p serai-processor-ethereum-primitives \
-p serai-processor-ethereum-test-primitives \
-p serai-processor-ethereum-deployer \
-p serai-processor-ethereum-router \
-p serai-processor-ethereum-erc20 \
-p serai-ethereum-processor \
-p serai-monero-processor \
-p tendermint-machine \
-p tributary-chain \
-p tributary-sdk \
-p serai-cosign \
-p serai-coordinator-substrate \
-p serai-coordinator-tributary \
-p serai-coordinator-p2p \
-p serai-coordinator-libp2p-p2p \
-p serai-coordinator \
-p serai-orchestrator \
-p serai-docker-tests
test-substrate:
@@ -62,9 +87,16 @@ jobs:
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-genesis-liquidity-primitives \
-p serai-genesis-liquidity-pallet \
-p serai-emissions-primitives \
-p serai-emissions-pallet \
-p serai-economic-security-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-primitives \
-p serai-signals-pallet \
-p serai-abi \
-p serai-runtime \
-p serai-node

Cargo.lock (generated): diff suppressed because it is too large.


@@ -2,8 +2,10 @@
resolver = "2"
members = [
# Version patches
"patches/parking_lot_core",
"patches/parking_lot",
"patches/zstd",
"patches/proc-macro-crate",
"patches/rocksdb",
# std patches
"patches/matches",
@@ -15,8 +17,10 @@ members = [
"common/std-shims",
"common/zalloc",
"common/patchable-async-sleep",
"common/db",
"common/env",
"common/task",
"common/request",
"crypto/transcript",
@@ -27,25 +31,78 @@ members = [
"crypto/ciphersuite",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/evrf/secq256k1",
"crypto/evrf/embedwards25519",
"crypto/evrf/generalized-bulletproofs",
"crypto/evrf/circuit-abstraction",
"crypto/evrf/divisors",
"crypto/evrf/ec-gadgets",
"crypto/dkg",
"crypto/frost",
"crypto/schnorrkel",
"coins/bitcoin",
"coins/ethereum",
"coins/monero/generators",
"coins/monero",
"networks/bitcoin",
"networks/ethereum/build-contracts",
"networks/ethereum/schnorr",
"networks/ethereum/alloy-simple-request-transport",
"networks/ethereum/relayer",
"networks/monero/io",
"networks/monero/generators",
"networks/monero/primitives",
"networks/monero/ringct/mlsag",
"networks/monero/ringct/clsag",
"networks/monero/ringct/borromean",
"networks/monero/ringct/bulletproofs",
"networks/monero",
"networks/monero/rpc",
"networks/monero/rpc/simple-request",
"networks/monero/wallet/address",
"networks/monero/wallet",
"networks/monero/wallet/seed",
"networks/monero/wallet/polyseed",
"networks/monero/wallet/util",
"networks/monero/verify-chain",
"message-queue",
"processor/messages",
"processor",
"coordinator/tributary/tendermint",
"processor/key-gen",
"processor/view-keys",
"processor/frost-attempt-manager",
"processor/primitives",
"processor/scanner",
"processor/scheduler/primitives",
"processor/scheduler/utxo/primitives",
"processor/scheduler/utxo/standard",
"processor/scheduler/utxo/transaction-chaining",
"processor/scheduler/smart-contract",
"processor/signers",
"processor/bin",
"processor/bitcoin",
"processor/ethereum/primitives",
"processor/ethereum/test-primitives",
"processor/ethereum/deployer",
"processor/ethereum/router",
"processor/ethereum/erc20",
"processor/ethereum",
"processor/monero",
"coordinator/tributary-sdk/tendermint",
"coordinator/tributary-sdk",
"coordinator/cosign",
"coordinator/substrate",
"coordinator/tributary",
"coordinator/p2p",
"coordinator/p2p/libp2p",
"coordinator",
"substrate/primitives",
@@ -53,12 +110,22 @@ members = [
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/dex/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/genesis-liquidity/primitives",
"substrate/genesis-liquidity/pallet",
"substrate/emissions/primitives",
"substrate/emissions/pallet",
"substrate/economic-security/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
@@ -87,18 +154,32 @@ members = [
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
ff = { opt-level = 3 }
group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 }
secp256k1 = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-serai = { opt-level = 3 }
secq256k1 = { opt-level = 3 }
embedwards25519 = { opt-level = 3 }
generalized-bulletproofs = { opt-level = 3 }
generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
ec-divisors = { opt-level = 3 }
generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
dkg = { opt-level = 3 }
monero-generators = { opt-level = 3 }
monero-borromean = { opt-level = 3 }
monero-bulletproofs = { opt-level = 3 }
monero-mlsag = { opt-level = 3 }
monero-clsag = { opt-level = 3 }
[profile.release]
panic = "unwind"
@@ -107,13 +188,12 @@ panic = "unwind"
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# Needed due to dockertest's usage of `Rc`s when we need `Arc`s
dockertest = { git = "https://github.com/kayabaNerve/dockertest-rs", branch = "arc" }
parking_lot_core = { path = "patches/parking_lot_core" }
parking_lot = { path = "patches/parking_lot" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# proc-macro-crate 2 binds to an old version of toml for msrv so we patch to 3
proc-macro-crate = { path = "patches/proc-macro-crate" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
@@ -128,8 +208,12 @@ matches = { path = "patches/matches" }
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
# The official pasta_curves repo doesn't support Zeroize
pasta_curves = { git = "https://github.com/kayabaNerve/pasta_curves", rev = "a46b5be95cacbff54d06aad8d3bbcba42e05d616" }
[workspace.lints.clippy]
unwrap_or_default = "allow"
map_unwrap_or = "allow"
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
@@ -157,7 +241,6 @@ manual_instant_elapsed = "deny"
manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
@@ -169,6 +252,7 @@ range_plus_one = "deny"
redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
string_slice = "deny"
unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny"


@@ -5,13 +5,16 @@ Bitcoin, Ethereum, DAI, and Monero, offering a liquidity-pool-based trading
experience. Funds are stored in an economically secured threshold-multisig
wallet.
[Getting Started](docs/Getting%20Started.md)
[Getting Started](spec/Getting%20Started.md)
### Layout
- `audits`: Audits for various parts of Serai.
- `docs`: Documentation on the Serai protocol.
- `spec`: The specification of the Serai protocol, both internally and as
networked.
- `docs`: User-facing documentation on the Serai protocol.
- `common`: Crates containing utilities common to a variety of areas under
Serai, none neatly fitting under another category.
@@ -21,7 +24,7 @@ wallet.
infrastructure, to our IETF-compliant FROST implementation, to a DLEq proof as
needed for Bitcoin-Monero atomic swaps.
- `coins`: Various coin libraries intended for usage in Serai yet also by the
- `networks`: Various libraries intended for usage in Serai yet also by the
wider community. This means they will always support the functionality Serai
needs, yet won't disadvantage other use cases when possible.


@@ -1,6 +0,0 @@
# Cypher Stack /coins/bitcoin Audit, August 2023
This audit was over the /coins/bitcoin folder. It is encompassing up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.


@@ -0,0 +1,7 @@
# Cypher Stack /networks/bitcoin Audit, August 2023
This audit was over the `/networks/bitcoin` folder (at the time located at
`/coins/bitcoin`). It encompasses up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.


@@ -1,3 +0,0 @@
# solidity build outputs
cache
artifacts


@@ -1,42 +0,0 @@
[package]
name = "ethereum-serai"
version = "0.1.0"
description = "An Ethereum library supporting Schnorr signing and on-chain verification"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
edition = "2021"
publish = false
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false }
eyre = { version = "0.6", default-features = false }
sha3 = { version = "0.10", default-features = false, features = ["std"] }
group = { version = "0.13", default-features = false }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
ethers-core = { version = "2", default-features = false }
ethers-providers = { version = "2", default-features = false }
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
serde = { version = "1", default-features = false, features = ["std"] }
serde_json = { version = "1", default-features = false, features = ["std"] }
sha2 = { version = "0.10", default-features = false, features = ["std"] }
tokio = { version = "1", features = ["macros"] }


@@ -1,9 +0,0 @@
# Ethereum
This package contains Ethereum-related functionality, specifically deploying and
interacting with Serai contracts.
### Dependencies
- solc
- [Foundry](https://github.com/foundry-rs/foundry)


@@ -1,15 +0,0 @@
fn main() {
println!("cargo:rerun-if-changed=contracts");
println!("cargo:rerun-if-changed=artifacts");
#[rustfmt::skip]
let args = [
"--base-path", ".",
"-o", "./artifacts", "--overwrite",
"--bin", "--abi",
"--optimize",
"./contracts/Schnorr.sol"
];
assert!(std::process::Command::new("solc").args(args).status().unwrap().success());
}


@@ -1,36 +0,0 @@
//SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
// see https://github.com/noot/schnorr-verify for implementation details
contract Schnorr {
// secp256k1 group order
uint256 constant public Q =
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
// parity := public key y-coord parity (27 or 28)
// px := public key x-coord
// message := 32-byte message
// s := schnorr signature
// e := schnorr signature challenge
function verify(
uint8 parity,
bytes32 px,
bytes32 message,
bytes32 s,
bytes32 e
) public view returns (bool) {
// ecrecover = (m, v, r, s);
bytes32 sp = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
bytes32 ep = bytes32(Q - mulmod(uint256(e), uint256(px), Q));
require(sp != 0);
// the ecrecover precompile implementation checks that the `r` and `s`
// inputs are non-zero (in this case, `px` and `ep`), thus we don't need to
// check if they're zero.
address R = ecrecover(sp, parity, px, ep);
require(R != address(0), "ecrecover failed");
return e == keccak256(
abi.encodePacked(R, uint8(parity), px, block.chainid, message)
);
}
}
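For context on the trick this since-removed contract relies on (a sketch of the algebra, not text from the original file): `ecrecover(z, v, r, t)` returns the address of the point `r^-1 * (t*P - z*G)`, where `P` is the point with x-coordinate `r` and y-parity `v`. Passing `z = sp = -s*px`, `r = px`, and `t = ep = -e*px`, with the public key `A` standing in as `P`, recovers the address of `px^-1 * (-e*px*A + s*px*G) = s*G - e*A`, which is the Schnorr nonce point `R` for a valid signature. The closing `keccak256` comparison is then the standard Schnorr check that the challenge `e` commits to `R`, the key, the chain ID, and the message.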


@@ -1,36 +0,0 @@
use thiserror::Error;
use eyre::{eyre, Result};
use ethers_providers::{Provider, Http};
use ethers_contract::abigen;
use crate::crypto::ProcessedSignature;
#[derive(Error, Debug)]
pub enum EthereumError {
#[error("failed to verify Schnorr signature")]
VerificationError,
}
abigen!(Schnorr, "./artifacts/Schnorr.abi");
pub async fn call_verify(
contract: &Schnorr<Provider<Http>>,
params: &ProcessedSignature,
) -> Result<()> {
if contract
.verify(
params.parity + 27,
params.px.to_bytes().into(),
params.message,
params.s.to_bytes().into(),
params.e.to_bytes().into(),
)
.call()
.await?
{
Ok(())
} else {
Err(eyre!(EthereumError::VerificationError))
}
}


@@ -1,107 +0,0 @@
use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint, sec1::ToEncodedPoint,
},
AffinePoint, ProjectivePoint, Scalar, U256,
};
use frost::{algorithm::Hram, curve::Secp256k1};
pub fn keccak256(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub fn address(point: &ProjectivePoint) -> [u8; 20] {
let encoded_point = point.to_encoded_point(false);
keccak256(&encoded_point.as_ref()[1 .. 65])[12 .. 32].try_into().unwrap()
}
pub fn ecrecover(message: Scalar, v: u8, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
if r.is_zero().into() || s.is_zero().into() {
return None;
}
#[allow(non_snake_case)]
let R = AffinePoint::decompress(&r.to_bytes(), v.into());
#[allow(non_snake_case)]
if let Some(R) = Option::<AffinePoint>::from(R) {
#[allow(non_snake_case)]
let R = ProjectivePoint::from(R);
let r = r.invert().unwrap();
let u1 = ProjectivePoint::GENERATOR * (-message * r);
let u2 = R * (s * r);
let key: ProjectivePoint = u1 + u2;
if !bool::from(key.is_identity()) {
return Some(address(&key));
}
}
None
}
#[derive(Clone, Default)]
pub struct EthereumHram {}
impl Hram<Secp256k1> for EthereumHram {
#[allow(non_snake_case)]
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
let a_encoded_point = A.to_encoded_point(true);
let mut a_encoded = a_encoded_point.as_ref().to_owned();
a_encoded[0] += 25; // Ethereum uses 27/28 for point parity
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.append(&mut m.to_vec());
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
pub struct ProcessedSignature {
pub s: Scalar,
pub px: Scalar,
pub parity: u8,
pub message: [u8; 32],
pub e: Scalar,
}
#[allow(non_snake_case)]
pub fn preprocess_signature_for_ecrecover(
m: [u8; 32],
R: &ProjectivePoint,
s: Scalar,
A: &ProjectivePoint,
chain_id: U256,
) -> (Scalar, Scalar) {
let processed_sig = process_signature_for_contract(m, R, s, A, chain_id);
let sr = processed_sig.s.mul(&processed_sig.px).negate();
let er = processed_sig.e.mul(&processed_sig.px).negate();
(sr, er)
}
#[allow(non_snake_case)]
pub fn process_signature_for_contract(
m: [u8; 32],
R: &ProjectivePoint,
s: Scalar,
A: &ProjectivePoint,
chain_id: U256,
) -> ProcessedSignature {
let encoded_pk = A.to_encoded_point(true);
let px = &encoded_pk.as_ref()[1 .. 33];
let px_scalar = Scalar::reduce(U256::from_be_slice(px));
let e = EthereumHram::hram(R, A, &[chain_id.to_be_byte_array().as_slice(), &m].concat());
ProcessedSignature {
s,
px: px_scalar,
parity: &encoded_pk.as_ref()[0] - 2,
#[allow(non_snake_case)]
message: m,
e,
}
}

View File

@@ -1,2 +0,0 @@
pub mod contract;
pub mod crypto;


@@ -1,128 +0,0 @@
use std::{convert::TryFrom, sync::Arc, time::Duration, fs::File};
use rand_core::OsRng;
use ::k256::{
elliptic_curve::{bigint::ArrayEncoding, PrimeField},
U256,
};
use ethers_core::{
types::Signature,
abi::Abi,
utils::{keccak256, Anvil, AnvilInstance},
};
use ethers_contract::ContractFactory;
use ethers_providers::{Middleware, Provider, Http};
use frost::{
curve::Secp256k1,
Participant,
algorithm::IetfSchnorr,
tests::{key_gen, algorithm_machines, sign},
};
use ethereum_serai::{
crypto,
contract::{Schnorr, call_verify},
};
// TODO: Replace with a contract deployment from an unknown account, so the environment solely has
// to fund the deployer, not create/pass a wallet
pub async fn deploy_schnorr_verifier_contract(
chain_id: u32,
client: Arc<Provider<Http>>,
wallet: &k256::ecdsa::SigningKey,
) -> eyre::Result<Schnorr<Provider<Http>>> {
let abi: Abi = serde_json::from_reader(File::open("./artifacts/Schnorr.abi").unwrap()).unwrap();
let hex_bin_buf = std::fs::read_to_string("./artifacts/Schnorr.bin").unwrap();
let hex_bin =
if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf };
let bin = hex::decode(hex_bin).unwrap();
let factory = ContractFactory::new(abi, bin.into(), client.clone());
let mut deployment_tx = factory.deploy(())?.tx;
deployment_tx.set_chain_id(chain_id);
deployment_tx.set_gas(500_000);
let (max_fee_per_gas, max_priority_fee_per_gas) = client.estimate_eip1559_fees(None).await?;
deployment_tx.as_eip1559_mut().unwrap().max_fee_per_gas = Some(max_fee_per_gas);
deployment_tx.as_eip1559_mut().unwrap().max_priority_fee_per_gas = Some(max_priority_fee_per_gas);
let sig_hash = deployment_tx.sighash();
let (sig, rid) = wallet.sign_prehash_recoverable(sig_hash.as_ref()).unwrap();
// EIP-155 v
let mut v = u64::from(rid.to_byte());
assert!((v == 0) || (v == 1));
v += u64::from((chain_id * 2) + 35);
let r = sig.r().to_repr();
let r_ref: &[u8] = r.as_ref();
let s = sig.s().to_repr();
let s_ref: &[u8] = s.as_ref();
let deployment_tx = deployment_tx.rlp_signed(&Signature { r: r_ref.into(), s: s_ref.into(), v });
let pending_tx = client.send_raw_transaction(deployment_tx).await?;
let mut receipt;
while {
receipt = client.get_transaction_receipt(pending_tx.tx_hash()).await?;
receipt.is_none()
} {
tokio::time::sleep(Duration::from_secs(6)).await;
}
let receipt = receipt.unwrap();
assert!(receipt.status == Some(1.into()));
let contract = Schnorr::new(receipt.contract_address.unwrap(), client.clone());
Ok(contract)
}
async fn deploy_test_contract() -> (u32, AnvilInstance, Schnorr<Provider<Http>>) {
let anvil = Anvil::new().spawn();
let provider =
Provider::<Http>::try_from(anvil.endpoint()).unwrap().interval(Duration::from_millis(10u64));
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
(chain_id, anvil, deploy_schnorr_verifier_contract(chain_id, client, &wallet).await.unwrap())
}
#[tokio::test]
async fn test_deploy_contract() {
deploy_test_contract().await;
}
#[tokio::test]
async fn test_ecrecover_hack() {
let (chain_id, _anvil, contract) = deploy_test_contract().await;
let chain_id = U256::from(chain_id);
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, crypto::EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let mut processed_sig =
crypto::process_signature_for_contract(hashed_message, &sig.R, sig.s, &group_key, chain_id);
call_verify(&contract, &processed_sig).await.unwrap();
// test invalid signature fails
processed_sig.message[0] = 0;
assert!(call_verify(&contract, &processed_sig).await.is_err());
}


@@ -1,87 +0,0 @@
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint},
ProjectivePoint, Scalar, U256,
};
use frost::{curve::Secp256k1, Participant};
use ethereum_serai::crypto::*;
#[test]
fn test_ecrecover() {
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use k256::ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey};
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
const MESSAGE: &[u8] = b"Hello, World!";
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
assert_eq!(
ecrecover(hash_to_scalar(MESSAGE), recovery_id.unwrap().is_y_odd().into(), *sig.r(), *sig.s())
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
#[test]
fn test_signing() {
use frost::{
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let _group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig =
sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE);
}
#[test]
fn test_ecrecover_hack() {
use frost::{
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&Participant::new(1).unwrap()].group_key();
let group_key_encoded = group_key.to_encoded_point(true);
let group_key_compressed = group_key_encoded.as_ref();
let group_key_x = Scalar::reduce(U256::from_be_slice(&group_key_compressed[1 .. 33]));
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let chain_id = U256::ONE;
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let (sr, er) =
preprocess_signature_for_ecrecover(hashed_message, &sig.R, sig.s, &group_key, chain_id);
let q = ecrecover(sr, group_key_compressed[0] - 2, group_key_x, er).unwrap();
assert_eq!(q, address(&sig.R));
}


@@ -1,2 +0,0 @@
mod contract;
mod crypto;


@@ -1,113 +0,0 @@
[package]
name = "monero-serai"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false }
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
sha3 = { version = "0.10", default-features = false }
pbkdf2 = { version = "0.12", features = ["simple"], default-features = false }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.4", default-features = false, features = ["batch"] }
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.4", default-features = false, features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["ed25519"], optional = true }
monero-generators = { path = "generators", version = "0.4", default-features = false }
async-lock = { version = "3", default-features = false, optional = true }
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive", "alloc"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "2", default-features = false, features = ["check"] }
# Used for the provided HTTP RPC
digest_auth = { version = "0.3", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls"], optional = true }
tokio = { version = "1", default-features = false, optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
monero-generators = { path = "generators", version = "0.4", default-features = false }
[dev-dependencies]
tokio = { version = "1", features = ["sync", "macros"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand/std",
"rand_chacha/std",
"rand_distr/std",
"sha3/std",
"pbkdf2/std",
"multiexp/std",
"transcript/std",
"dleq/std",
"monero-generators/std",
"async-lock?/std",
"hex/std",
"serde/std",
"serde_json/std",
"base58-monero/std",
]
cache-distribution = ["async-lock"]
http-rpc = ["digest_auth", "simple-request", "tokio"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio/rt-multi-thread", "tokio/macros", "http-rpc"]
experimental = []
default = ["std", "http-rpc"]


@@ -1,49 +0,0 @@
# monero-serai
A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet will not deprive functionality from other users.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There aren't active development efforts around
them, however.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable if a targeted attack occurs to detect if the
receiving wallet is monero-serai or wallet2. It also should be generally safe
for usage with remote nodes.
It won't hide from remote nodes that it's monero-serai, however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope to
monero-serai.
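
For a feel of the RPC surface described above, a minimal hedged sketch (it assumes the `http-rpc` feature and a tokio runtime; the placeholder URL and the calls mirror the verify-chain binary later in this diff):

```rust
use monero_serai::rpc::HttpRpc;

#[tokio::main]
async fn main() {
  // Connect to a Monero node over HTTP (the URL is a placeholder).
  let rpc = HttpRpc::new("http://node.example:18081".to_string()).await.unwrap();
  // Fetch the current chain height, as the verify-chain binary does.
  let height = rpc.get_height().await.unwrap();
  println!("chain height: {height}");
}
```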


@@ -1,67 +0,0 @@
use std::{
io::Write,
env,
path::Path,
fs::{File, remove_file},
};
use dalek_ff_group::EdwardsPoint;
use monero_generators::bulletproofs_generators;
fn serialize(generators_string: &mut String, points: &[EdwardsPoint]) {
for generator in points {
generators_string.extend(
format!(
"
dalek_ff_group::EdwardsPoint(
curve25519_dalek::edwards::CompressedEdwardsY({:?}).decompress().unwrap()
),
",
generator.compress().to_bytes()
)
.chars(),
);
}
}
fn generators(prefix: &'static str, path: &str) {
let generators = bulletproofs_generators(prefix.as_bytes());
#[allow(non_snake_case)]
let mut G_str = String::new();
serialize(&mut G_str, &generators.G);
#[allow(non_snake_case)]
let mut H_str = String::new();
serialize(&mut H_str, &generators.H);
let path = Path::new(&env::var("OUT_DIR").unwrap()).join(path);
let _ = remove_file(&path);
File::create(&path)
.unwrap()
.write_all(
format!(
"
pub(crate) static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: vec![
{G_str}
],
H: vec![
{H_str}
],
}})
}}
",
)
.as_bytes(),
)
.unwrap();
}
fn main() {
println!("cargo:rerun-if-changed=build.rs");
generators("bulletproof", "generators.rs");
generators("bulletproof_plus", "generators_plus.rs");
}


@@ -1,7 +0,0 @@
# Monero Generators
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no-std when the `std` feature is disabled.
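
A hedged usage sketch of the two entry points mentioned above (the names are taken from this crate's `lib.rs`, shown later in this diff; the `"bulletproof"` tag matches the one used by monero-serai's build script):

```rust
use monero_generators::{H, bulletproofs_generators};

fn main() {
  // Monero's alternate generator H, used for amounts in Pedersen commitments.
  let _h = H();
  // Derive the Bulletproofs generator vectors from a domain-separation tag.
  let gens = bulletproofs_generators(b"bulletproof");
  assert_eq!(gens.G.len(), gens.H.len());
  println!("{} generators per vector", gens.G.len());
}
```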


@@ -1,79 +0,0 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::{sync::OnceLock, vec::Vec};
use sha3::{Digest, Keccak256};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint};
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
use varint::write_varint;
mod hash_to_point;
pub use hash_to_point::{hash_to_point, decompress_point};
#[cfg(test)]
mod tests;
fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
decompress_point(hash(&EdwardsPoint::generator().to_bytes())).unwrap().mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 1 ..= 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
const MAX_M: usize = 16;
const N: usize = 64;
const MAX_MN: usize = MAX_M * N;
/// Container struct for Bulletproofs(+) generators.
#[allow(non_snake_case)]
pub struct Generators {
pub G: Vec<EdwardsPoint>,
pub H: Vec<EdwardsPoint>,
}
/// Generate generators as needed for Bulletproofs(+), as Monero does.
pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
let mut res = Generators { G: Vec::with_capacity(MAX_MN), H: Vec::with_capacity(MAX_MN) };
for i in 0 .. MAX_MN {
let i = 2 * i;
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();
write_varint(&i.try_into().unwrap(), &mut even).unwrap();
write_varint(&(i + 1).try_into().unwrap(), &mut odd).unwrap();
res.H.push(EdwardsPoint(hash_to_point(hash(&even))));
res.G.push(EdwardsPoint(hash_to_point(hash(&odd))));
}
res
}


@@ -1 +0,0 @@
mod hash_to_point;


@@ -1,16 +0,0 @@
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
w.write_all(&[b])?;
varint != 0
} {}
Ok(())
}
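
As a worked example of the encoding (a sketch; `write_varint` is crate-internal, so this would live in the crate's own tests): 300 is `0b1_0010_1100`, so the low seven bits are emitted with the continuation bit set, followed by the remaining bits.

```rust
#[test]
fn write_varint_example() {
  // 300: low seven bits 0b010_1100 plus the continuation bit -> 0xAC,
  // then the remaining bits 0b10 -> 0x02.
  let mut buf = Vec::new();
  write_varint(&300, &mut buf).unwrap();
  assert_eq!(buf, [0xAC, 0x02]);
}
```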


@@ -1,321 +0,0 @@
#[cfg(feature = "binaries")]
mod binaries {
pub(crate) use std::sync::Arc;
pub(crate) use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
pub(crate) use multiexp::BatchVerifier;
pub(crate) use serde::Deserialize;
pub(crate) use serde_json::json;
pub(crate) use monero_serai::{
Commitment,
ringct::RctPrunable,
transaction::{Input, Transaction},
block::Block,
rpc::{RpcError, Rpc, HttpRpc},
};
pub(crate) use monero_generators::decompress_point;
pub(crate) use tokio::task::JoinHandle;
pub(crate) async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = loop {
match rpc.get_block_hash(block_i).await {
Ok(hash) => break hash,
Err(RpcError::ConnectionError(e)) => {
println!("get_block_hash ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i}'s hash: {e:?}"),
}
};
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = loop {
match rpc.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await {
Ok(res) => break res,
Err(RpcError::ConnectionError(e)) => {
println!("get_block ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i} via block.hash(): {e:?}"),
}
};
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice())
.unwrap_or_else(|e| panic!("couldn't deserialize block {block_i}: {e}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = loop {
match rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
{
Ok(txs) => break txs,
Err(RpcError::ConnectionError(e)) => {
println!("get_transactions ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't call get_transactions: {e:?}"),
}
};
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
let mut batch = BatchVerifier::new(block.txs.len());
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
if matches!(tx.rct_signatures.prunable, RctPrunable::Null) {
assert_eq!(tx.prefix.version, 1);
assert!(!tx.signatures.is_empty());
continue;
}
let sig_hash = tx.signature_hash();
// Verify all proofs we support proving for
// This is due to having debug_asserts calling verify within their proving, and CLSAG
// multisig explicitly calling verify as part of its signing process
// Accordingly, making sure our signature_hash algorithm is correct is great, and further
// making sure the verification functions are valid is appreciated
match tx.rct_signatures.prunable {
RctPrunable::Null |
RctPrunable::AggregateMlsagBorromean { .. } |
RctPrunable::MlsagBorromean { .. } => {}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
for (i, clsag) in clsags.into_iter().enumerate() {
let (amount, key_offsets, image) = match &tx.prefix.inputs[i] {
Input::Gen(_) => panic!("Input::Gen"),
Input::ToKey { amount, key_offsets, key_image } => (amount, key_offsets, key_image),
};
let mut running_sum = 0;
let mut actual_indexes = vec![];
for offset in key_offsets {
running_sum += offset;
actual_indexes.push(running_sum);
}
async fn get_outs(
rpc: &Rpc<HttpRpc>,
amount: u64,
indexes: &[u64],
) -> Vec<[EdwardsPoint; 2]> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = loop {
match rpc
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": amount,
"index": o
})).collect::<Vec<_>>()
})),
)
.await
{
Ok(outs) => break outs,
Err(RpcError::ConnectionError(e)) => {
println!("get_outs ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't connect to RPC to get outs: {e:?}"),
}
};
let rpc_point = |point: &str| {
decompress_point(
hex::decode(point)
.expect("invalid hex for ring member")
.try_into()
.expect("invalid point len for ring member"),
)
.expect("invalid point for ring member")
};
outs
.outs
.iter()
.map(|out| {
let mask = rpc_point(&out.mask);
if amount != 0 {
assert_eq!(mask, Commitment::new(Scalar::from(1u8), amount).calculate());
}
[rpc_point(&out.key), mask]
})
.collect()
}
clsag
.verify(
&get_outs(&rpc, amount.unwrap_or(0), &actual_indexes).await,
image,
&pseudo_outs[i],
&sig_hash,
)
.unwrap();
}
}
}
}
assert!(batch.verify_vartime());
}
println!("Deserialized, hashed, and reserialized {block_i} with {txs_len} TXs");
}
}
#[cfg(feature = "binaries")]
#[tokio::main]
async fn main() {
use binaries::*;
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| async move {
HttpRpc::new(url.clone())
.await
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone()).await;
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone()).await));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}
#[cfg(not(feature = "binaries"))]
fn main() {
panic!("To run binaries, please build with `--features binaries`.");
}


@@ -1,130 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
pub major_version: u8,
pub minor_version: u8,
pub timestamp: u64,
pub previous: [u8; 32],
pub nonce: u32,
}
impl BlockHeader {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.major_version, w)?;
write_varint(&self.minor_version, w)?;
write_varint(&self.timestamp, w)?;
w.write_all(&self.previous)?;
w.write_all(&self.nonce.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<BlockHeader> {
Ok(BlockHeader {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
previous: read_bytes(r)?,
nonce: read_bytes(r).map(u32::from_le_bytes)?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Block {
pub header: BlockHeader,
pub miner_tx: Transaction,
pub txs: Vec<[u8; 32]>,
}
impl Block {
pub fn number(&self) -> Option<u64> {
match self.miner_tx.prefix.inputs.first() {
Some(Input::Gen(number)) => Some(*number),
_ => None,
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
write_varint(&self.txs.len(), w)?;
for tx in &self.txs {
w.write_all(tx)?;
}
Ok(())
}
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
/// Serialize the block as required for the proof of work hash.
///
/// This is distinct from the serialization required for the block hash. To get the block hash,
/// use the [`Block::hash`] function.
pub fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
blob
}
pub fn hash(&self) -> [u8; 32] {
let mut hashable = self.serialize_hashable();
// Monero prepends a VarInt of the block hashing blob's length before getting the block hash,
// but doesn't do this when getting the proof of work hash :)
let mut hashing_blob = Vec::with_capacity(8 + hashable.len());
write_varint(&u64::try_from(hashable.len()).unwrap(), &mut hashing_blob).unwrap();
hashing_blob.append(&mut hashable);
let hash = hash(&hashing_blob);
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Block> {
let header = BlockHeader::read(r)?;
let miner_tx = Transaction::read(r)?;
if !matches!(miner_tx.prefix.inputs.as_slice(), &[Input::Gen(_)]) {
Err(io::Error::other("Miner transaction has incorrect input type."))?;
}
Ok(Block {
header,
miner_tx,
txs: (0_usize .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
})
}
}


@@ -1,229 +0,0 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
use std_shims::{sync::OnceLock, io};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub use monero_generators::{H, decompress_point};
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// UnreducedScalar struct with functionality for recovering incorrectly reduced scalars.
mod unreduced_scalar;
/// Ring Signature structs and functionality.
pub mod ring_signatures;
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
/// Block structs.
pub mod block;
/// Monero daemon RPC interface.
pub mod rpc;
/// Wallet functionality, enabling scanning and sending transactions.
pub mod wallet;
#[cfg(test)]
mod tests;
pub const DEFAULT_LOCK_WINDOW: usize = 10;
pub const COINBASE_LOCK_WINDOW: usize = 60;
pub const BLOCK_TIME: usize = 120;
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
/// Monero protocol version.
///
/// v15 is omitted as v15 was simply v14 and v16 being active at the same time, with regards to the
/// transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
v14,
v16,
Custom {
ring_len: usize,
bp_plus: bool,
optimal_rct_type: RctType,
view_tags: bool,
v16_fee: bool,
},
}
impl Protocol {
/// Amount of ring members under this protocol version.
pub fn ring_len(&self) -> usize {
match self {
Protocol::v14 => 11,
Protocol::v16 => 16,
Protocol::Custom { ring_len, .. } => *ring_len,
}
}
/// Whether or not the specified version uses Bulletproofs or Bulletproofs+.
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub fn bp_plus(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { bp_plus, .. } => *bp_plus,
}
}
// TODO: Make this an Option when we support pre-RCT protocols
pub fn optimal_rct_type(&self) -> RctType {
match self {
Protocol::v14 => RctType::Clsag,
Protocol::v16 => RctType::BulletproofsPlus,
Protocol::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
/// Whether or not the specified version uses view tags.
pub fn view_tags(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { view_tags, .. } => *view_tags,
}
}
/// Whether or not the specified version uses the fee algorithm from Monero
/// hard fork version 16 (released in v18 binaries).
pub fn v16_fee(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { v16_fee, .. } => *v16_fee,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Protocol::v14 => w.write_all(&[0, 14]),
Protocol::v16 => w.write_all(&[0, 16]),
Protocol::Custom { ring_len, bp_plus, optimal_rct_type, view_tags, v16_fee } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])?;
w.write_all(&[u8::from(*view_tags)])?;
w.write_all(&[u8::from(*v16_fee)])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Protocol> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Protocol::v14,
16 => Protocol::v16,
_ => Err(io::Error::other("unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Protocol::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::other("invalid RctType serialization"))?,
view_tags: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
v16_fee: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
},
_ => Err(io::Error::other("unrecognized custom protocol serialization"))?,
},
_ => Err(io::Error::other("unrecognized protocol serialization"))?,
})
}
}
/// Transparent structure representing a Pedersen commitment's contents.
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct Commitment {
pub mask: Scalar,
pub amount: u64,
}
impl core::fmt::Debug for Commitment {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt.debug_struct("Commitment").field("amount", &self.amount).finish_non_exhaustive()
}
}
impl Commitment {
/// A commitment to zero, defined with a mask of 1 (as to not be the identity).
pub fn zero() -> Commitment {
Commitment { mask: Scalar::ONE, amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Commitment {
Commitment { mask, amount }
}
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
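A brief sketch tying the items in this file together (hedged; `Commitment` and `random_scalar` are the public items defined above):

```rust
use rand_core::OsRng;
use monero_serai::{Commitment, random_scalar};

fn main() {
  // Commit to an amount of 5 with a uniformly random mask; the point form
  // is mask * G + amount * H, hiding the amount while binding to it.
  let commitment = Commitment::new(random_scalar(&mut OsRng), 5);
  let _point = commitment.calculate();
}
```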
/// Support generating a random scalar using a modern rand, as dalek's is notoriously dated.
pub fn random_scalar<R: RngCore + CryptoRng>(rng: &mut R) -> Scalar {
let mut r = [0; 64];
rng.fill_bytes(&mut r);
Scalar::from_bytes_mod_order_wide(&r)
}
pub(crate) fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
/// Hash the provided data to a scalar via keccak256(data) % l.
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
let scalar = Scalar::from_bytes_mod_order(hash(data));
// Monero will explicitly error in this case
// This library acknowledges the practical impossibility of it occurring and doesn't bother to
// code in logic to handle it. That said, if it ever occurs, something must happen in order to
// not generate/verify a proof we believe to be valid when it isn't
assert!(scalar != Scalar::ZERO, "ZERO HASH: {data:?}");
scalar
}


@@ -1,72 +0,0 @@
use std_shims::{
io::{self, *},
vec::Vec,
};
use zeroize::Zeroize;
use curve25519_dalek::{EdwardsPoint, Scalar};
use monero_generators::hash_to_point;
use crate::{serialize::*, hash_to_scalar};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Signature {
c: Scalar,
r: Scalar,
}
impl Signature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_scalar(&self.c, w)?;
write_scalar(&self.r, w)?;
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Signature> {
Ok(Signature { c: read_scalar(r)?, r: read_scalar(r)? })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingSignature {
sigs: Vec<Signature>,
}
impl RingSignature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for sig in &self.sigs {
sig.write(w)?;
}
Ok(())
}
pub fn read<R: Read>(members: usize, r: &mut R) -> io::Result<RingSignature> {
Ok(RingSignature { sigs: read_raw_vec(Signature::read, members, r)? })
}
pub fn verify(&self, msg: &[u8; 32], ring: &[EdwardsPoint], key_image: &EdwardsPoint) -> bool {
if ring.len() != self.sigs.len() {
return false;
}
let mut buf = Vec::with_capacity(32 + (32 * 2 * ring.len()));
buf.extend_from_slice(msg);
let mut sum = Scalar::ZERO;
for (ring_member, sig) in ring.iter().zip(&self.sigs) {
#[allow(non_snake_case)]
let Li = EdwardsPoint::vartime_double_scalar_mul_basepoint(&sig.c, ring_member, &sig.r);
buf.extend_from_slice(Li.compress().as_bytes());
#[allow(non_snake_case)]
let Ri = (sig.r * hash_to_point(ring_member.compress().to_bytes())) + (sig.c * key_image);
buf.extend_from_slice(Ri.compress().as_bytes());
sum += sig.c;
}
sum == hash_to_scalar(&buf)
}
}
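Restated as math (a summary of the loop above, not text from the original file): with `Hp` the hash-to-point used for key images and `I` the key image, the verifier computes `L_i = c_i*P_i + r_i*G` and `R_i = r_i*Hp(P_i) + c_i*I` for each ring member `P_i`, then accepts iff `sum(c_i) == Hs(msg || L_1 || R_1 || ... || L_n || R_n)`.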


@@ -1,151 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
use curve25519_dalek::edwards::EdwardsPoint as DalekPoint;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
// Components common between variants
pub(crate) const MAX_M: usize = 16;
pub(crate) const LOG_N: usize = 6; // 1 << 6 == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
generators: &Generators,
a: &ScalarVector,
b: &ScalarVector,
) -> EdwardsPoint {
debug_assert_eq!(a.len(), b.len());
(a * &generators.G[.. a.len()]) + (b * &generators.H[.. b.len()])
}
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
}
pub(crate) fn MN(outputs: usize) -> (usize, usize, usize) {
let mut logM = 0;
let mut M;
while {
M = 1 << logM;
(M <= MAX_M) && (M < outputs)
} {
logM += 1;
}
(logM + LOG_N, M, M * N)
}
pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, ScalarVector) {
let (_, M, MN) = MN(commitments.len());
let sv = commitments.iter().map(|c| Scalar::from(c.amount)).collect::<Vec<_>>();
let mut aL = ScalarVector::new(MN);
let mut aR = ScalarVector::new(MN);
for j in 0 .. M {
for i in (0 .. N).rev() {
let bit =
if j < sv.len() { Choice::from((sv[j][i / 8] >> (i % 8)) & 1) } else { Choice::from(0) };
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
(aL, aR)
}
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
rng: &mut R,
generators: &Generators,
aL: &ScalarVector,
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
a: &ScalarVector,
G_i: &[EdwardsPoint],
b: &ScalarVector,
H_i: &[EdwardsPoint],
cL: Scalar,
U: EdwardsPoint,
) -> Vec<(Scalar, EdwardsPoint)> {
let mut res = a
.0
.iter()
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * w[j];
products[slots - 1] = products[slots / 2] * winv[j];
slots = slots.saturating_sub(2);
}
}
// Sanity check, as it'd be critical if the above failed to fully populate the table
for w in &products {
debug_assert!(!bool::from(w.is_zero()));
}
products
}
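For reference, the table built by `challenge_products` satisfies, for each index $i$ with bits $b_0 b_1 \cdots b_{k-1}$ (most significant first, where $k$ is the number of folding rounds):

$$\text{products}[i] = \prod_{j=0}^{k-1} w_j^{\epsilon_j}, \qquad \epsilon_j = +1 \text{ if } b_j = 1, \text{ else } -1,$$

letting the verifier collapse all inner-product folds into a single multiexponentiation.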


@@ -1,229 +0,0 @@
#![allow(non_snake_case)]
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::edwards::EdwardsPoint;
use multiexp::BatchVerifier;
use crate::{Commitment, wallet::TransactionError, serialize::*};
pub(crate) mod scalar_vector;
pub(crate) mod core;
use self::core::LOG_N;
pub(crate) mod original;
use self::original::OriginalStruct;
pub(crate) mod plus;
use self::plus::*;
pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
/// Bulletproofs enum, supporting the original and plus formulations.
#[allow(clippy::large_enum_variant)]
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Bulletproofs {
Original(OriginalStruct),
Plus(AggregateRangeProof),
}
impl Bulletproofs {
fn bp_fields(plus: bool) -> usize {
if plus {
6
} else {
9
}
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L106-L124
pub(crate) fn calculate_bp_clawback(plus: bool, n_outputs: usize) -> (usize, usize) {
#[allow(non_snake_case)]
let mut LR_len = 0;
let mut n_padded_outputs = 1;
while n_padded_outputs < n_outputs {
LR_len += 1;
n_padded_outputs = 1 << LR_len;
}
LR_len += LOG_N;
let mut bp_clawback = 0;
if n_padded_outputs > 2 {
let fields = Bulletproofs::bp_fields(plus);
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
bp_clawback = ((base * n_padded_outputs) - size) * 4 / 5;
}
(bp_clawback, LR_len)
}
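  // As a worked example of the clawback above: 3 original-Bulletproof outputs pad to 4, so
  // LR_len = 2 + LOG_N = 8, base = ((9 + 14) * 32) / 2 = 368, size = (9 + 16) * 32 = 800, and
  // bp_clawback = ((368 * 4) - 800) * 4 / 5 = 537:
  //   assert_eq!(Bulletproofs::calculate_bp_clawback(false, 3), (537, 8));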
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
#[allow(non_snake_case)]
let (bp_clawback, LR_len) = Bulletproofs::calculate_bp_clawback(plus, outputs);
32 * (Bulletproofs::bp_fields(plus) + (2 * LR_len)) + 2 + bp_clawback
}
/// Prove the listed commitments are within [0 .. 2^64).
pub fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
outputs: &[Commitment],
plus: bool,
) -> Result<Bulletproofs, TransactionError> {
if outputs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if outputs.len() > MAX_OUTPUTS {
Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Bulletproofs::Original(OriginalStruct::prove(rng, outputs))
} else {
use dalek_ff_group::EdwardsPoint as DfgPoint;
Bulletproofs::Plus(
AggregateRangeStatement::new(outputs.iter().map(|com| DfgPoint(com.calculate())).collect())
.unwrap()
.prove(rng, &Zeroizing::new(AggregateRangeWitness::new(outputs).unwrap()))
.unwrap(),
)
})
}
/// Verify the given Bulletproofs.
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Bulletproofs::Original(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => {
let mut verifier = BatchVerifier::new(1);
// If this commitment is torsioned (which is allowed), this won't be a well-formed
// dfg::EdwardsPoint (expected to be of prime-order)
// The actual BP+ impl will perform a torsion clear though, making this safe
// TODO: Have AggregateRangeStatement take in dalek EdwardsPoint for clarity on this
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
if !statement.verify(rng, &mut verifier, (), bp.clone()) {
return false;
}
verifier.verify_vartime()
}
}
}
/// Accumulate the verification for the given Bulletproofs into the specified BatchVerifier.
/// Returns false if the Bulletproofs aren't sane, without mutating the BatchVerifier.
/// Returns true if the Bulletproofs are sane, regardless of their validity.
#[must_use]
pub fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, dalek_ff_group::EdwardsPoint>,
id: ID,
commitments: &[EdwardsPoint],
) -> bool {
match self {
Bulletproofs::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => {
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
statement.verify(rng, verifier, id, bp.clone())
}
}
}
fn write_core<W: Write, F: Fn(&[EdwardsPoint], &mut W) -> io::Result<()>>(
&self,
w: &mut W,
specific_write_vec: F,
) -> io::Result<()> {
match self {
Bulletproofs::Original(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.S, w)?;
write_point(&bp.T1, w)?;
write_point(&bp.T2, w)?;
write_scalar(&bp.taux, w)?;
write_scalar(&bp.mu, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)?;
write_scalar(&bp.a, w)?;
write_scalar(&bp.b, w)?;
write_scalar(&bp.t, w)
}
Bulletproofs::Plus(bp) => {
write_point(&bp.A.0, w)?;
write_point(&bp.wip.A.0, w)?;
write_point(&bp.wip.B.0, w)?;
write_scalar(&bp.wip.r_answer.0, w)?;
write_scalar(&bp.wip.s_answer.0, w)?;
write_scalar(&bp.wip.delta_answer.0, w)?;
specific_write_vec(&bp.wip.L.iter().copied().map(|L| L.0).collect::<Vec<_>>(), w)?;
specific_write_vec(&bp.wip.R.iter().copied().map(|R| R.0).collect::<Vec<_>>(), w)
}
}
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_raw_vec(write_point, points, w))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_vec(write_point, points, w))
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
/// Read Bulletproofs.
pub fn read<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Original(OriginalStruct {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
T2: read_point(r)?,
taux: read_scalar(r)?,
mu: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
a: read_scalar(r)?,
b: read_scalar(r)?,
t: read_scalar(r)?,
}))
}
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
use dalek_ff_group::{Scalar as DfgScalar, EdwardsPoint as DfgPoint};
Ok(Bulletproofs::Plus(AggregateRangeProof {
A: DfgPoint(read_point(r)?),
wip: WipProof {
A: DfgPoint(read_point(r)?),
B: DfgPoint(read_point(r)?),
r_answer: DfgScalar(read_scalar(r)?),
s_answer: DfgScalar(read_scalar(r)?),
delta_answer: DfgScalar(read_scalar(r)?),
L: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
R: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
},
}))
}
}
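A minimal round trip with the API above, as a sketch: it assumes in-crate access to `Commitment` and `random_scalar`, and `rand_core` built with its `getrandom` feature for `OsRng`.

use rand_core::OsRng;

fn bulletproof_round_trip() {
  // Commit to an amount with a random mask
  let commitments = vec![Commitment::new(random_scalar(&mut OsRng), 1337)];
  // Prove with the original (non-plus) formulation
  let bp = Bulletproofs::prove(&mut OsRng, &commitments, false).unwrap();
  // Verification takes the commitments' points themselves
  let points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
  assert!(bp.verify(&mut OsRng, &points));
}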


@@ -1,309 +0,0 @@
#![allow(non_snake_case)]
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::{ff::Field, Group};
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::BatchVerifier;
use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| inner_product(&ScalarVector(vec![Scalar::ONE; N]), TWO_N()))
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OriginalStruct {
pub(crate) A: DalekPoint,
pub(crate) S: DalekPoint,
pub(crate) T1: DalekPoint,
pub(crate) T2: DalekPoint,
pub(crate) taux: DalekScalar,
pub(crate) mu: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
pub(crate) a: DalekScalar,
pub(crate) b: DalekScalar,
pub(crate) t: DalekScalar,
}
impl OriginalStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> OriginalStruct {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_commitments(commitments_points.clone());
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let l0 = &aL - z;
let l1 = sL;
let mut zero_twos = Vec::with_capacity(MN);
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
let yMN = ScalarVector::powers(y, MN);
let r0 = (&(aR + z) * &yMN) + ScalarVector(zero_twos);
let r1 = yMN * sR;
let (T1, T2, x, mut taux) = {
let t1 = inner_product(&l0, &r1) + inner_product(&l1, &r0);
let t2 = inner_product(&l1, &r1);
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
let taux = (tau2 * (x * x)) + (tau1 * x);
tau1.zeroize();
tau2.zeroize();
(T1, T2, x, taux)
};
let mu = (x * rho) + alpha;
alpha.zeroize();
rho.zeroize();
for (i, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
taux += zpow[i + 2] * gamma;
}
let l = &l0 + &(l1 * x);
let r = &r0 + &(r1 * x);
let t = inner_product(&l, &r);
let x_ip =
hash_cache(&mut cache, &[x.to_bytes(), taux.to_bytes(), mu.to_bytes(), t.to_bytes()]);
let mut a = l;
let mut b = r;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = inner_product(&aL, &bR);
let cR = inner_product(&aR, &bL);
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let L_i = prove_multiexp(&LR_statements(&aL, G_R, &bR, H_L, cL, U));
let R_i = prove_multiexp(&LR_statements(&aR, G_L, &bL, H_R, cR, U));
L.push(L_i);
R.push(R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
a = (aL * w) + (aR * winv);
b = (bL * winv) + (bR * w);
if a.len() != 1 {
G_proof = hadamard_fold(G_L, G_R, winv, w);
H_proof = hadamard_fold(H_L, H_R, w, winv);
}
}
let res = OriginalStruct {
A: *A,
S: *S,
T1: *T1,
T2: *T2,
taux: *taux,
mu: *mu,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
a: *a[0],
b: *b[0],
t: *t,
};
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let x = hash_cache(
&mut cache,
&[z.to_bytes(), self.T1.compress().to_bytes(), self.T2.compress().to_bytes()],
);
let x_ip = hash_cache(
&mut cache,
&[x.to_bytes(), self.taux.to_bytes(), self.mu.to_bytes(), self.t.to_bytes()],
);
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let T1 = normalize(&self.T1);
let T2 = normalize(&self.T2);
let A = normalize(&self.A);
let S = normalize(&self.S);
let commitments = commitments.iter().map(EdwardsPoint::mul_by_cofactor).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(4 + commitments.len());
let zpow = ScalarVector::powers(z, M + 3);
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
for (j, commitment) in commitments.iter().enumerate() {
proof.push((zpow[j + 2], *commitment));
}
proof.push((x, T1));
proof.push((x * x, T2));
verifier.queue(&mut *rng, id, proof);
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
let ypow = ScalarVector::powers(y, MN);
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}
for i in 0 .. logMN {
proof.push((w[i] * w[i], L[i]));
proof.push((winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}
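The first statement `verify_core` queues is the standard Bulletproofs polynomial-commitment check, over the cofactor-cleared points:

$$(t - \delta(y, z))\,H + \tau_x\,G = \sum_{j=0}^{M-1} z^{j+2}\,V_j + x\,T_1 + x^2\,T_2,$$

$$\delta(y, z) = (z - z^2)\,\langle 1, y^{MN} \rangle - \sum_{j=1}^{M} z^{j+2}\,\langle \mathbf{1}^N, \mathbf{2}^N \rangle,$$

where `k` in the code is $\delta(y, z) - z \langle 1, y^{MN} \rangle$. The second queued statement folds A, S, the per-generator terms, and the L/R rounds into one batched multiexponentiation.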


@@ -1,114 +0,0 @@
use core::{
borrow::Borrow,
ops::{Index, IndexMut},
};
use std_shims::vec::Vec;
use zeroize::Zeroize;
use group::ff::Field;
use dalek_ff_group::Scalar;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
impl Index<usize> for ScalarVector {
type Output = Scalar;
fn index(&self, index: usize) -> &Scalar {
&self.0[index]
}
}
impl IndexMut<usize> for ScalarVector {
fn index_mut(&mut self, index: usize) -> &mut Scalar {
&mut self.0[index]
}
}
impl ScalarVector {
pub(crate) fn new(len: usize) -> Self {
ScalarVector(vec![Scalar::ZERO; len])
}
pub(crate) fn add(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in &mut res.0 {
*val += scalar.borrow();
}
res
}
pub(crate) fn sub(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in &mut res.0 {
*val -= scalar.borrow();
}
res
}
pub(crate) fn mul(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in &mut res.0 {
*val *= scalar.borrow();
}
res
}
pub(crate) fn add_vec(&self, vector: &Self) -> Self {
debug_assert_eq!(self.len(), vector.len());
let mut res = self.clone();
for (i, val) in res.0.iter_mut().enumerate() {
*val += vector.0[i];
}
res
}
pub(crate) fn mul_vec(&self, vector: &Self) -> Self {
debug_assert_eq!(self.len(), vector.len());
let mut res = self.clone();
for (i, val) in res.0.iter_mut().enumerate() {
*val *= vector.0[i];
}
res
}
pub(crate) fn inner_product(&self, vector: &Self) -> Scalar {
self.mul_vec(vector).sum()
}
pub(crate) fn powers(x: Scalar, len: usize) -> Self {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::ONE);
res.push(x);
for i in 2 .. len {
res.push(res[i - 1] * x);
}
res.truncate(len);
ScalarVector(res)
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, ScalarVector(r))
}
}
pub(crate) fn weighted_inner_product(
a: &ScalarVector,
b: &ScalarVector,
y: &ScalarVector,
) -> Scalar {
a.inner_product(&b.mul_vec(y))
}


@@ -1,24 +0,0 @@
#![allow(non_snake_case)]
use std_shims::{sync::OnceLock, vec::Vec};
use dalek_ff_group::{Scalar, EdwardsPoint};
use monero_generators::{hash_to_point as raw_hash_to_point};
use crate::{hash, hash_to_scalar as dalek_hash};
// Monero starts BP+ transcripts with the following constant.
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
// Why this uses a hash_to_point is completely unknown.
*TRANSCRIPT_CELL
.get_or_init(|| raw_hash_to_point(hash(b"bulletproof_plus_transcript")).compress().to_bytes())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
pub(crate) fn initial_transcript(commitments: core::slice::Iter<'_, EdwardsPoint>) -> Scalar {
let commitments_hash =
hash_to_scalar(&commitments.flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>());
hash_to_scalar(&[TRANSCRIPT().as_ref(), &commitments_hash.to_bytes()].concat())
}
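In other words, the Bulletproofs+ transcript is initialized as

$$\mathcal{H}_s\big(T \parallel \mathcal{H}_s(V_1 \parallel \cdots \parallel V_n)\big),$$

where $T$ is the fixed constant above and $V_i$ are the compressed commitments.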


@@ -1,114 +0,0 @@
use core::ops::{Add, Sub, Mul, Index};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
macro_rules! math_op {
($Op: ident, $op: ident, $f: expr) => {
#[allow(clippy::redundant_closure_call)]
impl $Op<Scalar> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
ScalarVector(self.0.iter().map(|a| $f((a, &b))).collect())
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<Scalar> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
ScalarVector(self.0.iter().map(|a| $f((a, &b))).collect())
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: ScalarVector) -> ScalarVector {
debug_assert_eq!(self.len(), b.len());
ScalarVector(self.0.iter().zip(b.0.iter()).map($f).collect())
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<&ScalarVector> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: &ScalarVector) -> ScalarVector {
debug_assert_eq!(self.len(), b.len());
ScalarVector(self.0.iter().zip(b.0.iter()).map($f).collect())
}
}
};
}
math_op!(Add, add, |(a, b): (&Scalar, &Scalar)| *a + *b);
math_op!(Sub, sub, |(a, b): (&Scalar, &Scalar)| *a - *b);
math_op!(Mul, mul, |(a, b): (&Scalar, &Scalar)| *a * *b);
impl ScalarVector {
pub(crate) fn new(len: usize) -> ScalarVector {
ScalarVector(vec![Scalar::ZERO; len])
}
pub(crate) fn powers(x: Scalar, len: usize) -> ScalarVector {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::ONE);
for i in 1 .. len {
res.push(res[i - 1] * x);
}
ScalarVector(res)
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(self) -> (ScalarVector, ScalarVector) {
let (l, r) = self.0.split_at(self.0.len() / 2);
(ScalarVector(l.to_vec()), ScalarVector(r.to_vec()))
}
}
impl Index<usize> for ScalarVector {
type Output = Scalar;
fn index(&self, index: usize) -> &Scalar {
&self.0[index]
}
}
pub(crate) fn inner_product(a: &ScalarVector, b: &ScalarVector) -> Scalar {
(a * b).sum()
}
impl Mul<&[EdwardsPoint]> for &ScalarVector {
type Output = EdwardsPoint;
fn mul(self, b: &[EdwardsPoint]) -> EdwardsPoint {
debug_assert_eq!(self.len(), b.len());
multiexp(&self.0.iter().copied().zip(b.iter().copied()).collect::<Vec<_>>())
}
}
pub(crate) fn hadamard_fold(
l: &[EdwardsPoint],
r: &[EdwardsPoint],
a: Scalar,
b: Scalar,
) -> Vec<EdwardsPoint> {
let mut res = Vec::with_capacity(l.len() / 2);
for i in 0 .. l.len() {
res.push(multiexp(&[(a, l[i]), (b, r[i])]));
}
res
}


@@ -1,324 +0,0 @@
#![allow(non_snake_case)]
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use subtle::{ConstantTimeEq, Choice, CtOption};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
traits::{IsIdentity, VartimePrecomputedMultiscalarMul},
edwards::{EdwardsPoint, VartimeEdwardsPrecomputation},
};
use crate::{
INV_EIGHT, Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys,
ringct::hash_to_point, serialize::*,
};
#[cfg(feature = "multisig")]
mod multisig;
#[cfg(feature = "multisig")]
pub use multisig::{ClsagDetails, ClsagAddendum, ClsagMultisig};
#[cfg(feature = "multisig")]
pub(crate) use multisig::add_key_image_share;
/// Errors returned when CLSAG signing fails.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum ClsagError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[cfg_attr(feature = "std", error("invalid ring member (member {0}, ring size {1})"))]
InvalidRingMember(u8, u8),
#[cfg_attr(feature = "std", error("invalid commitment"))]
InvalidCommitment,
#[cfg_attr(feature = "std", error("invalid key image"))]
InvalidImage,
#[cfg_attr(feature = "std", error("invalid D"))]
InvalidD,
#[cfg_attr(feature = "std", error("invalid s"))]
InvalidS,
#[cfg_attr(feature = "std", error("invalid c1"))]
InvalidC1,
}
/// Input being signed for.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagInput {
// The actual commitment for the true spend
pub(crate) commitment: Commitment,
// True spend index, offsets, and ring
pub(crate) decoys: Decoys,
}
impl ClsagInput {
pub fn new(commitment: Commitment, decoys: Decoys) -> Result<ClsagInput, ClsagError> {
let n = decoys.len();
if n > u8::MAX.into() {
Err(ClsagError::InternalError("max ring size in this library is u8 max"))?;
}
let n = u8::try_from(n).unwrap();
if decoys.i >= n {
Err(ClsagError::InvalidRingMember(decoys.i, n))?;
}
// Validate the commitment matches
if decoys.ring[usize::from(decoys.i)][1] != commitment.calculate() {
Err(ClsagError::InvalidCommitment)?;
}
Ok(ClsagInput { commitment, decoys })
}
}
#[allow(clippy::large_enum_variant)]
enum Mode {
Sign(usize, EdwardsPoint, EdwardsPoint),
Verify(Scalar),
}
// Core of the CLSAG algorithm, applicable to both sign and verify with minimal differences
// Said differences are covered via the above Mode
fn core(
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
D: &EdwardsPoint,
s: &[Scalar],
A_c1: &Mode,
) -> ((EdwardsPoint, Scalar, Scalar), Scalar) {
let n = ring.len();
let images_precomp = VartimeEdwardsPrecomputation::new([I, D]);
let D = D * INV_EIGHT();
// Generate the transcript
// Instead of generating multiple, a single transcript is created and then edited as needed
const PREFIX: &[u8] = b"CLSAG_";
#[rustfmt::skip]
const AGG_0: &[u8] = b"agg_0";
#[rustfmt::skip]
const ROUND: &[u8] = b"round";
const PREFIX_AGG_0_LEN: usize = PREFIX.len() + AGG_0.len();
let mut to_hash = Vec::with_capacity(((2 * n) + 5) * 32);
to_hash.extend(PREFIX);
to_hash.extend(AGG_0);
to_hash.extend([0; 32 - PREFIX_AGG_0_LEN]);
let mut P = Vec::with_capacity(n);
for member in ring {
P.push(member[0]);
to_hash.extend(member[0].compress().to_bytes());
}
let mut C = Vec::with_capacity(n);
for member in ring {
C.push(member[1] - pseudo_out);
to_hash.extend(member[1].compress().to_bytes());
}
to_hash.extend(I.compress().to_bytes());
to_hash.extend(D.compress().to_bytes());
to_hash.extend(pseudo_out.compress().to_bytes());
// mu_P with agg_0
let mu_P = hash_to_scalar(&to_hash);
// mu_C with agg_1
to_hash[PREFIX_AGG_0_LEN - 1] = b'1';
let mu_C = hash_to_scalar(&to_hash);
// Truncate it for the round transcript, altering the DST as needed
to_hash.truncate(((2 * n) + 1) * 32);
for i in 0 .. ROUND.len() {
to_hash[PREFIX.len() + i] = ROUND[i];
}
// Unfortunately, it's I D pseudo_out instead of pseudo_out I D, meaning this needs to be
// truncated just to add it back
to_hash.extend(pseudo_out.compress().to_bytes());
to_hash.extend(msg);
// Configure the loop based on if we're signing or verifying
let start;
let end;
let mut c;
match A_c1 {
Mode::Sign(r, A, AH) => {
start = r + 1;
end = r + n;
to_hash.extend(A.compress().to_bytes());
to_hash.extend(AH.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
Mode::Verify(c1) => {
start = 0;
end = n;
c = *c1;
}
}
// Perform the core loop
let mut c1 = CtOption::new(Scalar::ZERO, Choice::from(0));
for i in (start .. end).map(|i| i % n) {
    // This will only execute once, so it shouldn't need to be constant time. Making it constant
    // time, however, removes the risk of branch prediction creating timing differences depending
    // on the ring index
c1 = c1.or_else(|| CtOption::new(c, i.ct_eq(&0)));
let c_p = mu_P * c;
let c_c = mu_C * c;
let L = (&s[i] * ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(&P[i]);
// Shouldn't be an issue as all of the variables in this vartime statement are public
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul([c_p, c_c]);
to_hash.truncate(((2 * n) + 3) * 32);
to_hash.extend(L.compress().to_bytes());
to_hash.extend(R.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
  // The first tuple is needed to continue signing; the latter is the c to be tested/worked with
((D, c * mu_P, c * mu_C), c1.unwrap_or(c))
}
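// For reference, each iteration of the loop above computes, with aggregation coefficients
// mu_P and mu_C:
//   L_i = s_i G + (c_i mu_P) P_i + (c_i mu_C) C_i
//   R_i = s_i Hp(P_i) + (c_i mu_P) I + (c_i mu_C) D
//   c_{i+1} = Hs(round transcript || L_i || R_i)
// where C_i is the ring member's commitment minus the pseudo-out; verification succeeds when
// the chain closes back to c1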
/// CLSAG signature, as used in Monero.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Clsag {
pub D: EdwardsPoint,
pub s: Vec<Scalar>,
pub c1: Scalar,
}
impl Clsag {
  // sign_core is the extension of core as needed for signing, shared between the single-signer
  // and multisig paths, hence why it's still "core"
pub(crate) fn sign_core<R: RngCore + CryptoRng>(
rng: &mut R,
I: &EdwardsPoint,
input: &ClsagInput,
mask: Scalar,
msg: &[u8; 32],
A: EdwardsPoint,
AH: EdwardsPoint,
) -> (Clsag, EdwardsPoint, Scalar, Scalar) {
let r: usize = input.decoys.i.into();
let pseudo_out = Commitment::new(mask, input.commitment.amount).calculate();
let z = input.commitment.mask - mask;
let H = hash_to_point(&input.decoys.ring[r][0]);
let D = H * z;
let mut s = Vec::with_capacity(input.decoys.ring.len());
for _ in 0 .. input.decoys.ring.len() {
s.push(random_scalar(rng));
}
let ((D, p, c), c1) =
core(&input.decoys.ring, I, &pseudo_out, msg, &D, &s, &Mode::Sign(r, A, AH));
(Clsag { D, s, c1 }, pseudo_out, p, c * z)
}
/// Generate CLSAG signatures for the given inputs.
/// inputs is of the form (private key, key image, input).
  /// sum_outputs is the sum of the outputs' commitment masks.
pub fn sign<R: RngCore + CryptoRng>(
rng: &mut R,
mut inputs: Vec<(Zeroizing<Scalar>, EdwardsPoint, ClsagInput)>,
sum_outputs: Scalar,
msg: [u8; 32],
) -> Vec<(Clsag, EdwardsPoint)> {
let mut res = Vec::with_capacity(inputs.len());
let mut sum_pseudo_outs = Scalar::ZERO;
for i in 0 .. inputs.len() {
let mut mask = random_scalar(rng);
if i == (inputs.len() - 1) {
mask = sum_outputs - sum_pseudo_outs;
} else {
sum_pseudo_outs += mask;
}
let mut nonce = Zeroizing::new(random_scalar(rng));
let (mut clsag, pseudo_out, p, c) = Clsag::sign_core(
rng,
&inputs[i].1,
&inputs[i].2,
mask,
&msg,
nonce.deref() * ED25519_BASEPOINT_TABLE,
nonce.deref() *
hash_to_point(&inputs[i].2.decoys.ring[usize::from(inputs[i].2.decoys.i)][0]),
);
clsag.s[usize::from(inputs[i].2.decoys.i)] =
(-((p * inputs[i].0.deref()) + c)) + nonce.deref();
inputs[i].0.zeroize();
nonce.zeroize();
debug_assert!(clsag
.verify(&inputs[i].2.decoys.ring, &inputs[i].1, &pseudo_out, &msg)
.is_ok());
res.push((clsag, pseudo_out));
}
res
}
/// Verify the CLSAG signature against the given Transaction data.
pub fn verify(
&self,
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
) -> Result<(), ClsagError> {
// Preliminary checks. s, c1, and points must also be encoded canonically, which isn't checked
// here
if ring.is_empty() {
Err(ClsagError::InvalidRing)?;
}
if ring.len() != self.s.len() {
Err(ClsagError::InvalidS)?;
}
if I.is_identity() {
Err(ClsagError::InvalidImage)?;
}
let D = self.D.mul_by_cofactor();
if D.is_identity() {
Err(ClsagError::InvalidD)?;
}
let (_, c1) = core(ring, I, pseudo_out, msg, &D, &self.s, &Mode::Verify(self.c1));
if c1 != self.c1 {
Err(ClsagError::InvalidC1)?;
}
Ok(())
}
pub(crate) fn fee_weight(ring_len: usize) -> usize {
(ring_len * 32) + 32 + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_raw_vec(write_scalar, &self.s, w)?;
w.write_all(&self.c1.to_bytes())?;
write_point(&self.D, w)
}
pub fn read<R: Read>(decoys: usize, r: &mut R) -> io::Result<Clsag> {
Ok(Clsag { s: read_raw_vec(read_scalar, decoys, r)?, c1: read_scalar(r)?, D: read_point(r)? })
}
}


@@ -1,304 +0,0 @@
use core::{ops::Deref, fmt::Debug};
use std_shims::io::{self, Read, Write};
use std::sync::{Arc, RwLock};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use group::{ff::Field, Group, GroupEncoding};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group as dfg;
use dleq::DLEqProof;
use frost::{
dkg::lagrange,
curve::Ed25519,
Participant, FrostError, ThresholdKeys, ThresholdView,
algorithm::{WriteAddendum, Algorithm},
};
use crate::ringct::{
hash_to_point,
clsag::{ClsagInput, Clsag},
};
fn dleq_transcript() -> RecommendedTranscript {
RecommendedTranscript::new(b"monero_key_image_dleq")
}
impl ClsagInput {
fn transcript<T: Transcript>(&self, transcript: &mut T) {
// Doesn't domain separate as this is considered part of the larger CLSAG proof
// Ring index
transcript.append_message(b"real_spend", [self.decoys.i]);
// Ring
for (i, pair) in self.decoys.ring.iter().enumerate() {
      // Doesn't include global output indexes as CLSAG doesn't care and won't be affected by them
      // They're just an unreliable reference to this data, which will be included in the message
      // if in use
transcript.append_message(b"member", [u8::try_from(i).expect("ring size exceeded 255")]);
transcript.append_message(b"key", pair[0].compress().to_bytes());
transcript.append_message(b"commitment", pair[1].compress().to_bytes())
}
// Doesn't include the commitment's parts as the above ring + index includes the commitment
// The only potential malleability would be if the G/H relationship is known breaking the
// discrete log problem, which breaks everything already
}
}
/// CLSAG input and the mask to use for it.
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagDetails {
input: ClsagInput,
mask: Scalar,
}
impl ClsagDetails {
pub fn new(input: ClsagInput, mask: Scalar) -> ClsagDetails {
ClsagDetails { input, mask }
}
}
/// Addendum produced during the FROST signing process with relevant data.
#[derive(Clone, PartialEq, Eq, Zeroize, Debug)]
pub struct ClsagAddendum {
pub(crate) key_image: dfg::EdwardsPoint,
dleq: DLEqProof<dfg::EdwardsPoint>,
}
impl WriteAddendum for ClsagAddendum {
fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(self.key_image.compress().to_bytes().as_ref())?;
self.dleq.write(writer)
}
}
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug)]
struct Interim {
p: Scalar,
c: Scalar,
clsag: Clsag,
pseudo_out: EdwardsPoint,
}
/// FROST algorithm for producing a CLSAG signature.
#[allow(non_snake_case)]
#[derive(Clone, Debug)]
pub struct ClsagMultisig {
transcript: RecommendedTranscript,
pub(crate) H: EdwardsPoint,
  // Merged here as CLSAG needs it; passing it would be a mess, yet having it beforehand requires
  // an extra round
image: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
msg: Option<[u8; 32]>,
interim: Option<Interim>,
}
impl ClsagMultisig {
pub fn new(
transcript: RecommendedTranscript,
output_key: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
) -> ClsagMultisig {
ClsagMultisig {
transcript,
H: hash_to_point(&output_key),
image: EdwardsPoint::identity(),
details,
msg: None,
interim: None,
}
}
fn input(&self) -> ClsagInput {
(*self.details.read().unwrap()).as_ref().unwrap().input.clone()
}
fn mask(&self) -> Scalar {
(*self.details.read().unwrap()).as_ref().unwrap().mask
}
}
pub(crate) fn add_key_image_share(
image: &mut EdwardsPoint,
generator: EdwardsPoint,
offset: Scalar,
included: &[Participant],
participant: Participant,
share: EdwardsPoint,
) {
if image.is_identity().into() {
*image = generator * offset;
}
*image += share * lagrange::<dfg::Scalar>(participant, included).0;
}
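// Accumulated across all included participants, the image interpolates to
//   I = offset * Hp(K) + sum_l lagrange(l, included) * share_l
// where K is the output key (whose hash-to-point is the generator passed in)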
impl Algorithm<Ed25519> for ClsagMultisig {
type Transcript = RecommendedTranscript;
type Addendum = ClsagAddendum;
type Signature = (Clsag, EdwardsPoint);
fn nonces(&self) -> Vec<Vec<dfg::EdwardsPoint>> {
vec![vec![dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)]]
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Ed25519>,
) -> ClsagAddendum {
ClsagAddendum {
key_image: dfg::EdwardsPoint(self.H) * keys.secret_share().deref(),
dleq: DLEqProof::prove(
rng,
        // This doesn't take in a larger transcript object due to how it's used:
        // every prover would immediately write their own DLEq proof, yet they can only do so in
        // the proper order if they want to reach consensus.
        // It'd be a poor API to have CLSAG define a new transcript solely to pass here, just to
        // try to merge it later in some form, when it should instead just merge xH (as it does)
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
keys.secret_share(),
),
}
}
fn read_addendum<R: Read>(&self, reader: &mut R) -> io::Result<ClsagAddendum> {
let mut bytes = [0; 32];
reader.read_exact(&mut bytes)?;
// dfg ensures the point is torsion free
let xH = Option::<dfg::EdwardsPoint>::from(dfg::EdwardsPoint::from_bytes(&bytes))
.ok_or_else(|| io::Error::other("invalid key image"))?;
// Ensure this is a canonical point
if xH.to_bytes() != bytes {
Err(io::Error::other("non-canonical key image"))?;
}
Ok(ClsagAddendum { key_image: xH, dleq: DLEqProof::<dfg::EdwardsPoint>::read(reader)? })
}
fn process_addendum(
&mut self,
view: &ThresholdView<Ed25519>,
l: Participant,
addendum: ClsagAddendum,
) -> Result<(), FrostError> {
if self.image.is_identity().into() {
self.transcript.domain_separate(b"CLSAG");
self.input().transcript(&mut self.transcript);
self.transcript.append_message(b"mask", self.mask().to_bytes());
}
self.transcript.append_message(b"participant", l.to_bytes());
addendum
.dleq
.verify(
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
&[view.original_verification_share(l), addendum.key_image],
)
.map_err(|_| FrostError::InvalidPreprocess(l))?;
self.transcript.append_message(b"key_image_share", addendum.key_image.compress().to_bytes());
add_key_image_share(
&mut self.image,
self.H,
view.offset().0,
view.included(),
l,
addendum.key_image.0,
);
Ok(())
}
fn transcript(&mut self) -> &mut Self::Transcript {
&mut self.transcript
}
fn sign_share(
&mut self,
view: &ThresholdView<Ed25519>,
nonce_sums: &[Vec<dfg::EdwardsPoint>],
nonces: Vec<Zeroizing<dfg::Scalar>>,
msg: &[u8],
) -> dfg::Scalar {
    // Use the transcript to get a seeded random number generator
    // The transcript contains private data, preventing passive adversaries from recreating this
    // process even if they have access to the commitments (specifically, the ring index being
    // signed for, along with the mask, which would require knowing not just the shared keys yet
    // also the input commitment masks)
let mut rng = ChaCha20Rng::from_seed(self.transcript.rng_seed(b"decoy_responses"));
self.msg = Some(msg.try_into().expect("CLSAG message should be 32-bytes"));
#[allow(non_snake_case)]
let (clsag, pseudo_out, p, c) = Clsag::sign_core(
&mut rng,
&self.image,
&self.input(),
self.mask(),
self.msg.as_ref().unwrap(),
nonce_sums[0][0].0,
nonce_sums[0][1].0,
);
self.interim = Some(Interim { p, c, clsag, pseudo_out });
(-(dfg::Scalar(p) * view.secret_share().deref())) + nonces[0].deref()
}
#[must_use]
fn verify(
&self,
_: dfg::EdwardsPoint,
_: &[Vec<dfg::EdwardsPoint>],
sum: dfg::Scalar,
) -> Option<Self::Signature> {
let interim = self.interim.as_ref().unwrap();
let mut clsag = interim.clsag.clone();
clsag.s[usize::from(self.input().decoys.i)] = sum.0 - interim.c;
if clsag
.verify(
&self.input().decoys.ring,
&self.image,
&interim.pseudo_out,
self.msg.as_ref().unwrap(),
)
.is_ok()
{
return Some((clsag, interim.pseudo_out));
}
None
}
fn verify_share(
&self,
verification_share: dfg::EdwardsPoint,
nonces: &[Vec<dfg::EdwardsPoint>],
share: dfg::Scalar,
) -> Result<Vec<(dfg::Scalar, dfg::EdwardsPoint)>, ()> {
let interim = self.interim.as_ref().unwrap();
Ok(vec![
(share, dfg::EdwardsPoint::generator()),
(dfg::Scalar(interim.p), verification_share),
(-dfg::Scalar::ONE, nonces[0][0]),
])
}
}


@@ -1,8 +0,0 @@
use curve25519_dalek::edwards::EdwardsPoint;
pub use monero_generators::{hash_to_point as raw_hash_to_point};
/// Monero's hash-to-point function, known as `ge_fromfe_frombytes_vartime`.
pub fn hash_to_point(key: &EdwardsPoint) -> EdwardsPoint {
raw_hash_to_point(key.compress().to_bytes())
}


@@ -1,400 +0,0 @@
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub(crate) mod hash_to_point;
pub use hash_to_point::{raw_hash_to_point, hash_to_point};
/// MLSAG struct, along with verifying functionality.
pub mod mlsag;
/// CLSAG struct, along with signing and verifying functionality.
pub mod clsag;
/// BorromeanRange struct, along with verifying functionality.
pub mod borromean;
/// Bulletproofs(+) structs, along with proving and verifying functionality.
pub mod bulletproofs;
use crate::{
Protocol,
serialize::*,
ringct::{mlsag::Mlsag, clsag::Clsag, borromean::BorromeanRange, bulletproofs::Bulletproofs},
};
/// Generate a key image for a given key. Defined as `x * hash_to_point(xG)`.
pub fn generate_key_image(secret: &Zeroizing<Scalar>) -> EdwardsPoint {
hash_to_point(&(ED25519_BASEPOINT_TABLE * secret.deref())) * secret.deref()
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EncryptedAmount {
Original { mask: [u8; 32], amount: [u8; 32] },
Compact { amount: [u8; 8] },
}
impl EncryptedAmount {
pub fn read<R: Read>(compact: bool, r: &mut R) -> io::Result<EncryptedAmount> {
Ok(if !compact {
EncryptedAmount::Original { mask: read_bytes(r)?, amount: read_bytes(r)? }
} else {
EncryptedAmount::Compact { amount: read_bytes(r)? }
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
EncryptedAmount::Original { mask, amount } => {
w.write_all(mask)?;
w.write_all(amount)
}
EncryptedAmount::Compact { amount } => w.write_all(amount),
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum RctType {
/// No RCT proofs.
Null,
/// One MLSAG for multiple inputs and Borromean range proofs (RCTTypeFull).
MlsagAggregate,
  /// One MLSAG for each input and a Borromean range proof (RCTTypeSimple).
  MlsagIndividual,
  /// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
  Bulletproofs,
/// One MLSAG for each input and a Bulletproof, yet starting to use EncryptedAmount::Compact
/// (RCTTypeBulletproof2).
BulletproofsCompactAmount,
/// One CLSAG for each input and a Bulletproof (RCTTypeCLSAG).
Clsag,
/// One CLSAG for each input and a Bulletproof+ (RCTTypeBulletproofPlus).
BulletproofsPlus,
}
impl RctType {
pub fn to_byte(self) -> u8 {
match self {
RctType::Null => 0,
RctType::MlsagAggregate => 1,
RctType::MlsagIndividual => 2,
RctType::Bulletproofs => 3,
RctType::BulletproofsCompactAmount => 4,
RctType::Clsag => 5,
RctType::BulletproofsPlus => 6,
}
}
pub fn from_byte(byte: u8) -> Option<Self> {
Some(match byte {
0 => RctType::Null,
1 => RctType::MlsagAggregate,
2 => RctType::MlsagIndividual,
3 => RctType::Bulletproofs,
4 => RctType::BulletproofsCompactAmount,
5 => RctType::Clsag,
6 => RctType::BulletproofsPlus,
_ => None?,
})
}
pub fn compact_encrypted_amounts(&self) -> bool {
match self {
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs => false,
RctType::BulletproofsCompactAmount | RctType::Clsag | RctType::BulletproofsPlus => true,
}
}
}
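// A quick sketch of the byte mapping above:
//   assert_eq!(RctType::Clsag.to_byte(), 5);
//   assert_eq!(RctType::from_byte(5), Some(RctType::Clsag));
//   assert_eq!(RctType::from_byte(7), None); // 7 is unassigned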
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctBase {
pub fee: u64,
pub pseudo_outs: Vec<EdwardsPoint>,
pub encrypted_amounts: Vec<EncryptedAmount>,
pub commitments: Vec<EdwardsPoint>,
}
impl RctBase {
pub(crate) fn fee_weight(outputs: usize, fee: u64) -> usize {
// 1 byte for the RCT signature type
1 + (outputs * (8 + 32)) + varint_len(fee)
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
w.write_all(&[rct_type.to_byte()])?;
match rct_type {
RctType::Null => Ok(()),
_ => {
write_varint(&self.fee, w)?;
if rct_type == RctType::MlsagIndividual {
write_raw_vec(write_point, &self.pseudo_outs, w)?;
}
for encrypted_amount in &self.encrypted_amounts {
encrypted_amount.write(w)?;
}
write_raw_vec(write_point, &self.commitments, w)
}
}
}
pub fn read<R: Read>(inputs: usize, outputs: usize, r: &mut R) -> io::Result<(RctBase, RctType)> {
let rct_type =
RctType::from_byte(read_byte(r)?).ok_or_else(|| io::Error::other("invalid RCT type"))?;
match rct_type {
RctType::Null | RctType::MlsagAggregate | RctType::MlsagIndividual => {}
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
if outputs == 0 {
// Because the Bulletproofs(+) layout must be canonical, there must be 1 Bulletproof if
// Bulletproofs are in use
          // If there are Bulletproofs, there must be a matching number of outputs, implicitly
// banning 0 outputs
// Since HF 12 (CLSAG being 13), a 2-output minimum has also been enforced
Err(io::Error::other("RCT with Bulletproofs(+) had 0 outputs"))?;
}
}
}
Ok((
if rct_type == RctType::Null {
RctBase { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] }
} else {
RctBase {
fee: read_varint(r)?,
pseudo_outs: if rct_type == RctType::MlsagIndividual {
read_raw_vec(read_point, inputs, r)?
} else {
vec![]
},
encrypted_amounts: (0 .. outputs)
.map(|_| EncryptedAmount::read(rct_type.compact_encrypted_amounts(), r))
.collect::<Result<_, _>>()?,
commitments: read_raw_vec(read_point, outputs, r)?,
}
},
rct_type,
))
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RctPrunable {
Null,
AggregateMlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsag: Mlsag,
},
MlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsags: Vec<Mlsag>,
},
MlsagBulletproofs {
bulletproofs: Bulletproofs,
mlsags: Vec<Mlsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
Clsag {
bulletproofs: Bulletproofs,
clsags: Vec<Clsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
}
impl RctPrunable {
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
// 1 byte for number of BPs (technically a VarInt, yet there's always just zero or one)
1 + Bulletproofs::fee_weight(protocol.bp_plus(), outputs) +
(inputs * (Clsag::fee_weight(protocol.ring_len()) + 32))
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
match self {
RctPrunable::Null => Ok(()),
RctPrunable::AggregateMlsagBorromean { borromean, mlsag } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
mlsag.write(w)
}
RctPrunable::MlsagBorromean { borromean, mlsags } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
write_raw_vec(Mlsag::write, mlsags, w)
}
RctPrunable::MlsagBulletproofs { bulletproofs, mlsags, pseudo_outs } => {
if rct_type == RctType::Bulletproofs {
w.write_all(&1u32.to_le_bytes())?;
} else {
w.write_all(&[1])?;
}
bulletproofs.write(w)?;
write_raw_vec(Mlsag::write, mlsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
w.write_all(&[1])?;
bulletproofs.write(w)?;
write_raw_vec(Clsag::write, clsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
}
}
pub fn serialize(&self, rct_type: RctType) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized, rct_type).unwrap();
serialized
}
pub fn read<R: Read>(
rct_type: RctType,
ring_length: usize,
inputs: usize,
outputs: usize,
r: &mut R,
) -> io::Result<RctPrunable> {
// While we generally don't bother with misc consensus checks, this affects the safety of
// the below defined rct_type function
// The exact line preventing zero-input transactions is:
// https://github.com/monero-project/monero/blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/ringct/rctSigs.cpp#L609
// And then for RctNull, that's only allowed for miner TXs which require one input of
// Input::Gen
if inputs == 0 {
Err(io::Error::other("transaction had no inputs"))?;
}
Ok(match rct_type {
RctType::Null => RctPrunable::Null,
RctType::MlsagAggregate => RctPrunable::AggregateMlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsag: Mlsag::read(ring_length, inputs + 1, r)?,
},
RctType::MlsagIndividual => RctPrunable::MlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsags: (0 .. inputs).map(|_| Mlsag::read(ring_length, 2, r)).collect::<Result<_, _>>()?,
},
RctType::Bulletproofs | RctType::BulletproofsCompactAmount => {
RctPrunable::MlsagBulletproofs {
bulletproofs: {
if (if rct_type == RctType::Bulletproofs {
u64::from(read_u32(r)?)
} else {
read_varint(r)?
}) != 1
{
Err(io::Error::other("n bulletproofs instead of one"))?;
}
Bulletproofs::read(r)?
},
mlsags: (0 .. inputs)
.map(|_| Mlsag::read(ring_length, 2, r))
.collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, inputs, r)?,
}
}
RctType::Clsag | RctType::BulletproofsPlus => RctPrunable::Clsag {
bulletproofs: {
if read_varint::<_, u64>(r)? != 1 {
Err(io::Error::other("n bulletproofs instead of one"))?;
}
(if rct_type == RctType::Clsag { Bulletproofs::read } else { Bulletproofs::read_plus })(
r,
)?
},
clsags: (0 .. inputs).map(|_| Clsag::read(ring_length, r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, inputs, r)?,
},
})
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
RctPrunable::Null => panic!("Serializing RctPrunable::Null for a signature"),
RctPrunable::AggregateMlsagBorromean { borromean, .. } |
RctPrunable::MlsagBorromean { borromean, .. } => {
borromean.iter().try_for_each(|rs| rs.write(w))
}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } |
RctPrunable::Clsag { bulletproofs, .. } => bulletproofs.signature_write(w),
}
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctSignatures {
pub base: RctBase,
pub prunable: RctPrunable,
}
impl RctSignatures {
/// RctType for a given RctSignatures struct.
pub fn rct_type(&self) -> RctType {
match &self.prunable {
RctPrunable::Null => RctType::Null,
RctPrunable::AggregateMlsagBorromean { .. } => RctType::MlsagAggregate,
RctPrunable::MlsagBorromean { .. } => RctType::MlsagIndividual,
      // RctBase ensures there's at least one output, making the following
      // inferences guaranteed, and the expects below unreachable, on any valid RctSignatures
RctPrunable::MlsagBulletproofs { .. } => {
if matches!(
self
.base
.encrypted_amounts
.first()
.expect("MLSAG with Bulletproofs didn't have any outputs"),
EncryptedAmount::Original { .. }
) {
RctType::Bulletproofs
} else {
RctType::BulletproofsCompactAmount
}
}
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs, Bulletproofs::Original { .. }) {
RctType::Clsag
} else {
RctType::BulletproofsPlus
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize, fee: u64) -> usize {
RctBase::fee_weight(outputs, fee) + RctPrunable::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
let rct_type = self.rct_type();
self.base.write(w, rct_type)?;
self.prunable.write(w, rct_type)
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(
ring_length: usize,
inputs: usize,
outputs: usize,
r: &mut R,
) -> io::Result<RctSignatures> {
let base = RctBase::read(inputs, outputs, r)?;
Ok(RctSignatures {
base: base.0,
prunable: RctPrunable::read(base.1, ring_length, inputs, outputs, r)?,
})
}
}


@@ -1,761 +0,0 @@
use core::fmt::Debug;
#[cfg(not(feature = "std"))]
use alloc::boxed::Box;
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use async_trait::async_trait;
use curve25519_dalek::edwards::EdwardsPoint;
use monero_generators::decompress_point;
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use crate::{
Protocol,
serialize::*,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::{FeePriority, Fee},
};
#[cfg(feature = "http-rpc")]
mod http;
#[cfg(feature = "http-rpc")]
pub use http::*;
// Number of blocks the fee estimate will be valid for
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L121
const GRACE_BLOCKS_FOR_FEE_ESTIMATE: u64 = 10;
#[derive(Deserialize, Debug)]
pub struct EmptyResponse {}
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Deserialize, Debug)]
pub struct OutputResponse {
pub height: usize,
pub unlocked: bool,
key: String,
mask: String,
txid: String,
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum RpcError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("connection error ({0})"))]
ConnectionError(String),
#[cfg_attr(feature = "std", error("invalid node ({0})"))]
InvalidNode(String),
#[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))]
UnsupportedProtocol(usize),
#[cfg_attr(feature = "std", error("transactions not found"))]
TransactionsNotFound(Vec<[u8; 32]>),
#[cfg_attr(feature = "std", error("invalid point ({0})"))]
InvalidPoint(String),
#[cfg_attr(feature = "std", error("pruned transaction"))]
PrunedTransaction,
#[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))]
InvalidTransaction([u8; 32]),
#[cfg_attr(feature = "std", error("unexpected fee response"))]
InvalidFee,
#[cfg_attr(feature = "std", error("invalid priority"))]
InvalidPriority,
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode("expected hex wasn't hex".to_string()))
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode("hash wasn't 32-bytes".to_string()))
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
decompress_point(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
// Read an EPEE VarInt, distinct from the VarInts used throughout the rest of the protocol
fn read_epee_vi<R: io::Read>(reader: &mut R) -> io::Result<u64> {
let vi_start = read_byte(reader)?;
let len = match vi_start & 0b11 {
0 => 1,
1 => 2,
2 => 4,
3 => 8,
_ => unreachable!(),
};
let mut vi = u64::from(vi_start >> 2);
for i in 1 .. len {
vi |= u64::from(read_byte(reader)?) << (((i - 1) * 8) + 6);
}
Ok(vi)
}
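// A worked example: the two low bits of the first byte select a 1/2/4/8-byte encoding, with
// the value stored in the remaining bits, little-endian. [0b0000_0101, 0b0000_0001] selects
// the 2-byte encoding (low bits 0b01), contributing 0b01 from the first byte plus the second
// byte shifted past the 6 bits already consumed:
//   let mut bytes: &[u8] = &[0b0000_0101, 0b0000_0001];
//   assert_eq!(read_epee_vi(&mut bytes).unwrap(), (1 << 6) | 1); // 65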
#[async_trait]
pub trait RpcConnection: Clone + Debug {
/// Perform a POST request to the specified route with the specified body.
///
/// The implementor is left to handle anything such as authentication.
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError>;
}
// TODO: Make these provided methods of RpcConnection?
#[derive(Clone, Debug)]
pub struct Rpc<R: RpcConnection>(R);
impl<R: RpcConnection> Rpc<R> {
/// Perform a RPC call to the specified route with the provided parameters.
///
  /// This is NOT a JSON-RPC call. JSON-RPC calls use the "json_rpc" route and are available
  /// via `json_rpc_call`.
pub async fn rpc_call<Params: Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
route: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
let res = self
.0
.post(
route,
if let Some(params) = params {
serde_json::to_string(&params).unwrap().into_bytes()
} else {
vec![]
},
)
.await?;
let res_str = std_shims::str::from_utf8(&res)
.map_err(|_| RpcError::InvalidNode("response wasn't utf-8".to_string()))?;
serde_json::from_str(res_str)
.map_err(|_| RpcError::InvalidNode(format!("response wasn't json: {res_str}")))
}
  /// Perform a JSON-RPC call to the specified method with the provided parameters.
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified route with the provided parameters.
pub async fn bin_call(&self, route: &str, params: Vec<u8>) -> Result<Vec<u8>, RpcError> {
self.0.post(route, params).await
}
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
protocol => Err(RpcError::UnsupportedProtocol(protocol))?,
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
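  // A hypothetical usage sketch; `HttpRpc` stands in for whatever `RpcConnection` implementor
  // the "http-rpc" feature exports, and its constructor here is an assumption:
  //   let rpc = HttpRpc::new("http://localhost:18081".to_string())?;
  //   let height = rpc.get_height().await?;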
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let mut hashes_hex = hashes.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = Vec::with_capacity(hashes.len());
while !hashes_hex.is_empty() {
      // Monero errors if more than 100 are requested, unless using a non-restricted RPC
const TXS_PER_REQUEST: usize = 100;
let this_count = TXS_PER_REQUEST.min(hashes_hex.len());
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. this_count).collect::<Vec<_>>(),
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
all_txs.extend(txs.txs);
}
all_txs
.iter()
.enumerate()
.map(|(i, res)| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.first() {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
// This does run a few keccak256 hashes, which is pointless if the node is trusted
// In exchange, this provides resilience against invalid/malicious nodes
if tx.hash() != hashes[i] {
Err(RpcError::InvalidNode(
"replied with transaction wasn't the requested transaction".to_string(),
))?;
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
/// Get the hash of a block from the node by the block's number.
/// This function does not verify the returned block hash is actually for the number in question.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
hash_hex(&header.block_header.hash)
}
/// Get a block from the node by its hash.
/// This function does not verify the returned block actually has the hash in question.
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block".to_string()))?;
if block.hash() != hash {
Err(RpcError::InvalidNode("different block than requested (hash)".to_string()))?;
}
Ok(block)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "height": number }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block".to_string()))?;
// Make sure this is actually the block for this number
match block.miner_tx.prefix.inputs.first() {
Some(Input::Gen(actual)) => {
if usize::try_from(*actual).unwrap() == number {
Ok(block)
} else {
Err(RpcError::InvalidNode("different block than requested (number)".to_string()))
}
}
_ => Err(RpcError::InvalidNode(
"block's miner_tx didn't have an input of kind Input::Gen".to_string(),
)),
}
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
/*
TODO: Use these when a suitable epee serde lib exists
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
}
*/
// Given the immaturity of Rust EPEE libraries, this is a homegrown implementation, only
// validated to work against this specific function
// Header for EPEE, an 8-byte magic and a version
const EPEE_HEADER: &[u8] = b"\x01\x11\x01\x01\x01\x01\x02\x01\x01";
let mut request = EPEE_HEADER.to_vec();
// Number of fields (shifted over 2 bits as the 2 LSBs are reserved for metadata)
request.push(1 << 2);
// Length of field name
request.push(4);
// Field name
request.extend(b"txid");
// Type of field
request.push(10);
// Length of string, since this byte array is technically a string
request.push(32 << 2);
// The "string"
request.extend(hash);
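// The assembled request body is, following the EPEE header:
//   0x04            one field (1 << 2)
//   0x04            field name length of 4
//   b"txid"         the field name
//   0x0a            type 10, a string
//   0x80            string length of 32, as an EPEE VarInt (32 << 2)
//   <32-byte hash>  the "string" itself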
let indexes_buf = self.bin_call("get_o_indexes.bin", request).await?;
let mut indexes: &[u8] = indexes_buf.as_ref();
(|| {
let mut res = None;
let mut is_okay = false;
if read_bytes::<_, { EPEE_HEADER.len() }>(&mut indexes)? != EPEE_HEADER {
Err(io::Error::other("invalid header"))?;
}
let read_object = |reader: &mut &[u8]| -> io::Result<Vec<u64>> {
let fields = read_byte(reader)? >> 2;
for _ in 0 .. fields {
let name_len = read_byte(reader)?;
let name = read_raw_vec(read_byte, name_len.into(), reader)?;
let type_with_array_flag = read_byte(reader)?;
let kind = type_with_array_flag & (!0x80);
let iters = if type_with_array_flag != kind { read_epee_vi(reader)? } else { 1 };
if (&name == b"o_indexes") && (kind != 5) {
Err(io::Error::other("o_indexes weren't u64s"))?;
}
let f = match kind {
// i64
1 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// i32
2 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// i16
3 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// i8
4 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// u64
5 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// u32
6 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// u16
7 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// u8
8 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// double
9 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// string, or any collection of bytes
10 => |reader: &mut &[u8]| {
let len = read_epee_vi(reader)?;
read_raw_vec(
read_byte,
len.try_into().map_err(|_| io::Error::other("u64 length exceeded usize"))?,
reader,
)
},
// bool
11 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// object, errors here as it shouldn't be used on this call
12 => {
|_: &mut &[u8]| Err(io::Error::other("node used object in reply to get_o_indexes"))
}
// array, so far unused
13 => |_: &mut &[u8]| Err(io::Error::other("node used the unused array type")),
_ => |_: &mut &[u8]| Err(io::Error::other("node used an invalid type")),
};
let mut bytes_res = vec![];
for _ in 0 .. iters {
bytes_res.push(f(reader)?);
}
let mut actual_res = Vec::with_capacity(bytes_res.len());
match name.as_slice() {
b"o_indexes" => {
for o_index in bytes_res {
actual_res.push(u64::from_le_bytes(
o_index
.try_into()
.map_err(|_| io::Error::other("node didn't provide 8 bytes for a u64"))?,
));
}
res = Some(actual_res);
}
b"status" => {
if bytes_res
.first()
.ok_or_else(|| io::Error::other("status wasn't a string"))?
.as_slice() !=
b"OK"
{
// TODO: Better handle non-OK responses
Err(io::Error::other("response wasn't OK"))?;
}
is_okay = true;
}
_ => continue,
}
if is_okay && res.is_some() {
break;
}
}
// Didn't return a response with a status
// (if the status wasn't okay, we would've already errored)
if !is_okay {
Err(io::Error::other("response didn't contain a status"))?;
}
// If the Vec was empty, it would've been omitted, hence the unwrap_or
// TODO: Test against a 0-output TX, such as the ones found in block 202612
Ok(res.unwrap_or(vec![]))
};
read_object(&mut indexes)
})()
.map_err(|_| RpcError::InvalidNode("invalid binary response".to_string()))
}
/// Get the output distribution, from the `from` height to the `to` height (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
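// Since "cumulative" is set, entry i of the result is the total amount of RingCT outputs
// created by the end of block from + i. A hypothetical conversion to per-block counts (not in
// the original file), yielding counts for blocks from + 1 ..= to, would be:
//   let per_block: Vec<u64> = distribution.windows(2).map(|w| w[1] - w[0]).collect();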
/// Get the specified outputs from the RingCT (zero-amount) pool
pub async fn get_outs(&self, indexes: &[u64]) -> Result<Vec<OutputResponse>, RpcError> {
#[derive(Deserialize, Debug)]
struct OutsResponse {
status: String,
outs: Vec<OutputResponse>,
}
let res: OutsResponse = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
if res.status != "OK" {
Err(RpcError::InvalidNode("bad response to get_outs".to_string()))?;
}
Ok(res.outs)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if their
/// timelock has been satisfied.
///
/// The timelock being satisfied is distinct from being free of the 10-block lock applied to all
/// Monero transactions.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
fingerprintable_canonical: bool,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
let outs: Vec<OutputResponse> = self.get_outs(indexes).await?;
// Only need to fetch txs to do canonical check on timelock
let txs = if fingerprintable_canonical {
self
.get_transactions(
&outs.iter().map(|out| hash_hex(&out.txid)).collect::<Result<Vec<_>, _>>()?,
)
.await?
} else {
Vec::new()
};
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.iter()
.enumerate()
.map(|(i, out)| {
// Allow keys to be invalid, though if they are, return None to trigger selection of a new
// decoy. Only valid keys can be used in CLSAG proofs, hence the need for re-selection, yet
// invalid keys may honestly exist on the blockchain.
// Only a recent hard fork checked that output keys were valid points.
let Some(key) = decompress_point(
rpc_hex(&out.key)?
.try_into()
.map_err(|_| RpcError::InvalidNode("non-32-byte point".to_string()))?,
) else {
return Ok(None);
};
Ok(Some([key, rpc_point(&out.mask)?]).filter(|_| {
if fingerprintable_canonical {
Timelock::Block(height) >= txs[i].prefix.timelock
} else {
out.unlocked
}
}))
})
.collect()
}
async fn get_fee_v14(&self, priority: FeePriority) -> Result<Fee, RpcError> {
#[derive(Deserialize, Debug)]
struct FeeResponseV14 {
status: String,
fee: u64,
quantization_mask: u64,
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7569-L7584
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7660-L7661
let priority_idx =
usize::try_from(if priority.fee_priority() == 0 { 1 } else { priority.fee_priority() - 1 })
.map_err(|_| RpcError::InvalidPriority)?;
let multipliers = [1, 5, 25, 1000];
if priority_idx >= multipliers.len() {
// Though not an RPC error, it seems sensible to treat this as such
Err(RpcError::InvalidPriority)?;
}
let fee_multiplier = multipliers[priority_idx];
let res: FeeResponseV14 = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
if res.status != "OK" {
Err(RpcError::InvalidFee)?;
}
Ok(Fee { per_weight: res.fee * fee_multiplier, mask: res.quantization_mask })
}
/// Get the currently estimated fee from the node.
///
/// This may be manipulated to unsafe levels and MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self, protocol: Protocol, priority: FeePriority) -> Result<Fee, RpcError> {
// TODO: Implement wallet2's adjust_priority which by default automatically uses a lower
// priority than provided depending on the backlog in the pool
if protocol.v16_fee() {
#[derive(Deserialize, Debug)]
struct FeeResponse {
status: String,
fees: Vec<u64>,
quantization_mask: u64,
}
let res: FeeResponse = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7615-L7620
let priority_idx = usize::try_from(if priority.fee_priority() >= 4 {
3
} else {
priority.fee_priority().saturating_sub(1)
})
.map_err(|_| RpcError::InvalidPriority)?;
if res.status != "OK" {
Err(RpcError::InvalidFee)
} else if priority_idx >= res.fees.len() {
Err(RpcError::InvalidPriority)
} else {
Ok(Fee { per_weight: res.fees[priority_idx], mask: res.quantization_mask })
}
} else {
self.get_fee_v14(priority).await
}
}
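// A hedged sketch of deriving a final fee from the returned Fee (hypothetical code; the exact
// rounding lives on the Fee type and follows wallet2, quantizing the weight-proportional fee up
// to a multiple of the mask):
//   let fee = (per_weight * u64::try_from(weight).unwrap()).div_ceil(mask) * mask;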
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(tx.serialize()) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
// TODO: Take &Address, not &str?
pub async fn generate_blocks(
&self,
address: &str,
block_count: usize,
) -> Result<(Vec<[u8; 32]>, usize), RpcError> {
#[derive(Debug, Deserialize)]
struct BlocksResponse {
blocks: Vec<String>,
height: usize,
}
let res = self
.json_rpc_call::<BlocksResponse>(
"generateblocks",
Some(json!({
"wallet_address": address,
"amount_of_blocks": block_count
})),
)
.await?;
let mut blocks = Vec::with_capacity(res.blocks.len());
for block in res.blocks {
blocks.push(hash_hex(&block)?);
}
Ok((blocks, res.height))
}
}

@@ -1,172 +0,0 @@
use core::fmt::Debug;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use monero_generators::decompress_point;
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
mod sealed {
pub trait VarInt: TryInto<u64> + TryFrom<u64> + Copy {
const BITS: usize;
}
impl VarInt for u8 {
const BITS: usize = 8;
}
impl VarInt for u32 {
const BITS: usize = 32;
}
impl VarInt for u64 {
const BITS: usize = 64;
}
impl VarInt for usize {
const BITS: usize = core::mem::size_of::<usize>() * 8;
}
}
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn varint_len<U: sealed::VarInt>(varint: U) -> usize {
let varint_u64: u64 = varint.try_into().map_err(|_| "varint exceeded u64").unwrap();
((usize::try_from(u64::BITS - varint_u64.leading_zeros()).unwrap().saturating_sub(1)) / 7) + 1
}
pub(crate) fn write_byte<W: Write>(byte: &u8, w: &mut W) -> io::Result<()> {
w.write_all(&[*byte])
}
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn write_varint<W: Write, U: sealed::VarInt>(varint: &U, w: &mut W) -> io::Result<()> {
let mut varint: u64 = (*varint).try_into().map_err(|_| "varint exceeded u64").unwrap();
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
write_byte(&b, w)?;
varint != 0
} {}
Ok(())
}
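// A hedged round-trip illustration (hypothetical test, not part of the original file): 7 bits
// per byte, little-endian, with the top bit as a continuation flag, so 300 serializes to
// [0xAC, 0x02] and has a varint_len of 2.
#[cfg(test)]
mod varint_tests {
use super::*;
#[test]
fn round_trip() {
let mut buf = vec![];
write_varint(&300u64, &mut buf).unwrap();
assert_eq!(buf, [0xAC, 0x02]);
assert_eq!(varint_len(300u64), 2);
assert_eq!(read_varint::<_, u64>(&mut buf.as_slice()).unwrap(), 300);
}
}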
pub(crate) fn write_scalar<W: Write>(scalar: &Scalar, w: &mut W) -> io::Result<()> {
w.write_all(&scalar.to_bytes())
}
pub(crate) fn write_point<W: Write>(point: &EdwardsPoint, w: &mut W) -> io::Result<()> {
w.write_all(&point.compress().to_bytes())
}
pub(crate) fn write_raw_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
for value in values {
f(value, w)?;
}
Ok(())
}
pub(crate) fn write_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
write_varint(&values.len(), w)?;
write_raw_vec(f, values, w)
}
pub(crate) fn read_bytes<R: Read, const N: usize>(r: &mut R) -> io::Result<[u8; N]> {
let mut res = [0; N];
r.read_exact(&mut res)?;
Ok(res)
}
pub(crate) fn read_byte<R: Read>(r: &mut R) -> io::Result<u8> {
Ok(read_bytes::<_, 1>(r)?[0])
}
pub(crate) fn read_u16<R: Read>(r: &mut R) -> io::Result<u16> {
read_bytes(r).map(u16::from_le_bytes)
}
pub(crate) fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
read_bytes(r).map(u32::from_le_bytes)
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
}
pub(crate) fn read_varint<R: Read, U: sealed::VarInt>(r: &mut R) -> io::Result<U> {
let mut bits = 0;
let mut res = 0;
while {
let b = read_byte(r)?;
if (bits != 0) && (b == 0) {
Err(io::Error::other("non-canonical varint"))?;
}
if ((bits + 7) >= U::BITS) && (b >= (1 << (U::BITS - bits))) {
Err(io::Error::other("varint overflow"))?;
}
res += u64::from(b & (!VARINT_CONTINUATION_MASK)) << bits;
bits += 7;
b & VARINT_CONTINUATION_MASK == VARINT_CONTINUATION_MASK
} {}
res.try_into().map_err(|_| io::Error::other("VarInt does not fit into integer type"))
}
// All scalar fields supported by monero-serai are checked to be canonical for valid transactions
// While from_bytes_mod_order would be more flexible, it's not currently needed and would be
// inaccurate to include now. While casting a wide net may be preferable, it'd also be inaccurate
// for now. There's also further edge cases as noted by
// https://github.com/monero-project/monero/issues/8438, where some scalars had an archaic
// reduction applied
pub(crate) fn read_scalar<R: Read>(r: &mut R) -> io::Result<Scalar> {
Option::from(Scalar::from_canonical_bytes(read_bytes(r)?))
.ok_or_else(|| io::Error::other("unreduced scalar"))
}
pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
let bytes = read_bytes(r)?;
decompress_point(bytes).ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_torsion_free_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
read_point(r)
.ok()
.filter(EdwardsPoint::is_torsion_free)
.ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
len: usize,
r: &mut R,
) -> io::Result<Vec<T>> {
let mut res = vec![];
for _ in 0 .. len {
res.push(f(r)?);
}
Ok(res)
}
pub(crate) fn read_array<R: Read, T: Debug, F: Fn(&mut R) -> io::Result<T>, const N: usize>(
f: F,
r: &mut R,
) -> io::Result<[T; N]> {
read_raw_vec(f, N, r).map(|vec| vec.try_into().unwrap())
}
pub(crate) fn read_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
r: &mut R,
) -> io::Result<Vec<T>> {
read_raw_vec(f, read_varint(r)?, r)
}

@@ -1,95 +0,0 @@
use hex_literal::hex;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use monero_generators::decompress_point;
use multiexp::BatchVerifier;
use crate::{
Commitment, random_scalar,
ringct::bulletproofs::{Bulletproofs, original::OriginalStruct},
};
mod plus;
#[test]
fn bulletproofs_vector() {
let scalar = |scalar| Scalar::from_canonical_bytes(scalar).unwrap();
let point = |point| decompress_point(point).unwrap();
// Generated from Monero
assert!(Bulletproofs::Original(OriginalStruct {
A: point(hex!("ef32c0b9551b804decdcb107eb22aa715b7ce259bf3c5cac20e24dfa6b28ac71")),
S: point(hex!("e1285960861783574ee2b689ae53622834eb0b035d6943103f960cd23e063fa0")),
T1: point(hex!("4ea07735f184ba159d0e0eb662bac8cde3eb7d39f31e567b0fbda3aa23fe5620")),
T2: point(hex!("b8390aa4b60b255630d40e592f55ec6b7ab5e3a96bfcdcd6f1cd1d2fc95f441e")),
taux: scalar(hex!("5957dba8ea9afb23d6e81cc048a92f2d502c10c749dc1b2bd148ae8d41ec7107")),
mu: scalar(hex!("923023b234c2e64774b820b4961f7181f6c1dc152c438643e5a25b0bf271bc02")),
L: vec![
point(hex!("c45f656316b9ebf9d357fb6a9f85b5f09e0b991dd50a6e0ae9b02de3946c9d99")),
point(hex!("9304d2bf0f27183a2acc58cc755a0348da11bd345485fda41b872fee89e72aac")),
point(hex!("1bb8b71925d155dd9569f64129ea049d6149fdc4e7a42a86d9478801d922129b")),
point(hex!("5756a7bf887aa72b9a952f92f47182122e7b19d89e5dd434c747492b00e1c6b7")),
point(hex!("6e497c910d102592830555356af5ff8340e8d141e3fb60ea24cfa587e964f07d")),
point(hex!("f4fa3898e7b08e039183d444f3d55040f3c790ed806cb314de49f3068bdbb218")),
point(hex!("0bbc37597c3ead517a3841e159c8b7b79a5ceaee24b2a9a20350127aab428713")),
],
R: vec![
point(hex!("609420ba1702781692e84accfd225adb3d077aedc3cf8125563400466b52dbd9")),
point(hex!("fb4e1d079e7a2b0ec14f7e2a3943bf50b6d60bc346a54fcf562fb234b342abf8")),
point(hex!("6ae3ac97289c48ce95b9c557289e82a34932055f7f5e32720139824fe81b12e5")),
point(hex!("d071cc2ffbdab2d840326ad15f68c01da6482271cae3cf644670d1632f29a15c")),
point(hex!("e52a1754b95e1060589ba7ce0c43d0060820ebfc0d49dc52884bc3c65ad18af5")),
point(hex!("41573b06140108539957df71aceb4b1816d2409ce896659aa5c86f037ca5e851")),
point(hex!("a65970b2cc3c7b08b2b5b739dbc8e71e646783c41c625e2a5b1535e3d2e0f742")),
],
a: scalar(hex!("0077c5383dea44d3cd1bc74849376bd60679612dc4b945255822457fa0c0a209")),
b: scalar(hex!("fe80cf5756473482581e1d38644007793ddc66fdeb9404ec1689a907e4863302")),
t: scalar(hex!("40dfb08e09249040df997851db311bd6827c26e87d6f0f332c55be8eef10e603"))
})
.verify(
&mut OsRng,
&[
// For some reason, these vectors are * INV_EIGHT
point(hex!("8e8f23f315edae4f6c2f948d9a861e0ae32d356b933cd11d2f0e031ac744c41f"))
.mul_by_cofactor(),
point(hex!("2829cbd025aa54cd6e1b59a032564f22f0b2e5627f7f2c4297f90da438b5510f"))
.mul_by_cofactor(),
]
));
}
macro_rules! bulletproofs_tests {
($name: ident, $max: ident, $plus: literal) => {
#[test]
fn $name() {
// Create Bulletproofs for all possible output quantities
let mut verifier = BatchVerifier::new(16);
for i in 1 ..= 16 {
let commitments = (1 ..= i)
.map(|i| Commitment::new(random_scalar(&mut OsRng), u64::try_from(i).unwrap()))
.collect::<Vec<_>>();
let bp = Bulletproofs::prove(&mut OsRng, &commitments, $plus).unwrap();
let commitments = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
assert!(bp.verify(&mut OsRng, &commitments));
assert!(bp.batch_verify(&mut OsRng, &mut verifier, i, &commitments));
}
assert!(verifier.verify_vartime());
}
#[test]
fn $max() {
// Check Bulletproofs errors if we try to prove for too many outputs
let mut commitments = vec![];
for _ in 0 .. 17 {
commitments.push(Commitment::new(Scalar::ZERO, 0));
}
assert!(Bulletproofs::prove(&mut OsRng, &commitments, $plus).is_err());
}
};
}
bulletproofs_tests!(bulletproofs, bulletproofs_max, false);
bulletproofs_tests!(bulletproofs_plus, bulletproofs_plus_max, true);

@@ -1,30 +0,0 @@
use rand_core::{RngCore, OsRng};
use multiexp::BatchVerifier;
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::bulletproofs::plus::aggregate_range_proof::{
AggregateRangeStatement, AggregateRangeWitness,
},
};
#[test]
fn test_aggregate_range_proof() {
let mut verifier = BatchVerifier::new(16);
for m in 1 ..= 16 {
let mut commitments = vec![];
for _ in 0 .. m {
commitments.push(Commitment::new(*Scalar::random(&mut OsRng), OsRng.next_u64()));
}
let commitment_points = commitments.iter().map(|com| EdwardsPoint(com.calculate())).collect();
let statement = AggregateRangeStatement::new(commitment_points).unwrap();
let witness = AggregateRangeWitness::new(&commitments).unwrap();
let proof = statement.clone().prove(&mut OsRng, &witness).unwrap();
statement.verify(&mut OsRng, &mut verifier, (), proof);
}
assert!(verifier.verify_vartime());
}

@@ -1,6 +0,0 @@
mod unreduced_scalar;
mod clsag;
mod bulletproofs;
mod address;
mod seed;
mod extra;

@@ -1,482 +0,0 @@
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use crate::{
hash,
wallet::seed::{
Seed, SeedType, SeedError,
classic::{self, trim_by_lang},
polyseed,
},
};
#[test]
fn test_classic_seed() {
struct Vector {
language: classic::Language,
seed: String,
spend: String,
view: String,
}
let vectors = [
Vector {
language: classic::Language::Chinese,
seed: "摇 曲 艺 武 滴 然 效 似 赏 式 祥 歌 买 疑 小 碧 堆 博 键 房 鲜 悲 付 喷 武".into(),
spend: "a5e4fff1706ef9212993a69f246f5c95ad6d84371692d63e9bb0ea112a58340d".into(),
view: "1176c43ce541477ea2f3ef0b49b25112b084e26b8a843e1304ac4677b74cdf02".into(),
},
Vector {
language: classic::Language::English,
seed: "washing thirsty occur lectures tuesday fainted toxic adapt \
abnormal memoir nylon mostly building shrugged online ember northern \
ruby woes dauntless boil family illness inroads northern"
.into(),
spend: "c0af65c0dd837e666b9d0dfed62745f4df35aed7ea619b2798a709f0fe545403".into(),
view: "513ba91c538a5a9069e0094de90e927c0cd147fa10428ce3ac1afd49f63e3b01".into(),
},
Vector {
language: classic::Language::Dutch,
seed: "setwinst riphagen vimmetje extase blief tuitelig fuiven meifeest \
ponywagen zesmaal ripdeal matverf codetaal leut ivoor rotten \
wisgerhof winzucht typograaf atrium rein zilt traktaat verzaagd setwinst"
.into(),
spend: "e2d2873085c447c2bc7664222ac8f7d240df3aeac137f5ff2022eaa629e5b10a".into(),
view: "eac30b69477e3f68093d131c7fd961564458401b07f8c87ff8f6030c1a0c7301".into(),
},
Vector {
language: classic::Language::French,
seed: "poids vaseux tarte bazar poivre effet entier nuance \
sensuel ennui pacte osselet poudre battre alibi mouton \
stade paquet pliage gibier type question position projet pliage"
.into(),
spend: "2dd39ff1a4628a94b5c2ec3e42fb3dfe15c2b2f010154dc3b3de6791e805b904".into(),
view: "6725b32230400a1032f31d622b44c3a227f88258939b14a7c72e00939e7bdf0e".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "minero ocupar mirar evadir octubre cal logro miope \
opaco disco ancla litio clase cuello nasal clase \
fiar avance deseo mente grumo negro cordón croqueta clase"
.into(),
spend: "ae2c9bebdddac067d73ec0180147fc92bdf9ac7337f1bcafbbe57dd13558eb02".into(),
view: "18deafb34d55b7a43cae2c1c1c206a3c80c12cc9d1f84640b484b95b7fec3e05".into(),
},
Vector {
language: classic::Language::German,
seed: "Kaliber Gabelung Tapir Liveband Favorit Specht Enklave Nabel \
Jupiter Foliant Chronik nisten löten Vase Aussage Rekord \
Yeti Gesetz Eleganz Alraune Künstler Almweide Jahr Kastanie Almweide"
.into(),
spend: "79801b7a1b9796856e2397d862a113862e1fdc289a205e79d8d70995b276db06".into(),
view: "99f0ec556643bd9c038a4ed86edcb9c6c16032c4622ed2e000299d527a792701".into(),
},
Vector {
language: classic::Language::Italian,
seed: "cavo pancetta auto fulmine alleanza filmato diavolo prato \
forzare meritare litigare lezione segreto evasione votare buio \
licenza cliente dorso natale crescere vento tutelare vetta evasione"
.into(),
spend: "5e7fd774eb00fa5877e2a8b4dc9c7ffe111008a3891220b56a6e49ac816d650a".into(),
view: "698a1dce6018aef5516e82ca0cb3e3ec7778d17dfb41a137567bfa2e55e63a03".into(),
},
Vector {
language: classic::Language::Portuguese,
seed: "agito eventualidade onus itrio holograma sodomizar objetos dobro \
iugoslavo bcrepuscular odalisca abjeto iuane darwinista eczema acetona \
cibernetico hoquei gleba driver buffer azoto megera nogueira agito"
.into(),
spend: "13b3115f37e35c6aa1db97428b897e584698670c1b27854568d678e729200c0f".into(),
view: "ad1b4fd35270f5f36c4da7166672b347e75c3f4d41346ec2a06d1d0193632801".into(),
},
Vector {
language: classic::Language::Japanese,
seed: "ぜんぶ どうぐ おたがい せんきょ おうじ そんちょう じゅしん いろえんぴつ \
かほう つかれる えらぶ にちじょう くのう にちようび ぬまえび さんきゃく \
おおや ちぬき うすめる いがく せつでん さうな すいえい せつだん おおや"
.into(),
spend: "c56e895cdb13007eda8399222974cdbab493640663804b93cbef3d8c3df80b0b".into(),
view: "6c3634a313ec2ee979d565c33888fd7c3502d696ce0134a8bc1a2698c7f2c508".into(),
},
Vector {
language: classic::Language::Russian,
seed: "шатер икра нация ехать получать инерция доза реальный \
рыжий таможня лопата душа веселый клетка атлас лекция \
обгонять паек наивный лыжный дурак стать ежик задача паек"
.into(),
spend: "7cb5492df5eb2db4c84af20766391cd3e3662ab1a241c70fc881f3d02c381f05".into(),
view: "fcd53e41ec0df995ab43927f7c44bc3359c93523d5009fb3f5ba87431d545a03".into(),
},
Vector {
language: classic::Language::Esperanto,
seed: "ukazo klini peco etikedo fabriko imitado onklino urino \
pudro incidento kumuluso ikono smirgi hirundo uretro krii \
sparkado super speciala pupo alpinisto cvana vokegi zombio fabriko"
.into(),
spend: "82ebf0336d3b152701964ed41df6b6e9a035e57fc98b84039ed0bd4611c58904".into(),
view: "cd4d120e1ea34360af528f6a3e6156063312d9cefc9aa6b5218d366c0ed6a201".into(),
},
Vector {
language: classic::Language::Lojban,
seed: "jetnu vensa julne xrotu xamsi julne cutci dakli \
mlatu xedja muvgau palpi xindo sfubu ciste cinri \
blabi darno dembi janli blabi fenki bukpu burcu blabi"
.into(),
spend: "e4f8c6819ab6cf792cebb858caabac9307fd646901d72123e0367ebc0a79c200".into(),
view: "c806ce62bafaa7b2d597f1a1e2dbe4a2f96bfd804bf6f8420fc7f4a6bd700c00".into(),
},
Vector {
language: classic::Language::EnglishOld,
seed: "glorious especially puff son moment add youth nowhere \
throw glide grip wrong rhythm consume very swear \
bitter heavy eventually begin reason flirt type unable"
.into(),
spend: "647f4765b66b636ff07170ab6280a9a6804dfbaf19db2ad37d23be024a18730b".into(),
view: "045da65316a906a8c30046053119c18020b07a7a3a6ef5c01ab2a8755416bd02".into(),
},
// The following seeds require the language specification in order to calculate
// a single valid checksum
Vector {
language: classic::Language::Spanish,
seed: "pluma laico atraer pintor peor cerca balde buscar \
lancha batir nulo reloj resto gemelo nevera poder columna gol \
oveja latir amplio bolero feliz fuerza nevera"
.into(),
spend: "30303983fc8d215dd020cc6b8223793318d55c466a86e4390954f373fdc7200a".into(),
view: "97c649143f3c147ba59aa5506cc09c7992c5c219bb26964442142bf97980800e".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "pluma pluma pluma pluma pluma pluma pluma pluma \
pluma pluma pluma pluma pluma pluma pluma pluma \
pluma pluma pluma pluma pluma pluma pluma pluma pluma"
.into(),
spend: "b4050000b4050000b4050000b4050000b4050000b4050000b4050000b4050000".into(),
view: "d73534f7912b395eb70ef911791a2814eb6df7ce56528eaaa83ff2b72d9f5e0f".into(),
},
Vector {
language: classic::Language::English,
seed: "plus plus plus plus plus plus plus plus \
plus plus plus plus plus plus plus plus \
plus plus plus plus plus plus plus plus plus"
.into(),
spend: "3b0400003b0400003b0400003b0400003b0400003b0400003b0400003b040000".into(),
view: "43a8a7715eed11eff145a2024ddcc39740255156da7bbd736ee66a0838053a02".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio audio"
.into(),
spend: "ba000000ba000000ba000000ba000000ba000000ba000000ba000000ba000000".into(),
view: "1437256da2c85d029b293d8c6b1d625d9374969301869b12f37186e3f906c708".into(),
},
Vector {
language: classic::Language::English,
seed: "audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio audio"
.into(),
spend: "7900000079000000790000007900000079000000790000007900000079000000".into(),
view: "20bec797ab96780ae6a045dd816676ca7ed1d7c6773f7022d03ad234b581d600".into(),
},
];
for vector in vectors {
let trim_seed = |seed: &str| {
seed
.split_whitespace()
.map(|word| trim_by_lang(word, vector.language))
.collect::<Vec<_>>()
.join(" ")
};
// Test against Monero
{
println!("{}. language: {:?}, seed: {}", line!(), vector.language, vector.seed.clone());
let seed =
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(vector.seed.clone()))
.unwrap();
let trim = trim_seed(&vector.seed);
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(trim)).unwrap()
);
let spend: [u8; 32] = hex::decode(vector.spend).unwrap().try_into().unwrap();
// For classical seeds, Monero directly uses the entropy as a spend key
assert_eq!(
Option::<Scalar>::from(Scalar::from_canonical_bytes(*seed.entropy())),
Option::<Scalar>::from(Scalar::from_canonical_bytes(spend)),
);
let view: [u8; 32] = hex::decode(vector.view).unwrap().try_into().unwrap();
// Monero then derives the view key as H(spend)
assert_eq!(
Scalar::from_bytes_mod_order(hash(&spend)),
Scalar::from_canonical_bytes(view).unwrap()
);
assert_eq!(
Seed::from_entropy(SeedType::Classic(vector.language), Zeroizing::new(spend), None)
.unwrap(),
seed
);
}
// Test against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Classic(vector.language));
println!("{}. seed: {}", line!(), *seed.to_string());
let trim = trim_seed(&seed.to_string());
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(trim)).unwrap()
);
assert_eq!(
seed,
Seed::from_entropy(SeedType::Classic(vector.language), seed.entropy(), None).unwrap()
);
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), seed.to_string()).unwrap()
);
}
}
}
#[test]
fn test_polyseed() {
struct Vector {
language: polyseed::Language,
seed: String,
entropy: String,
birthday: u64,
has_prefix: bool,
has_accent: bool,
}
let vectors = [
Vector {
language: polyseed::Language::English,
seed: "raven tail swear infant grief assist regular lamp \
duck valid someone little harsh puppy airport language"
.into(),
entropy: "dd76e7359a0ded37cd0ff0f3c829a5ae01673300000000000000000000000000".into(),
birthday: 1638446400,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Spanish,
seed: "eje fin parte célebre tabú pestaña lienzo puma \
prisión hora regalo lengua existir lápiz lote sonoro"
.into(),
entropy: "5a2b02df7db21fcbe6ec6df137d54c7b20fd2b00000000000000000000000000".into(),
birthday: 3118651200,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::French,
seed: "valable arracher décaler jeudi amusant dresser mener épaissir risible \
prouesse réserve ampleur ajuster muter caméra enchère"
.into(),
entropy: "11cfd870324b26657342c37360c424a14a050b00000000000000000000000000".into(),
birthday: 1679314966,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::Italian,
seed: "caduco midollo copione meninge isotopo illogico riflesso tartaruga fermento \
olandese normale tristezza episodio voragine forbito achille"
.into(),
entropy: "7ecc57c9b4652d4e31428f62bec91cfd55500600000000000000000000000000".into(),
birthday: 1679316358,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Portuguese,
seed: "caverna custear azedo adeus senador apertada sedoso omitir \
sujeito aurora videira molho cartaz gesso dentista tapar"
.into(),
entropy: "45473063711376cae38f1b3eba18c874124e1d00000000000000000000000000".into(),
birthday: 1679316657,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Czech,
seed: "usmrtit nora dotaz komunita zavalit funkce mzda sotva akce \
vesta kabel herna stodola uvolnit ustrnout email"
.into(),
entropy: "7ac8a4efd62d9c3c4c02e350d32326df37821c00000000000000000000000000".into(),
birthday: 1679316898,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Korean,
seed: "전망 선풍기 국제 무궁화 설사 기름 이론적 해안 절망 예선 \
지우개 보관 절망 말기 시각 귀신"
.into(),
entropy: "684663fda420298f42ed94b2c512ed38ddf12b00000000000000000000000000".into(),
birthday: 1679317073,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::Japanese,
seed: "うちあわせ ちつじょ つごう しはい けんこう とおる てみやげ はんとし たんとう \
といれ おさない おさえる むかう ぬぐう なふだ せまる"
.into(),
entropy: "94e6665518a6286c6e3ba508a2279eb62b771f00000000000000000000000000".into(),
birthday: 1679318722,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseTraditional,
seed: "亂 挖 斤 柄 代 圈 枝 轄 魯 論 函 開 勘 番 榮 壁".into(),
entropy: "b1594f585987ab0fd5a31da1f0d377dae5283f00000000000000000000000000".into(),
birthday: 1679426433,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseSimplified,
seed: "啊 百 族 府 票 划 伪 仓 叶 虾 借 溜 晨 左 等 鬼".into(),
entropy: "21cdd366f337b89b8d1bc1df9fe73047c22b0300000000000000000000000000".into(),
birthday: 1679426817,
has_prefix: false,
has_accent: false,
},
// The following seed requires the language specification in order to calculate
// a single valid checksum
Vector {
language: polyseed::Language::Spanish,
seed: "impo sort usua cabi venu nobl oliv clim \
cont barr marc auto prod vaca torn fati"
.into(),
entropy: "dbfce25fe09b68a340e01c62417eeef43ad51800000000000000000000000000".into(),
birthday: 1701511650,
has_prefix: true,
has_accent: true,
},
];
for vector in vectors {
let add_whitespace = |mut seed: String| {
seed.push(' ');
seed
};
let seed_without_accents = |seed: &str| {
seed
.split_whitespace()
.map(|w| w.chars().filter(char::is_ascii).collect::<String>())
.collect::<Vec<_>>()
.join(" ")
};
let trim_seed = |seed: &str| {
let seed_to_trim =
if vector.has_accent { seed_without_accents(seed) } else { seed.to_string() };
seed_to_trim
.split_whitespace()
.map(|w| {
let mut ascii = 0;
let mut to_take = w.len();
for (i, char) in w.chars().enumerate() {
if char.is_ascii() {
ascii += 1;
}
if ascii == polyseed::PREFIX_LEN {
// +1 to include this character, which puts us at the prefix length
to_take = i + 1;
break;
}
}
w.chars().take(to_take).collect::<String>()
})
.collect::<Vec<_>>()
.join(" ")
};
// String -> Seed
println!("{}. language: {:?}, seed: {}", line!(), vector.language, vector.seed.clone());
let seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(vector.seed.clone()))
.unwrap();
let trim = trim_seed(&vector.seed);
let add_whitespace = add_whitespace(vector.seed.clone());
let seed_without_accents = seed_without_accents(&vector.seed);
// Make sure a version with added whitespace still works
let whitespaced_seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(add_whitespace))
.unwrap();
assert_eq!(seed, whitespaced_seed);
// Check trimmed versions works
if vector.has_prefix {
let trimmed_seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(trim)).unwrap();
assert_eq!(seed, trimmed_seed);
}
// Check versions without accents work
if vector.has_accent {
let seed_without_accents = Seed::from_string(
SeedType::Polyseed(vector.language),
Zeroizing::new(seed_without_accents),
)
.unwrap();
assert_eq!(seed, seed_without_accents);
}
let entropy = Zeroizing::new(hex::decode(vector.entropy).unwrap().try_into().unwrap());
assert_eq!(seed.entropy(), entropy);
assert!(seed.birthday().abs_diff(vector.birthday) < polyseed::TIME_STEP);
// Entropy -> Seed
let from_entropy =
Seed::from_entropy(SeedType::Polyseed(vector.language), entropy, Some(seed.birthday()))
.unwrap();
assert_eq!(seed.to_string(), from_entropy.to_string());
// Check against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Polyseed(vector.language));
println!("{}. seed: {}", line!(), *seed.to_string());
assert_eq!(
seed,
Seed::from_string(SeedType::Polyseed(vector.language), seed.to_string()).unwrap()
);
assert_eq!(
seed,
Seed::from_entropy(
SeedType::Polyseed(vector.language),
seed.entropy(),
Some(seed.birthday())
)
.unwrap()
);
}
}
}
#[test]
fn test_invalid_polyseed() {
// This seed includes unsupported features bits and should error on decode
let seed = "include domain claim resemble urban hire lunch bird \
crucial fire best wife ring warm ignore model"
.into();
let res =
Seed::from_string(SeedType::Polyseed(polyseed::Language::English), Zeroizing::new(seed));
assert_eq!(res, Err(SeedError::UnsupportedFeatures));
}

@@ -1,432 +0,0 @@
use core::cmp::Ordering;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use crate::{
Protocol, hash,
serialize::*,
ring_signatures::RingSignature,
ringct::{bulletproofs::Bulletproofs, RctType, RctBase, RctPrunable, RctSignatures},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Input {
Gen(u64),
ToKey { amount: Option<u64>, key_offsets: Vec<u64>, key_image: EdwardsPoint },
}
impl Input {
pub(crate) fn fee_weight(offsets_weight: usize) -> usize {
// Uses 1 byte for the input type
// Uses 1 byte for the VarInt amount due to amount being 0
1 + 1 + offsets_weight + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Input::Gen(height) => {
w.write_all(&[255])?;
write_varint(height, w)
}
Input::ToKey { amount, key_offsets, key_image } => {
w.write_all(&[2])?;
write_varint(&amount.unwrap_or(0), w)?;
write_vec(write_varint, key_offsets, w)?;
write_point(key_image, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Input> {
Ok(match read_byte(r)? {
255 => Input::Gen(read_varint(r)?),
2 => {
let amount = read_varint(r)?;
// https://github.com/monero-project/monero/
// blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L860-L863
// A non-RCT 0-amount input can't exist because only RCT TXs can have a 0-amount output.
// That's why collapsing a 0 amount to None is safe, even without knowing if this TX is RCT.
let amount = if amount == 0 { None } else { Some(amount) };
Input::ToKey {
amount,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
}
}
_ => Err(io::Error::other("Tried to deserialize unknown/unused input type"))?,
})
}
}
// Doesn't bother moving to an enum for the unused Script classes
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
pub amount: Option<u64>,
pub key: CompressedEdwardsY,
pub view_tag: Option<u8>,
}
impl Output {
pub(crate) fn fee_weight(view_tags: bool) -> usize {
// Uses 1 byte for the output type
// Uses 1 byte for the VarInt amount due to amount being 0
1 + 1 + 32 + if view_tags { 1 } else { 0 }
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.amount.unwrap_or(0), w)?;
w.write_all(&[2 + u8::from(self.view_tag.is_some())])?;
w.write_all(&self.key.to_bytes())?;
if let Some(view_tag) = self.view_tag {
w.write_all(&[view_tag])?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(8 + 1 + 32);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(rct: bool, r: &mut R) -> io::Result<Output> {
let amount = read_varint(r)?;
let amount = if rct {
if amount != 0 {
Err(io::Error::other("RCT TX output wasn't 0"))?;
}
None
} else {
Some(amount)
};
let view_tag = match read_byte(r)? {
2 => false,
3 => true,
_ => Err(io::Error::other("Tried to deserialize unknown/unused output type"))?,
};
Ok(Output {
amount,
key: CompressedEdwardsY(read_bytes(r)?),
view_tag: if view_tag { Some(read_byte(r)?) } else { None },
})
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum Timelock {
None,
Block(usize),
Time(u64),
}
impl Timelock {
fn from_raw(raw: u64) -> Timelock {
if raw == 0 {
Timelock::None
} else if raw < 500_000_000 {
Timelock::Block(usize::try_from(raw).unwrap())
} else {
Timelock::Time(raw)
}
}
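// e.g. from_raw(0) is Timelock::None, from_raw(1_000_000) is Timelock::Block(1_000_000), and
// from_raw(1_500_000_000) is Timelock::Time(1_500_000_000), a Unix timestamp, as raw values of
// at least 500 million are interpreted as timestamps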
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(
&match self {
Timelock::None => 0,
Timelock::Block(block) => (*block).try_into().unwrap(),
Timelock::Time(time) => *time,
},
w,
)
}
}
impl PartialOrd for Timelock {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self, other) {
(Timelock::None, Timelock::None) => Some(Ordering::Equal),
(Timelock::None, _) => Some(Ordering::Less),
(_, Timelock::None) => Some(Ordering::Greater),
(Timelock::Block(a), Timelock::Block(b)) => a.partial_cmp(b),
(Timelock::Time(a), Timelock::Time(b)) => a.partial_cmp(b),
_ => None,
}
}
}
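// Note the None arm above: block-based and time-based timelocks are incomparable, so e.g.
// Timelock::Block(10).partial_cmp(&Timelock::Time(100)) is None.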
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct TransactionPrefix {
pub version: u64,
pub timelock: Timelock,
pub inputs: Vec<Input>,
pub outputs: Vec<Output>,
pub extra: Vec<u8>,
}
impl TransactionPrefix {
pub(crate) fn fee_weight(
decoy_weights: &[usize],
outputs: usize,
view_tags: bool,
extra: usize,
) -> usize {
// Assumes Timelock::None since this library won't let you create a TX with a timelock
// 1 input for every decoy weight
1 + 1 +
varint_len(decoy_weights.len()) +
decoy_weights.iter().map(|&offsets_weight| Input::fee_weight(offsets_weight)).sum::<usize>() +
varint_len(outputs) +
(outputs * Output::fee_weight(view_tags)) +
varint_len(extra) +
extra
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.version, w)?;
self.timelock.write(w)?;
write_vec(Input::write, &self.inputs, w)?;
write_vec(Output::write, &self.outputs, w)?;
write_varint(&self.extra.len(), w)?;
w.write_all(&self.extra)
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<TransactionPrefix> {
let version = read_varint(r)?;
// TODO: Create an enum out of version
if (version == 0) || (version > 2) {
Err(io::Error::other("unrecognized transaction version"))?;
}
let timelock = Timelock::from_raw(read_varint(r)?);
let inputs = read_vec(|r| Input::read(r), r)?;
if inputs.is_empty() {
Err(io::Error::other("transaction had no inputs"))?;
}
let is_miner_tx = matches!(inputs[0], Input::Gen { .. });
let mut prefix = TransactionPrefix {
version,
timelock,
inputs,
outputs: read_vec(|r| Output::read((!is_miner_tx) && (version == 2), r), r)?,
extra: vec![],
};
prefix.extra = read_vec(read_byte, r)?;
Ok(prefix)
}
pub fn hash(&self) -> [u8; 32] {
hash(&self.serialize())
}
}
/// Monero transaction. For version 1, rct_signatures still contains an accurate fee value.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Transaction {
pub prefix: TransactionPrefix,
pub signatures: Vec<RingSignature>,
pub rct_signatures: RctSignatures,
}
impl Transaction {
pub(crate) fn fee_weight(
protocol: Protocol,
decoy_weights: &[usize],
outputs: usize,
extra: usize,
fee: u64,
) -> usize {
TransactionPrefix::fee_weight(decoy_weights, outputs, protocol.view_tags(), extra) +
RctSignatures::fee_weight(protocol, decoy_weights.len(), outputs, fee)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.prefix.write(w)?;
if self.prefix.version == 1 {
for ring_sig in &self.signatures {
ring_sig.write(w)?;
}
Ok(())
} else if self.prefix.version == 2 {
self.rct_signatures.write(w)
} else {
panic!("Serializing a transaction with an unknown version");
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(2048);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Transaction> {
let prefix = TransactionPrefix::read(r)?;
let mut signatures = vec![];
let mut rct_signatures = RctSignatures {
base: RctBase { fee: 0, encrypted_amounts: vec![], pseudo_outs: vec![], commitments: vec![] },
prunable: RctPrunable::Null,
};
if prefix.version == 1 {
signatures = prefix
.inputs
.iter()
.filter_map(|input| match input {
Input::ToKey { key_offsets, .. } => Some(RingSignature::read(key_offsets.len(), r)),
_ => None,
})
.collect::<Result<_, _>>()?;
if !matches!(prefix.inputs[0], Input::Gen(..)) {
let in_amount = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => Err(io::Error::other("Input::Gen present in non-coinbase v1 TX"))?,
// v1 TXs can burn v2 outputs
// dcff3fe4f914d6b6bd4a5b800cc4cca8f2fdd1bd73352f0700d463d36812f328 is one such TX
// It includes a pre-RCT signature for a RCT output, yet if you interpret the RCT
// output as being worth 0, it passes a sum check (guaranteed since no outputs are RCT)
Input::ToKey { amount, .. } => Ok(amount.unwrap_or(0)),
})
.collect::<io::Result<Vec<_>>>()?
.into_iter()
.sum::<u64>();
let mut out = 0;
for output in &prefix.outputs {
if output.amount.is_none() {
Err(io::Error::other("v1 transaction had a 0-amount output"))?;
}
out += output.amount.unwrap();
}
if in_amount < out {
Err(io::Error::other("transaction spent more than it had as inputs"))?;
}
rct_signatures.base.fee = in_amount - out;
}
} else if prefix.version == 2 {
rct_signatures = RctSignatures::read(
prefix.inputs.first().map_or(0, |input| match input {
Input::Gen(_) => 0,
Input::ToKey { key_offsets, .. } => key_offsets.len(),
}),
prefix.inputs.len(),
prefix.outputs.len(),
r,
)?;
} else {
Err(io::Error::other("Tried to deserialize unknown version"))?;
}
Ok(Transaction { prefix, signatures, rct_signatures })
}
pub fn hash(&self) -> [u8; 32] {
let mut buf = Vec::with_capacity(2048);
if self.prefix.version == 1 {
self.write(&mut buf).unwrap();
hash(&buf)
} else {
let mut hashes = Vec::with_capacity(96);
hashes.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
hashes.extend(&match self.rct_signatures.prunable {
RctPrunable::Null => [0; 32],
_ => {
self.rct_signatures.prunable.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hash(&buf)
}
});
hash(&hashes)
}
}
/// Calculate the hash of this transaction as needed for signing it.
pub fn signature_hash(&self) -> [u8; 32] {
if self.prefix.version == 1 {
return self.prefix.hash();
}
let mut buf = Vec::with_capacity(2048);
let mut sig_hash = Vec::with_capacity(96);
sig_hash.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
self.rct_signatures.prunable.signature_write(&mut buf).unwrap();
sig_hash.extend(hash(&buf));
hash(&sig_hash)
}
fn is_rct_bulletproof(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::Bulletproofs | RctType::BulletproofsCompactAmount | RctType::Clsag => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::BulletproofsPlus => false,
}
}
fn is_rct_bulletproof_plus(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::BulletproofsPlus => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag => false,
}
}
/// Calculate the transaction's weight.
pub fn weight(&self) -> usize {
let blob_size = self.serialize().len();
let bp = self.is_rct_bulletproof();
let bp_plus = self.is_rct_bulletproof_plus();
if !(bp || bp_plus) {
blob_size
} else {
blob_size + Bulletproofs::calculate_bp_clawback(bp_plus, self.prefix.outputs.len()).0
}
}
}

@@ -1,325 +0,0 @@
use core::{marker::PhantomData, fmt::Debug};
use std_shims::string::{String, ToString};
use zeroize::Zeroize;
use curve25519_dalek::edwards::EdwardsPoint;
use monero_generators::decompress_point;
use base58_monero::base58::{encode_check, decode_check};
/// The network this address is for.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum Network {
Mainnet,
Testnet,
Stagenet,
}
/// The address type, supporting the officially documented addresses, along with
/// [Featured Addresses](https://gist.github.com/kayabaNerve/01c50bbc35441e0bbdcee63a9d823789).
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum AddressType {
Standard,
Integrated([u8; 8]),
Subaddress,
Featured { subaddress: bool, payment_id: Option<[u8; 8]>, guaranteed: bool },
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub struct SubaddressIndex {
pub(crate) account: u32,
pub(crate) address: u32,
}
impl SubaddressIndex {
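// An index of (0, 0) refers to the primary address, which isn't a subaddress, hence the None
// below.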
pub const fn new(account: u32, address: u32) -> Option<SubaddressIndex> {
if (account == 0) && (address == 0) {
return None;
}
Some(SubaddressIndex { account, address })
}
pub fn account(&self) -> u32 {
self.account
}
pub fn address(&self) -> u32 {
self.address
}
}
/// Address specification. Used internally to create addresses.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum AddressSpec {
Standard,
Integrated([u8; 8]),
Subaddress(SubaddressIndex),
Featured { subaddress: Option<SubaddressIndex>, payment_id: Option<[u8; 8]>, guaranteed: bool },
}
impl AddressType {
pub fn is_subaddress(&self) -> bool {
matches!(self, AddressType::Subaddress) ||
matches!(self, AddressType::Featured { subaddress: true, .. })
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
if let AddressType::Integrated(id) = self {
Some(*id)
} else if let AddressType::Featured { payment_id, .. } = self {
*payment_id
} else {
None
}
}
pub fn is_guaranteed(&self) -> bool {
matches!(self, AddressType::Featured { guaranteed: true, .. })
}
}
/// A type which returns the byte for a given address.
pub trait AddressBytes: Clone + Copy + PartialEq + Eq + Debug {
fn network_bytes(network: Network) -> (u8, u8, u8, u8);
}
/// Address bytes for Monero.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct MoneroAddressBytes;
impl AddressBytes for MoneroAddressBytes {
fn network_bytes(network: Network) -> (u8, u8, u8, u8) {
match network {
Network::Mainnet => (18, 19, 42, 70),
Network::Testnet => (53, 54, 63, 111),
Network::Stagenet => (24, 25, 36, 86),
}
}
}
/// Address metadata.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct AddressMeta<B: AddressBytes> {
_bytes: PhantomData<B>,
pub network: Network,
pub kind: AddressType,
}
impl<B: AddressBytes> Zeroize for AddressMeta<B> {
fn zeroize(&mut self) {
self.network.zeroize();
self.kind.zeroize();
}
}
/// Error when decoding an address.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum AddressError {
#[cfg_attr(feature = "std", error("invalid address byte"))]
InvalidByte,
#[cfg_attr(feature = "std", error("invalid address encoding"))]
InvalidEncoding,
#[cfg_attr(feature = "std", error("invalid length"))]
InvalidLength,
#[cfg_attr(feature = "std", error("invalid key"))]
InvalidKey,
#[cfg_attr(feature = "std", error("unknown features"))]
UnknownFeatures,
#[cfg_attr(feature = "std", error("different network than expected"))]
DifferentNetwork,
}
impl<B: AddressBytes> AddressMeta<B> {
#[allow(clippy::wrong_self_convention)]
fn to_byte(&self) -> u8 {
let bytes = B::network_bytes(self.network);
match self.kind {
AddressType::Standard => bytes.0,
AddressType::Integrated(_) => bytes.1,
AddressType::Subaddress => bytes.2,
AddressType::Featured { .. } => bytes.3,
}
}
/// Create an address's metadata.
pub fn new(network: Network, kind: AddressType) -> Self {
AddressMeta { _bytes: PhantomData, network, kind }
}
// Returns an incomplete instantiation in the case of Integrated/Featured addresses
fn from_byte(byte: u8) -> Result<Self, AddressError> {
let mut meta = None;
for network in [Network::Mainnet, Network::Testnet, Network::Stagenet] {
let (standard, integrated, subaddress, featured) = B::network_bytes(network);
if let Some(kind) = match byte {
_ if byte == standard => Some(AddressType::Standard),
_ if byte == integrated => Some(AddressType::Integrated([0; 8])),
_ if byte == subaddress => Some(AddressType::Subaddress),
_ if byte == featured => {
Some(AddressType::Featured { subaddress: false, payment_id: None, guaranteed: false })
}
_ => None,
} {
meta = Some(AddressMeta::new(network, kind));
break;
}
}
meta.ok_or(AddressError::InvalidByte)
}
pub fn is_subaddress(&self) -> bool {
self.kind.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
self.kind.payment_id()
}
pub fn is_guaranteed(&self) -> bool {
self.kind.is_guaranteed()
}
}
/// A Monero address, composed of metadata and a spend/view key.
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Address<B: AddressBytes> {
pub meta: AddressMeta<B>,
pub spend: EdwardsPoint,
pub view: EdwardsPoint,
}
impl<B: AddressBytes> core::fmt::Debug for Address<B> {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt
.debug_struct("Address")
.field("meta", &self.meta)
.field("spend", &hex::encode(self.spend.compress().0))
.field("view", &hex::encode(self.view.compress().0))
// This is not a real field, yet it's the most valuable thing to know when debugging
.field("(address)", &self.to_string())
.finish()
}
}
impl<B: AddressBytes> Zeroize for Address<B> {
fn zeroize(&mut self) {
self.meta.zeroize();
self.spend.zeroize();
self.view.zeroize();
}
}
impl<B: AddressBytes> ToString for Address<B> {
fn to_string(&self) -> String {
let mut data = vec![self.meta.to_byte()];
data.extend(self.spend.compress().to_bytes());
data.extend(self.view.compress().to_bytes());
if let AddressType::Featured { subaddress, payment_id, guaranteed } = self.meta.kind {
// Technically, this should be a VarInt, yet we don't have enough features for that to be needed
data.push(
u8::from(subaddress) + (u8::from(payment_id.is_some()) << 1) + (u8::from(guaranteed) << 2),
);
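// e.g. a guaranteed subaddress without a payment ID encodes this features byte as 0b101 = 5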
}
if let Some(id) = self.meta.kind.payment_id() {
data.extend(id);
}
encode_check(&data).unwrap()
}
}
impl<B: AddressBytes> Address<B> {
pub fn new(meta: AddressMeta<B>, spend: EdwardsPoint, view: EdwardsPoint) -> Self {
Address { meta, spend, view }
}
pub fn from_str_raw(s: &str) -> Result<Self, AddressError> {
let raw = decode_check(s).map_err(|_| AddressError::InvalidEncoding)?;
if raw.len() < (1 + 32 + 32) {
Err(AddressError::InvalidLength)?;
}
let mut meta = AddressMeta::from_byte(raw[0])?;
let spend =
decompress_point(raw[1 .. 33].try_into().unwrap()).ok_or(AddressError::InvalidKey)?;
let view =
decompress_point(raw[33 .. 65].try_into().unwrap()).ok_or(AddressError::InvalidKey)?;
let mut read = 65;
if matches!(meta.kind, AddressType::Featured { .. }) {
if raw[read] >= (2 << 3) {
Err(AddressError::UnknownFeatures)?;
}
let subaddress = (raw[read] & 1) == 1;
let integrated = ((raw[read] >> 1) & 1) == 1;
let guaranteed = ((raw[read] >> 2) & 1) == 1;
meta.kind = AddressType::Featured {
subaddress,
payment_id: Some([0; 8]).filter(|_| integrated),
guaranteed,
};
read += 1;
}
// Update read early so we can verify the length
if meta.kind.payment_id().is_some() {
read += 8;
}
if raw.len() != read {
Err(AddressError::InvalidLength)?;
}
if let AddressType::Integrated(ref mut id) = meta.kind {
id.copy_from_slice(&raw[(read - 8) .. read]);
}
if let AddressType::Featured { payment_id: Some(ref mut id), .. } = meta.kind {
id.copy_from_slice(&raw[(read - 8) .. read]);
}
Ok(Address { meta, spend, view })
}
pub fn from_str(network: Network, s: &str) -> Result<Self, AddressError> {
Self::from_str_raw(s).and_then(|addr| {
if addr.meta.network == network {
Ok(addr)
} else {
Err(AddressError::DifferentNetwork)?
}
})
}
pub fn network(&self) -> Network {
self.meta.network
}
pub fn is_subaddress(&self) -> bool {
self.meta.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
self.meta.payment_id()
}
pub fn is_guaranteed(&self) -> bool {
self.meta.is_guaranteed()
}
}
/// Instantiation of the Address type with Monero's network bytes.
pub type MoneroAddress = Address<MoneroAddressBytes>;
// Allow re-interpreting an arbitrary address as a Monero address so it can be used with the
// rest of this library. Doesn't use From, as it conflicted with From<T> for T.
impl MoneroAddress {
pub fn from<B: AddressBytes>(address: Address<B>) -> MoneroAddress {
MoneroAddress::new(
AddressMeta::new(address.meta.network, address.meta.kind),
address.spend,
address.view,
)
}
}
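// Illustrative usage sketch (not part of the original file; the function name is
// hypothetical): parse an address string for a given network and round-trip it through
// encoding.
fn parse_address_example(s: &str) -> Result<MoneroAddress, AddressError> {
let addr = MoneroAddress::from_str(Network::Mainnet, s)?;
// The encoding is deterministic, so re-encoding yields the original string
debug_assert_eq!(addr.to_string(), s);
Ok(addr)
}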


@@ -1,356 +0,0 @@
use std_shims::{vec::Vec, collections::HashSet};
#[cfg(feature = "cache-distribution")]
use std_shims::sync::OnceLock;
#[cfg(all(feature = "cache-distribution", not(feature = "std")))]
use std_shims::sync::Mutex;
#[cfg(all(feature = "cache-distribution", feature = "std"))]
use async_lock::Mutex;
use zeroize::{Zeroize, ZeroizeOnDrop};
use rand_core::{RngCore, CryptoRng};
use rand_distr::{Distribution, Gamma};
#[cfg(not(feature = "std"))]
use rand_distr::num_traits::Float;
use curve25519_dalek::edwards::EdwardsPoint;
use crate::{
serialize::varint_len,
wallet::SpendableOutput,
rpc::{RpcError, RpcConnection, Rpc},
DEFAULT_LOCK_WINDOW, COINBASE_LOCK_WINDOW, BLOCK_TIME,
};
const RECENT_WINDOW: usize = 15;
const BLOCKS_PER_YEAR: usize = 365 * 24 * 60 * 60 / BLOCK_TIME;
#[allow(clippy::cast_precision_loss)]
const TIP_APPLICATION: f64 = (DEFAULT_LOCK_WINDOW * BLOCK_TIME) as f64;
// TODO: Resolve safety of this in case a reorg occurs/the network changes
// TODO: Update this when scanning a block, as possible
#[cfg(feature = "cache-distribution")]
static DISTRIBUTION_CELL: OnceLock<Mutex<Vec<u64>>> = OnceLock::new();
#[cfg(feature = "cache-distribution")]
#[allow(non_snake_case)]
fn DISTRIBUTION() -> &'static Mutex<Vec<u64>> {
DISTRIBUTION_CELL.get_or_init(|| Mutex::new(Vec::with_capacity(3000000)))
}
#[allow(clippy::too_many_arguments)]
async fn select_n<'a, R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
distribution: &[u64],
height: usize,
high: u64,
per_second: f64,
real: &[u64],
used: &mut HashSet<u64>,
count: usize,
fingerprintable_canonical: bool,
) -> Result<Vec<(u64, [EdwardsPoint; 2])>, RpcError> {
// TODO: consider removing this extra RPC and expect the caller to handle it
if fingerprintable_canonical && height > rpc.get_height().await? {
// TODO: Don't use InternalError for the caller's failure
Err(RpcError::InternalError("decoys being requested from too young blocks"))?;
}
#[cfg(test)]
let mut iters = 0;
let mut confirmed = Vec::with_capacity(count);
// Retries on failure. Retried outputs are obviously decoys to the daemon, so retries should be minimal
while confirmed.len() != count {
let remaining = count - confirmed.len();
// TODO: over-request candidates in case some are locked to avoid needing
// round trips to the daemon (and revealing obvious decoys to the daemon)
let mut candidates = Vec::with_capacity(remaining);
while candidates.len() != remaining {
#[cfg(test)]
{
iters += 1;
// This check is cheap, and on fresh chains, a lot of rounds may be needed
if iters == 100 {
Err(RpcError::InternalError("hit decoy selection round limit"))?;
}
}
// Use a gamma distribution
let mut age = Gamma::<f64>::new(19.28, 1.0 / 1.61).unwrap().sample(rng).exp();
#[allow(clippy::cast_precision_loss)]
if age > TIP_APPLICATION {
age -= TIP_APPLICATION;
} else {
// f64 does not have try_from available, which is why these are written with `as`
age = (rng.next_u64() % u64::try_from(RECENT_WINDOW * BLOCK_TIME).unwrap()) as f64;
}
#[allow(clippy::cast_sign_loss, clippy::cast_possible_truncation)]
let o = (age * per_second) as u64;
if o < high {
let i = distribution.partition_point(|s| *s < (high - 1 - o));
let prev = i.saturating_sub(1);
let n = distribution[i] - distribution[prev];
if n != 0 {
let o = distribution[prev] + (rng.next_u64() % n);
if !used.contains(&o) {
// It will either actually be used, or is unusable and this prevents trying it again
used.insert(o);
candidates.push(o);
}
}
}
}
// If this is the first time we're requesting these outputs, include the real one as well
// Prevents the node we're connected to from having a list of known decoys and then seeing a
// TX which uses all of them, with one additional output (the true spend)
let mut real_indexes = HashSet::with_capacity(real.len());
if confirmed.is_empty() {
for real in real {
candidates.push(*real);
}
// Sort candidates so the real spends aren't the ones at the end
candidates.sort();
for real in real {
real_indexes.insert(candidates.binary_search(real).unwrap());
}
}
// TODO: make sure that the real output is included in the response, and
// that mask and key are equal to expected
for (i, output) in rpc
.get_unlocked_outputs(&candidates, height, fingerprintable_canonical)
.await?
.iter_mut()
.enumerate()
{
// Don't include the real spend as a decoy, despite requesting it
if real_indexes.contains(&i) {
continue;
}
if let Some(output) = output.take() {
confirmed.push((candidates[i], output));
}
}
}
Ok(confirmed)
}
fn offset(ring: &[u64]) -> Vec<u64> {
let mut res = vec![ring[0]];
res.resize(ring.len(), 0);
for m in (1 .. ring.len()).rev() {
res[m] = ring[m] - ring[m - 1];
}
res
}
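#[cfg(test)]
mod offset_example {
// Worked example (illustrative, not from the original file): `offset` converts sorted
// absolute ring indices into the relative offsets Monero serializes, where only the first
// entry is absolute. `Decoys::indexes` below performs the inverse.
#[test]
fn offsets() {
assert_eq!(super::offset(&[10, 25, 40]), vec![10, 15, 15]);
}
}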
async fn select_decoys<R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
fingerprintable_canonical: bool,
) -> Result<Vec<Decoys>, RpcError> {
#[cfg(feature = "cache-distribution")]
#[cfg(not(feature = "std"))]
let mut distribution = DISTRIBUTION().lock();
#[cfg(feature = "cache-distribution")]
#[cfg(feature = "std")]
let mut distribution = DISTRIBUTION().lock().await;
#[cfg(not(feature = "cache-distribution"))]
let mut distribution = vec![];
let decoy_count = ring_len - 1;
// Convert the inputs in question to the raw output data
let mut real = Vec::with_capacity(inputs.len());
let mut outputs = Vec::with_capacity(inputs.len());
for input in inputs {
real.push(input.global_index);
outputs.push((real[real.len() - 1], [input.key(), input.commitment().calculate()]));
}
if distribution.len() < height {
// TODO: verify distribution elems are strictly increasing
let extension =
rpc.get_output_distribution(distribution.len(), height.saturating_sub(1)).await?;
distribution.extend(extension);
}
// If asked to use an older height than previously asked, truncate to ensure accuracy
// This should never happen, yet it'd risk a desync if it did
distribution.truncate(height);
if distribution.len() < DEFAULT_LOCK_WINDOW {
Err(RpcError::InternalError("not enough decoy candidates"))?;
}
#[allow(clippy::cast_precision_loss)]
let per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let initial = distribution[distribution.len().saturating_sub(blocks + 1)];
let outputs = distribution[distribution.len() - 1].saturating_sub(initial);
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let mut used = HashSet::<u64>::new();
for o in &outputs {
used.insert(o.0);
}
// TODO: Create a TX with less than the target amount, as allowed by the protocol
let high = distribution[distribution.len() - DEFAULT_LOCK_WINDOW];
if high.saturating_sub(COINBASE_LOCK_WINDOW as u64) <
u64::try_from(inputs.len() * ring_len).unwrap()
{
Err(RpcError::InternalError("not enough coinbase candidates"))?;
}
// Select all decoys for this transaction, assuming we generate a sane transaction
// We should almost never naturally generate an insane transaction, hence why this doesn't
// bother with an overage
let mut decoys = select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&real,
&mut used,
inputs.len() * decoy_count,
fingerprintable_canonical,
)
.await?;
real.zeroize();
let mut res = Vec::with_capacity(inputs.len());
for o in outputs {
// Grab the decoys for this specific output
let mut ring = decoys.drain((decoys.len() - decoy_count) ..).collect::<Vec<_>>();
ring.push(o);
ring.sort_by(|a, b| a.0.cmp(&b.0));
// Monero only runs its sanity checks when 1000 outputs are available
// We run this check whenever the highest output index we acknowledge is > 500
// This means we assume any chain which hasn't had 500 outputs by the height in use
// (presumably a test blockchain) isn't sufficiently mature for the check to apply
// Considering Monero's p2p layer doesn't actually check transaction sanity, it should be
// fine for us to not have perfectly matching rules, especially since this code would
// infinitely loop if it couldn't determine sanity, which is possible with sufficient
// inputs on sufficiently small chains
if high > 500 {
// Make sure the TX passes the sanity check that the median output is within the last 40%
let target_median = high * 3 / 5;
while ring[ring_len / 2].0 < target_median {
// If it's not, update the bottom half with new values to ensure the median only moves up
for removed in ring.drain(0 .. (ring_len / 2)).collect::<Vec<_>>() {
// If we removed the real spend, add it back
if removed.0 == o.0 {
ring.push(o);
} else {
// We could choose not to remove this from used, saving CPU time and keeping low
// values out of consideration, yet that'd increase the amount of decoys required
// to create this transaction, and some removed outputs may be the best option (as
// we drop the first half, not just the bottom n)
used.remove(&removed.0);
}
}
// Select new outputs until we have a full sized ring again
ring.extend(
select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&[],
&mut used,
ring_len - ring.len(),
fingerprintable_canonical,
)
.await?,
);
ring.sort_by(|a, b| a.0.cmp(&b.0));
}
// The other sanity check rule is about duplicates, yet we already enforce unique ring
// members
}
res.push(Decoys {
// Binary searches for the real spend since we don't know where it sorted to
i: u8::try_from(ring.partition_point(|x| x.0 < o.0)).unwrap(),
offsets: offset(&ring.iter().map(|output| output.0).collect::<Vec<_>>()),
ring: ring.iter().map(|output| output.1).collect(),
});
}
Ok(res)
}
/// Decoy data, containing the actual member as well (at index `i`).
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Decoys {
pub(crate) i: u8,
pub(crate) offsets: Vec<u64>,
pub(crate) ring: Vec<[EdwardsPoint; 2]>,
}
#[allow(clippy::len_without_is_empty)]
impl Decoys {
pub fn fee_weight(offsets: &[u64]) -> usize {
varint_len(offsets.len()) + offsets.iter().map(|offset| varint_len(*offset)).sum::<usize>()
}
pub fn len(&self) -> usize {
self.offsets.len()
}
pub fn indexes(&self) -> Vec<u64> {
let mut res = vec![self.offsets[0]; self.len()];
for m in 1 .. res.len() {
res[m] = res[m - 1] + self.offsets[m];
}
res
}
/// Select decoys using the same distribution as Monero. Relies on the monerod RPC
/// response for an output's unlocked status, minimizing trips to the daemon.
pub async fn select<R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
) -> Result<Vec<Decoys>, RpcError> {
select_decoys(rng, rpc, ring_len, height, inputs, false).await
}
/// If no reorg has occurred and the RPC is honest, any caller who passes the same height to
/// this function will use the same distribution to select decoys. It is fingerprintable
/// because a caller using this will not be able to select decoys that are timelocked
/// with a timestamp. Any transaction which includes timestamp-timelocked decoys in its
/// rings could not have been constructed using this function.
///
/// TODO: upstream change to monerod get_outs RPC to accept a height param for checking
/// output's unlocked status and remove all usage of fingerprintable_canonical
pub async fn fingerprintable_canonical_select<R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
) -> Result<Vec<Decoys>, RpcError> {
select_decoys(rng, rpc, ring_len, height, inputs, true).await
}
}
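// Illustrative usage sketch (hypothetical function, assuming rand_core's OsRng is available):
// select decoys for a set of inputs at the chain tip. The ring size of 16 is purely an
// example value.
async fn select_example<RPC: RpcConnection>(
rpc: &Rpc<RPC>,
inputs: &[SpendableOutput],
) -> Result<Vec<Decoys>, RpcError> {
let height = rpc.get_height().await?;
Decoys::select(&mut rand_core::OsRng, rpc, 16, height, inputs).await
}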


@@ -1,268 +0,0 @@
use core::ops::Deref;
use std_shims::collections::{HashSet, HashMap};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
edwards::{EdwardsPoint, CompressedEdwardsY},
};
use crate::{
hash, hash_to_scalar, serialize::write_varint, ringct::EncryptedAmount, transaction::Input,
};
pub mod extra;
pub(crate) use extra::{PaymentId, ExtraField, Extra};
/// Seed creation and parsing functionality.
pub mod seed;
/// Address encoding and decoding functionality.
pub mod address;
use address::{Network, AddressType, SubaddressIndex, AddressSpec, AddressMeta, MoneroAddress};
mod scan;
pub use scan::{ReceivedOutput, SpendableOutput, Timelocked};
pub mod decoys;
pub use decoys::Decoys;
mod send;
pub use send::{FeePriority, Fee, TransactionError, Change, SignableTransaction, Eventuality};
#[cfg(feature = "std")]
pub use send::SignableTransactionBuilder;
#[cfg(feature = "multisig")]
pub(crate) use send::InternalPayment;
#[cfg(feature = "multisig")]
pub use send::TransactionMachine;
fn key_image_sort(x: &EdwardsPoint, y: &EdwardsPoint) -> core::cmp::Ordering {
x.compress().to_bytes().cmp(&y.compress().to_bytes()).reverse()
}
// https://gist.github.com/kayabaNerve/8066c13f1fe1573286ba7a2fd79f6100
pub(crate) fn uniqueness(inputs: &[Input]) -> [u8; 32] {
let mut u = b"uniqueness".to_vec();
for input in inputs {
match input {
// If Gen, this should be the only input, making this loop somewhat pointless
// This works regardless, and even if there were somehow multiple inputs, it'd merely be a
// false negative
Input::Gen(height) => {
write_varint(height, &mut u).unwrap();
}
Input::ToKey { key_image, .. } => u.extend(key_image.compress().to_bytes()),
}
}
hash(&u)
}
// Hs("view_tag" || 8Ra || o), Hs(8Ra || o), and H(8Ra || 0x8d) with uniqueness inclusion in the
// Scalar as an option
#[allow(non_snake_case)]
pub(crate) fn shared_key(
uniqueness: Option<[u8; 32]>,
ecdh: EdwardsPoint,
o: usize,
) -> (u8, Scalar, [u8; 8]) {
// 8Ra
let mut output_derivation = ecdh.mul_by_cofactor().compress().to_bytes().to_vec();
let mut payment_id_xor = [0; 8];
payment_id_xor
.copy_from_slice(&hash(&[output_derivation.as_ref(), [0x8d].as_ref()].concat())[.. 8]);
// || o
write_varint(&o, &mut output_derivation).unwrap();
let view_tag = hash(&[b"view_tag".as_ref(), &output_derivation].concat())[0];
// uniqueness ||
let shared_key = if let Some(uniqueness) = uniqueness {
[uniqueness.as_ref(), &output_derivation].concat()
} else {
output_derivation
};
(view_tag, hash_to_scalar(&shared_key), payment_id_xor)
}
pub(crate) fn commitment_mask(shared_key: Scalar) -> Scalar {
let mut mask = b"commitment_mask".to_vec();
mask.extend(shared_key.to_bytes());
hash_to_scalar(&mask)
}
pub(crate) fn amount_encryption(amount: u64, key: Scalar) -> [u8; 8] {
let mut amount_mask = b"amount".to_vec();
amount_mask.extend(key.to_bytes());
(amount ^ u64::from_le_bytes(hash(&amount_mask)[.. 8].try_into().unwrap())).to_le_bytes()
}
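#[cfg(test)]
#[test]
fn amount_encryption_involution() {
// Illustrative sketch (not from the original file): the encryption above is a XOR against
// a key-derived keystream, so applying it twice with the same key recovers the amount.
// This is exactly how amount_decryption handles EncryptedAmount::Compact below.
let key = hash_to_scalar(b"illustrative key");
let amount = 1_000_000_000u64;
let once = u64::from_le_bytes(amount_encryption(amount, key));
assert_eq!(u64::from_le_bytes(amount_encryption(once, key)), amount);
}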
// TODO: Move this under EncryptedAmount?
fn amount_decryption(amount: &EncryptedAmount, key: Scalar) -> (Scalar, u64) {
match amount {
EncryptedAmount::Original { mask, amount } => {
#[cfg(feature = "experimental")]
{
let mask_shared_sec = hash(key.as_bytes());
let mask =
Scalar::from_bytes_mod_order(*mask) - Scalar::from_bytes_mod_order(mask_shared_sec);
let amount_shared_sec = hash(&mask_shared_sec);
let amount_scalar =
Scalar::from_bytes_mod_order(*amount) - Scalar::from_bytes_mod_order(amount_shared_sec);
// d2b from rctTypes.cpp
let amount = u64::from_le_bytes(amount_scalar.to_bytes()[0 .. 8].try_into().unwrap());
(mask, amount)
}
#[cfg(not(feature = "experimental"))]
{
let _ = mask;
let _ = amount;
todo!("decrypting a legacy monero transaction's amount")
}
}
EncryptedAmount::Compact { amount } => (
commitment_mask(key),
u64::from_le_bytes(amount_encryption(u64::from_le_bytes(*amount), key)),
),
}
}
/// The private view key and public spend key, enabling scanning transactions.
#[derive(Clone, Zeroize, ZeroizeOnDrop)]
pub struct ViewPair {
spend: EdwardsPoint,
view: Zeroizing<Scalar>,
}
impl ViewPair {
pub fn new(spend: EdwardsPoint, view: Zeroizing<Scalar>) -> ViewPair {
ViewPair { spend, view }
}
pub fn spend(&self) -> EdwardsPoint {
self.spend
}
pub fn view(&self) -> EdwardsPoint {
self.view.deref() * ED25519_BASEPOINT_TABLE
}
fn subaddress_derivation(&self, index: SubaddressIndex) -> Scalar {
hash_to_scalar(&Zeroizing::new(
[
b"SubAddr\0".as_ref(),
Zeroizing::new(self.view.to_bytes()).as_ref(),
&index.account().to_le_bytes(),
&index.address().to_le_bytes(),
]
.concat(),
))
}
fn subaddress_keys(&self, index: SubaddressIndex) -> (EdwardsPoint, EdwardsPoint) {
let scalar = self.subaddress_derivation(index);
let spend = self.spend + (&scalar * ED25519_BASEPOINT_TABLE);
let view = self.view.deref() * spend;
(spend, view)
}
/// Returns an address with the provided specification.
pub fn address(&self, network: Network, spec: AddressSpec) -> MoneroAddress {
let mut spend = self.spend;
let mut view: EdwardsPoint = self.view.deref() * ED25519_BASEPOINT_TABLE;
// construct the address meta
let meta = match spec {
AddressSpec::Standard => AddressMeta::new(network, AddressType::Standard),
AddressSpec::Integrated(payment_id) => {
AddressMeta::new(network, AddressType::Integrated(payment_id))
}
AddressSpec::Subaddress(index) => {
(spend, view) = self.subaddress_keys(index);
AddressMeta::new(network, AddressType::Subaddress)
}
AddressSpec::Featured { subaddress, payment_id, guaranteed } => {
if let Some(index) = subaddress {
(spend, view) = self.subaddress_keys(index);
}
AddressMeta::new(
network,
AddressType::Featured { subaddress: subaddress.is_some(), payment_id, guaranteed },
)
}
};
MoneroAddress::new(meta, spend, view)
}
}
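// Illustrative usage sketch (hypothetical function): derive a standard address and a
// subaddress from a ViewPair. The (0, 1) index is an arbitrary example.
fn address_example(pair: &ViewPair) -> Option<(MoneroAddress, MoneroAddress)> {
let standard = pair.address(Network::Mainnet, AddressSpec::Standard);
let index = SubaddressIndex::new(0, 1)?;
let subaddress = pair.address(Network::Mainnet, AddressSpec::Subaddress(index));
Some((standard, subaddress))
}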
/// Transaction scanner.
/// This scanner is capable of generating subaddresses, and will additionally scan for them
/// once they've been explicitly registered. If the burning bug is attempted, any secondary
/// outputs will be ignored.
#[derive(Clone)]
pub struct Scanner {
pair: ViewPair,
// Also contains the spend key as None
pub(crate) subaddresses: HashMap<CompressedEdwardsY, Option<SubaddressIndex>>,
pub(crate) burning_bug: Option<HashSet<CompressedEdwardsY>>,
}
impl Zeroize for Scanner {
fn zeroize(&mut self) {
self.pair.zeroize();
// These may not be effective, unfortunately
for (mut key, mut value) in self.subaddresses.drain() {
key.zeroize();
value.zeroize();
}
if let Some(ref mut burning_bug) = self.burning_bug.take() {
for mut output in burning_bug.drain() {
output.zeroize();
}
}
}
}
impl Drop for Scanner {
fn drop(&mut self) {
self.zeroize();
}
}
impl ZeroizeOnDrop for Scanner {}
impl Scanner {
/// Create a Scanner from a ViewPair.
///
/// burning_bug is a HashSet of used keys, intended to prevent key reuse which would burn funds.
///
/// When an output is successfully scanned, the output key MUST be saved to disk.
///
/// When a new scanner is created, ALL saved output keys must be passed in to be secure.
///
/// If None is passed, a modified shared key derivation is used which is immune to the burning
/// bug (specifically the Guaranteed feature from Featured Addresses).
pub fn from_view(pair: ViewPair, burning_bug: Option<HashSet<CompressedEdwardsY>>) -> Scanner {
let mut subaddresses = HashMap::new();
subaddresses.insert(pair.spend.compress(), None);
Scanner { pair, subaddresses, burning_bug }
}
/// Register a subaddress.
// There used to be an address function here, yet it wasn't safe. It could generate addresses
// incompatible with the Scanner. While we could return None for that, then we have the issue
// of runtime failures to generate an address.
// Removing that API was the simplest option.
pub fn register_subaddress(&mut self, subaddress: SubaddressIndex) {
let (spend, _) = self.pair.subaddress_keys(subaddress);
self.subaddresses.insert(spend.compress(), Some(subaddress));
}
}
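// Illustrative setup sketch (hypothetical function): create a Scanner with the burning bug
// mitigation enabled, where `saved_keys` stands in for output keys loaded from the caller's
// storage, and register a subaddress before scanning.
fn scanner_example(pair: ViewPair, saved_keys: HashSet<CompressedEdwardsY>) -> Scanner {
let mut scanner = Scanner::from_view(pair, Some(saved_keys));
if let Some(index) = SubaddressIndex::new(0, 1) {
scanner.register_subaddress(index);
}
scanner
}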


@@ -1,521 +0,0 @@
use core::ops::Deref;
use std_shims::{
vec::Vec,
string::ToString,
io::{self, Read, Write},
};
use zeroize::{Zeroize, ZeroizeOnDrop};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
use monero_generators::decompress_point;
use crate::{
Commitment,
serialize::{read_byte, read_u32, read_u64, read_bytes, read_scalar, read_point, read_raw_vec},
transaction::{Input, Timelock, Transaction},
block::Block,
rpc::{RpcError, RpcConnection, Rpc},
wallet::{
PaymentId, Extra, address::SubaddressIndex, Scanner, uniqueness, shared_key, amount_decryption,
},
};
/// An absolute output ID, defined as its transaction hash and output index.
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct AbsoluteId {
pub tx: [u8; 32],
pub o: u8,
}
impl core::fmt::Debug for AbsoluteId {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt.debug_struct("AbsoluteId").field("tx", &hex::encode(self.tx)).field("o", &self.o).finish()
}
}
impl AbsoluteId {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.tx)?;
w.write_all(&[self.o])
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(32 + 1);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<AbsoluteId> {
Ok(AbsoluteId { tx: read_bytes(r)?, o: read_byte(r)? })
}
}
/// The data contained within an output.
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct OutputData {
pub key: EdwardsPoint,
/// Absolute difference between the spend key and the key in this output
pub key_offset: Scalar,
pub commitment: Commitment,
}
impl core::fmt::Debug for OutputData {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt
.debug_struct("OutputData")
.field("key", &hex::encode(self.key.compress().0))
.field("key_offset", &hex::encode(self.key_offset.to_bytes()))
.field("commitment", &self.commitment)
.finish()
}
}
impl OutputData {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.key.compress().to_bytes())?;
w.write_all(&self.key_offset.to_bytes())?;
w.write_all(&self.commitment.mask.to_bytes())?;
w.write_all(&self.commitment.amount.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(32 + 32 + 32 + 8);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<OutputData> {
Ok(OutputData {
key: read_point(r)?,
key_offset: read_scalar(r)?,
commitment: Commitment::new(read_scalar(r)?, read_u64(r)?),
})
}
}
/// The metadata for an output.
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct Metadata {
/// The subaddress this output was sent to.
pub subaddress: Option<SubaddressIndex>,
/// The payment ID included with this output.
/// There are 2 circumstances in which the reference wallet2 ignores the payment ID
/// but the payment ID will be returned here anyway:
///
/// 1) If the payment ID is tied to an output received by a subaddress account
/// that spent Monero in the transaction (the received output is considered
/// "change" and is not considered a "payment" in this case). If there are multiple
/// spending subaddress accounts in a transaction, the highest index spent key image
/// is used to determine the spending subaddress account.
///
/// 2) If the payment ID is the unencrypted variant and the block's hf version is
/// v12 or higher (https://github.com/serai-dex/serai/issues/512)
pub payment_id: Option<PaymentId>,
/// Arbitrary data encoded in TX extra.
pub arbitrary_data: Vec<Vec<u8>>,
}
impl core::fmt::Debug for Metadata {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt
.debug_struct("Metadata")
.field("subaddress", &self.subaddress)
.field("payment_id", &self.payment_id)
.field("arbitrary_data", &self.arbitrary_data.iter().map(hex::encode).collect::<Vec<_>>())
.finish()
}
}
impl Metadata {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
if let Some(subaddress) = self.subaddress {
w.write_all(&[1])?;
w.write_all(&subaddress.account().to_le_bytes())?;
w.write_all(&subaddress.address().to_le_bytes())?;
} else {
w.write_all(&[0])?;
}
if let Some(payment_id) = self.payment_id {
w.write_all(&[1])?;
payment_id.write(w)?;
} else {
w.write_all(&[0])?;
}
w.write_all(&u32::try_from(self.arbitrary_data.len()).unwrap().to_le_bytes())?;
for part in &self.arbitrary_data {
w.write_all(&[u8::try_from(part.len()).unwrap()])?;
w.write_all(part)?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = Vec::with_capacity(1 + 8 + 1);
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Metadata> {
let subaddress = if read_byte(r)? == 1 {
Some(
SubaddressIndex::new(read_u32(r)?, read_u32(r)?)
.ok_or_else(|| io::Error::other("invalid subaddress in metadata"))?,
)
} else {
None
};
Ok(Metadata {
subaddress,
payment_id: if read_byte(r)? == 1 { PaymentId::read(r).ok() } else { None },
arbitrary_data: {
let mut data = vec![];
for _ in 0 .. read_u32(r)? {
let len = read_byte(r)?;
data.push(read_raw_vec(read_byte, usize::from(len), r)?);
}
data
},
})
}
}
/// A received output, defined as its absolute ID, data, and metadata.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ReceivedOutput {
pub absolute: AbsoluteId,
pub data: OutputData,
pub metadata: Metadata,
}
impl ReceivedOutput {
pub fn key(&self) -> EdwardsPoint {
self.data.key
}
pub fn key_offset(&self) -> Scalar {
self.data.key_offset
}
pub fn commitment(&self) -> Commitment {
self.data.commitment.clone()
}
pub fn arbitrary_data(&self) -> &[Vec<u8>] {
&self.metadata.arbitrary_data
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.absolute.write(w)?;
self.data.write(w)?;
self.metadata.write(w)
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
absolute: AbsoluteId::read(r)?,
data: OutputData::read(r)?,
metadata: Metadata::read(r)?,
})
}
}
/// A spendable output, defined as a received output and its index on the Monero blockchain.
/// This index is dependent on the Monero blockchain and will only be known once the output is
/// included within a block. This may change if there's a reorganization.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct SpendableOutput {
pub output: ReceivedOutput,
pub global_index: u64,
}
impl SpendableOutput {
/// Update the spendable output's global index. This is intended to be called if a
/// re-organization occurred.
pub async fn refresh_global_index<RPC: RpcConnection>(
&mut self,
rpc: &Rpc<RPC>,
) -> Result<(), RpcError> {
self.global_index = *rpc
.get_o_indexes(self.output.absolute.tx)
.await?
.get(usize::from(self.output.absolute.o))
.ok_or(RpcError::InvalidNode(
"node returned output indexes didn't include an index for this output".to_string(),
))?;
Ok(())
}
pub async fn from<RPC: RpcConnection>(
rpc: &Rpc<RPC>,
output: ReceivedOutput,
) -> Result<SpendableOutput, RpcError> {
let mut output = SpendableOutput { output, global_index: 0 };
output.refresh_global_index(rpc).await?;
Ok(output)
}
pub fn key(&self) -> EdwardsPoint {
self.output.key()
}
pub fn key_offset(&self) -> Scalar {
self.output.key_offset()
}
pub fn commitment(&self) -> Commitment {
self.output.commitment()
}
pub fn arbitrary_data(&self) -> &[Vec<u8>] {
self.output.arbitrary_data()
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.output.write(w)?;
w.write_all(&self.global_index.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<SpendableOutput> {
Ok(SpendableOutput { output: ReceivedOutput::read(r)?, global_index: read_u64(r)? })
}
}
/// A collection of timelocked outputs, either received or spendable.
#[derive(Zeroize)]
pub struct Timelocked<O: Clone + Zeroize>(Timelock, Vec<O>);
impl<O: Clone + Zeroize> Drop for Timelocked<O> {
fn drop(&mut self) {
self.zeroize();
}
}
impl<O: Clone + Zeroize> ZeroizeOnDrop for Timelocked<O> {}
impl<O: Clone + Zeroize> Timelocked<O> {
pub fn timelock(&self) -> Timelock {
self.0
}
/// Return the outputs if they're not timelocked, or an empty vector if they are.
#[must_use]
pub fn not_locked(&self) -> Vec<O> {
if self.0 == Timelock::None {
return self.1.clone();
}
vec![]
}
/// Returns the outputs if the contained Timelock has been satisfied by the provided one, or
/// None if it hasn't been (including when the two Timelocks aren't comparable).
#[must_use]
pub fn unlocked(&self, timelock: Timelock) -> Option<Vec<O>> {
// If the Timelocks are comparable, return the outputs if they're now unlocked
if self.0 <= timelock {
Some(self.1.clone())
} else {
None
}
}
#[must_use]
pub fn ignore_timelock(&self) -> Vec<O> {
self.1.clone()
}
}
impl Scanner {
/// Scan a transaction to discover the received outputs.
pub fn scan_transaction(&mut self, tx: &Transaction) -> Timelocked<ReceivedOutput> {
// Only scan RCT TXs since we can only spend RCT outputs
if tx.prefix.version != 2 {
return Timelocked(tx.prefix.timelock, vec![]);
}
let Ok(extra) = Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref()) else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let Some((tx_keys, additional)) = extra.keys() else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let payment_id = extra.payment_id();
let mut res = vec![];
for (o, output) in tx.prefix.outputs.iter().enumerate() {
// https://github.com/serai-dex/serai/issues/106
if let Some(burning_bug) = self.burning_bug.as_ref() {
if burning_bug.contains(&output.key) {
continue;
}
}
let output_key = decompress_point(output.key.to_bytes());
if output_key.is_none() {
continue;
}
let output_key = output_key.unwrap();
let additional = additional.as_ref().map(|additional| additional.get(o));
for key in tx_keys.iter().map(|key| Some(Some(key))).chain(core::iter::once(additional)) {
let key = match key {
Some(Some(key)) => key,
Some(None) => {
// This is non-standard. There were additional keys, yet not one for this output
// https://github.com/monero-project/monero/
// blob/04a1e2875d6e35e27bb21497988a6c822d319c28/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L1062
continue;
}
None => {
break;
}
};
let (view_tag, shared_key, payment_id_xor) = shared_key(
if self.burning_bug.is_none() { Some(uniqueness(&tx.prefix.inputs)) } else { None },
self.pair.view.deref() * key,
o,
);
let payment_id = payment_id.map(|id| id ^ payment_id_xor);
if let Some(actual_view_tag) = output.view_tag {
if actual_view_tag != view_tag {
continue;
}
}
// P - shared == spend
let subaddress =
self.subaddresses.get(&(output_key - (&shared_key * ED25519_BASEPOINT_TABLE)).compress());
if subaddress.is_none() {
continue;
}
let subaddress = *subaddress.unwrap();
// If the output key has torsion, subtracting the (torsion-free) shared key will yield a
// torsioned point. We will not have a torsioned key in our HashMap of keys, so we wouldn't
// identify it as ours
// If we did though, it'd enable bypassing the included burning bug protection
assert!(output_key.is_torsion_free());
let mut key_offset = shared_key;
if let Some(subaddress) = subaddress {
key_offset += self.pair.subaddress_derivation(subaddress);
}
// Since we've found an output to us, get its amount
let mut commitment = Commitment::zero();
// Miner transaction
if let Some(amount) = output.amount {
commitment.amount = amount;
// Regular transaction
} else {
let (mask, amount) = match tx.rct_signatures.base.encrypted_amounts.get(o) {
Some(amount) => amount_decryption(amount, shared_key),
// This should never happen, yet it may be possible with miner transactions?
// Using get just decreases the possibility of a panic and lets us move on in that case
None => break,
};
// Rebuild the commitment to verify it
commitment = Commitment::new(mask, amount);
// If this is a malicious commitment, move to the next output
// Any other R value will calculate to a different spend key and are therefore ignorable
if Some(&commitment.calculate()) != tx.rct_signatures.base.commitments.get(o) {
break;
}
}
if commitment.amount != 0 {
res.push(ReceivedOutput {
absolute: AbsoluteId { tx: tx.hash(), o: o.try_into().unwrap() },
data: OutputData { key: output_key, key_offset, commitment },
metadata: Metadata { subaddress, payment_id, arbitrary_data: extra.data() },
});
if let Some(burning_bug) = self.burning_bug.as_mut() {
burning_bug.insert(output.key);
}
}
// Break to prevent public keys from being included multiple times, which would trigger
// multiple inclusions of the same output
break;
}
}
Timelocked(tx.prefix.timelock, res)
}
/// Scan a block to obtain its spendable outputs. It's the presence in a block that gives
/// these transactions their global index, and this must be batched, as asking for the index
/// of specific transactions is a dead giveaway for which transactions you successfully
/// scanned. This function accordingly obtains the output indexes for the miner transaction,
/// incrementing from there instead.
pub async fn scan<RPC: RpcConnection>(
&mut self,
rpc: &Rpc<RPC>,
block: &Block,
) -> Result<Vec<Timelocked<SpendableOutput>>, RpcError> {
let mut index = rpc.get_o_indexes(block.miner_tx.hash()).await?[0];
let mut txs = vec![block.miner_tx.clone()];
txs.extend(rpc.get_transactions(&block.txs).await?);
let map = |mut timelock: Timelocked<ReceivedOutput>, index| {
if timelock.1.is_empty() {
None
} else {
Some(Timelocked(
timelock.0,
timelock
.1
.drain(..)
.map(|output| SpendableOutput {
global_index: index + u64::from(output.absolute.o),
output,
})
.collect(),
))
}
};
let mut res = vec![];
for tx in txs {
if let Some(timelock) = map(self.scan_transaction(&tx), index) {
res.push(timelock);
}
index += u64::try_from(
tx.prefix
.outputs
.iter()
// Filter to v2 miner TX outputs/RCT outputs since we're tracking the RCT output index
.filter(|output| {
let is_v2_miner_tx =
(tx.prefix.version == 2) && matches!(tx.prefix.inputs.first(), Some(Input::Gen(..)));
is_v2_miner_tx || output.amount.is_none()
})
.count(),
)
.unwrap()
}
Ok(res)
}
}
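// Illustrative usage sketch (hypothetical function): scan a block and keep only the outputs
// which are already spendable, discarding any still timelocked.
async fn scan_example<RPC: RpcConnection>(
scanner: &mut Scanner,
rpc: &Rpc<RPC>,
block: &Block,
) -> Result<Vec<SpendableOutput>, RpcError> {
let mut spendable = vec![];
for timelocked in scanner.scan(rpc, block).await? {
spendable.extend(timelocked.not_locked());
}
Ok(spendable)
}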


@@ -1,136 +0,0 @@
use core::fmt;
use std_shims::string::String;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use rand_core::{RngCore, CryptoRng};
pub(crate) mod classic;
pub(crate) mod polyseed;
use classic::{CLASSIC_SEED_LENGTH, CLASSIC_SEED_LENGTH_WITH_CHECKSUM, ClassicSeed};
use polyseed::{POLYSEED_LENGTH, Polyseed};
/// Error when decoding a seed.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum SeedError {
#[cfg_attr(feature = "std", error("invalid number of words in seed"))]
InvalidSeedLength,
#[cfg_attr(feature = "std", error("unknown language"))]
UnknownLanguage,
#[cfg_attr(feature = "std", error("invalid checksum"))]
InvalidChecksum,
#[cfg_attr(feature = "std", error("english old seeds don't support checksums"))]
EnglishOldWithChecksum,
#[cfg_attr(feature = "std", error("provided entropy is not valid"))]
InvalidEntropy,
#[cfg_attr(feature = "std", error("invalid seed"))]
InvalidSeed,
#[cfg_attr(feature = "std", error("provided features are not supported"))]
UnsupportedFeatures,
}
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SeedType {
Classic(classic::Language),
Polyseed(polyseed::Language),
}
/// A Monero seed.
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub enum Seed {
Classic(ClassicSeed),
Polyseed(Polyseed),
}
impl fmt::Debug for Seed {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Seed::Classic(_) => f.debug_struct("Seed::Classic").finish_non_exhaustive(),
Seed::Polyseed(_) => f.debug_struct("Seed::Polyseed").finish_non_exhaustive(),
}
}
}
impl Seed {
/// Creates a new `Seed`.
pub fn new<R: RngCore + CryptoRng>(rng: &mut R, seed_type: SeedType) -> Seed {
match seed_type {
SeedType::Classic(lang) => Seed::Classic(ClassicSeed::new(rng, lang)),
SeedType::Polyseed(lang) => Seed::Polyseed(Polyseed::new(rng, lang)),
}
}
/// Parse a seed from a `String`.
pub fn from_string(seed_type: SeedType, words: Zeroizing<String>) -> Result<Seed, SeedError> {
let word_count = words.split_whitespace().count();
match seed_type {
SeedType::Classic(lang) => {
if word_count != CLASSIC_SEED_LENGTH && word_count != CLASSIC_SEED_LENGTH_WITH_CHECKSUM {
Err(SeedError::InvalidSeedLength)?
} else {
ClassicSeed::from_string(lang, words).map(Seed::Classic)
}
}
SeedType::Polyseed(lang) => {
if word_count != POLYSEED_LENGTH {
Err(SeedError::InvalidSeedLength)?
} else {
Polyseed::from_string(lang, words).map(Seed::Polyseed)
}
}
}
}
/// Creates a `Seed` from an entropy and an optional birthday (denoted in seconds since the
/// epoch).
///
/// For `SeedType::Classic`, the birthday is ignored.
///
/// For `SeedType::Polyseed`, the last 13 bytes of `entropy` must be `0`.
// TODO: Return Result, not Option
pub fn from_entropy(
seed_type: SeedType,
entropy: Zeroizing<[u8; 32]>,
birthday: Option<u64>,
) -> Option<Seed> {
match seed_type {
SeedType::Classic(lang) => ClassicSeed::from_entropy(lang, entropy).map(Seed::Classic),
SeedType::Polyseed(lang) => {
Polyseed::from(lang, 0, birthday.unwrap_or(0), entropy).map(Seed::Polyseed).ok()
}
}
}
/// Returns seed as `String`.
pub fn to_string(&self) -> Zeroizing<String> {
match self {
Seed::Classic(seed) => seed.to_string(),
Seed::Polyseed(seed) => seed.to_string(),
}
}
/// Returns the entropy for this seed.
pub fn entropy(&self) -> Zeroizing<[u8; 32]> {
match self {
Seed::Classic(seed) => seed.entropy(),
Seed::Polyseed(seed) => seed.entropy().clone(),
}
}
/// Returns the key derived from this seed.
pub fn key(&self) -> Zeroizing<[u8; 32]> {
match self {
// Classic does not differentiate between its entropy and its key
Seed::Classic(seed) => seed.entropy(),
Seed::Polyseed(seed) => seed.key(),
}
}
/// Returns the birthday of this seed.
pub fn birthday(&self) -> u64 {
match self {
Seed::Classic(_) => 0,
Seed::Polyseed(seed) => seed.birthday(),
}
}
}
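// Illustrative usage sketch (hypothetical function, assuming rand_core's OsRng and an
// English Polyseed wordlist are available): generate a Polyseed and round-trip it through
// its mnemonic representation.
fn seed_example() -> Result<(), SeedError> {
use rand_core::OsRng;
let lang = polyseed::Language::English;
let seed = Seed::new(&mut OsRng, SeedType::Polyseed(lang));
let recovered = Seed::from_string(SeedType::Polyseed(lang), seed.to_string())?;
assert_eq!(seed, recovered);
Ok(())
}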


@@ -1,144 +0,0 @@
use std::sync::{Arc, RwLock};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use crate::{
Protocol,
wallet::{
address::MoneroAddress, Fee, SpendableOutput, Change, Decoys, SignableTransaction,
TransactionError, extra::MAX_ARBITRARY_DATA_SIZE,
},
};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
struct SignableTransactionBuilderInternal {
protocol: Protocol,
fee_rate: Fee,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<(SpendableOutput, Decoys)>,
payments: Vec<(MoneroAddress, u64)>,
change_address: Change,
data: Vec<Vec<u8>>,
}
impl SignableTransactionBuilderInternal {
// Takes in the change address so users don't miss that they have to manually set one
// If they don't, all leftover funds will become part of the fee
fn new(protocol: Protocol, fee_rate: Fee, change_address: Change) -> Self {
Self {
protocol,
fee_rate,
r_seed: None,
inputs: vec![],
payments: vec![],
change_address,
data: vec![],
}
}
fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) {
self.r_seed = Some(r_seed);
}
fn add_input(&mut self, input: (SpendableOutput, Decoys)) {
self.inputs.push(input);
}
fn add_inputs(&mut self, inputs: &[(SpendableOutput, Decoys)]) {
self.inputs.extend(inputs.iter().cloned());
}
fn add_payment(&mut self, dest: MoneroAddress, amount: u64) {
self.payments.push((dest, amount));
}
fn add_payments(&mut self, payments: &[(MoneroAddress, u64)]) {
self.payments.extend(payments);
}
fn add_data(&mut self, data: Vec<u8>) {
self.data.push(data);
}
}
/// A Transaction Builder for Monero transactions.
/// All methods provided will modify self while also returning a shallow copy, enabling
/// efficient chaining with a clean API.
/// In order to fork the builder at some point, clone it, as clone returns a deep copy.
#[derive(Debug)]
pub struct SignableTransactionBuilder(Arc<RwLock<SignableTransactionBuilderInternal>>);
impl Clone for SignableTransactionBuilder {
fn clone(&self) -> Self {
Self(Arc::new(RwLock::new((*self.0.read().unwrap()).clone())))
}
}
impl PartialEq for SignableTransactionBuilder {
fn eq(&self, other: &Self) -> bool {
*self.0.read().unwrap() == *other.0.read().unwrap()
}
}
impl Eq for SignableTransactionBuilder {}
impl Zeroize for SignableTransactionBuilder {
fn zeroize(&mut self) {
self.0.write().unwrap().zeroize()
}
}
impl SignableTransactionBuilder {
fn shallow_copy(&self) -> Self {
Self(self.0.clone())
}
pub fn new(protocol: Protocol, fee_rate: Fee, change_address: Change) -> Self {
Self(Arc::new(RwLock::new(SignableTransactionBuilderInternal::new(
protocol,
fee_rate,
change_address,
))))
}
pub fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) -> Self {
self.0.write().unwrap().set_r_seed(r_seed);
self.shallow_copy()
}
pub fn add_input(&mut self, input: (SpendableOutput, Decoys)) -> Self {
self.0.write().unwrap().add_input(input);
self.shallow_copy()
}
pub fn add_inputs(&mut self, inputs: &[(SpendableOutput, Decoys)]) -> Self {
self.0.write().unwrap().add_inputs(inputs);
self.shallow_copy()
}
pub fn add_payment(&mut self, dest: MoneroAddress, amount: u64) -> Self {
self.0.write().unwrap().add_payment(dest, amount);
self.shallow_copy()
}
pub fn add_payments(&mut self, payments: &[(MoneroAddress, u64)]) -> Self {
self.0.write().unwrap().add_payments(payments);
self.shallow_copy()
}
pub fn add_data(&mut self, data: Vec<u8>) -> Result<Self, TransactionError> {
if data.len() > MAX_ARBITRARY_DATA_SIZE {
Err(TransactionError::TooMuchData)?;
}
self.0.write().unwrap().add_data(data);
Ok(self.shallow_copy())
}
pub fn build(self) -> Result<SignableTransaction, TransactionError> {
let read = self.0.read().unwrap();
SignableTransaction::new(
read.protocol,
read.r_seed.clone(),
read.inputs.clone(),
read.payments.clone(),
&read.change_address,
read.data.clone(),
read.fee_rate,
)
}
}
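// Illustrative usage sketch (hypothetical function): each method mutates the shared state
// and returns a shallow copy, so calls chain cleanly. The protocol, fee rate, change
// address, input, and destination are assumed to be prepared by the caller; the amount is
// arbitrary.
fn build_example(
protocol: Protocol,
fee_rate: Fee,
change: Change,
input: (SpendableOutput, Decoys),
dest: MoneroAddress,
) -> Result<SignableTransaction, TransactionError> {
SignableTransactionBuilder::new(protocol, fee_rate, change)
.add_input(input)
.add_payment(dest, 1_000_000_000_000)
.build()
}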

File diff suppressed because it is too large


@@ -1,425 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read},
collections::HashMap,
};
use std::sync::{Arc, RwLock};
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use group::ff::Field;
use curve25519_dalek::{traits::Identity, scalar::Scalar, edwards::EdwardsPoint};
use dalek_ff_group as dfg;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Ed25519,
Participant, FrostError, ThresholdKeys,
sign::{
Writable, Preprocess, CachedPreprocess, SignatureShare, PreprocessMachine, SignMachine,
SignatureMachine, AlgorithmMachine, AlgorithmSignMachine, AlgorithmSignatureMachine,
},
};
use crate::{
random_scalar,
ringct::{
clsag::{ClsagInput, ClsagDetails, ClsagAddendum, ClsagMultisig, add_key_image_share},
RctPrunable,
},
transaction::{Input, Transaction},
wallet::{TransactionError, InternalPayment, SignableTransaction, key_image_sort, uniqueness},
};
/// FROST signing machine to produce a signed transaction.
pub struct TransactionMachine {
signable: SignableTransaction,
i: Participant,
transcript: RecommendedTranscript,
// Hashed key and scalar offset
key_images: Vec<(EdwardsPoint, Scalar)>,
inputs: Vec<Arc<RwLock<Option<ClsagDetails>>>>,
clsags: Vec<AlgorithmMachine<Ed25519, ClsagMultisig>>,
}
pub struct TransactionSignMachine {
signable: SignableTransaction,
i: Participant,
transcript: RecommendedTranscript,
key_images: Vec<(EdwardsPoint, Scalar)>,
inputs: Vec<Arc<RwLock<Option<ClsagDetails>>>>,
clsags: Vec<AlgorithmSignMachine<Ed25519, ClsagMultisig>>,
our_preprocess: Vec<Preprocess<Ed25519, ClsagAddendum>>,
}
pub struct TransactionSignatureMachine {
tx: Transaction,
clsags: Vec<AlgorithmSignatureMachine<Ed25519, ClsagMultisig>>,
}
impl SignableTransaction {
/// Create a FROST signing machine out of this signable transaction.
/// The height is the Monero blockchain height to synchronize around.
pub fn multisig(
self,
keys: &ThresholdKeys<Ed25519>,
mut transcript: RecommendedTranscript,
) -> Result<TransactionMachine, TransactionError> {
let mut inputs = vec![];
for _ in 0 .. self.inputs.len() {
// Doesn't use resize, as that would clone a single Arc across the entire Vec
inputs.push(Arc::new(RwLock::new(None)));
}
let mut clsags = vec![];
// Create a RNG out of the input shared keys, which either requires the view key or being every
// sender, and the payments (address and amount), which a passive adversary may be able to know
// depending on how these transactions are coordinated
// Being every sender would already let you note rings which happen to use your transactions
// multiple times, already breaking privacy there
transcript.domain_separate(b"monero_transaction");
// Also include the spend_key as below only the key offset is included, so this transcripts the
// sum product
// Useful as transcripting the sum product effectively transcripts the key image, further
// guaranteeing the one time properties noted below
transcript.append_message(b"spend_key", keys.group_key().0.compress().to_bytes());
if let Some(r_seed) = &self.r_seed {
transcript.append_message(b"r_seed", r_seed);
}
for (input, decoys) in &self.inputs {
// These outputs can only be spent once. Therefore, it forces all RNGs derived from this
// transcript (such as the one used to create one time keys) to be unique
transcript.append_message(b"input_hash", input.output.absolute.tx);
transcript.append_message(b"input_output_index", [input.output.absolute.o]);
// Not including this, with a doxxed list of payments, would allow brute forcing the inputs
// to determine RNG seeds and therefore the true spends
transcript.append_message(b"input_shared_key", input.key_offset().to_bytes());
// Ensure all signers are signing the same rings
transcript.append_message(b"real_spend", [decoys.i]);
for (i, ring_member) in decoys.ring.iter().enumerate() {
transcript
.append_message(b"ring_member", [u8::try_from(i).expect("ring size exceeded 255")]);
transcript.append_message(b"ring_member_offset", decoys.offsets[i].to_le_bytes());
transcript.append_message(b"ring_member_key", ring_member[0].compress().to_bytes());
transcript.append_message(b"ring_member_commitment", ring_member[1].compress().to_bytes());
}
}
for payment in &self.payments {
match payment {
InternalPayment::Payment(payment, need_dummy_payment_id) => {
transcript.append_message(b"payment_address", payment.0.to_string().as_bytes());
transcript.append_message(b"payment_amount", payment.1.to_le_bytes());
transcript.append_message(
b"need_dummy_payment_id",
[if *need_dummy_payment_id { 1u8 } else { 0u8 }],
);
}
InternalPayment::Change(change, change_view) => {
transcript.append_message(b"change_address", change.0.to_string().as_bytes());
transcript.append_message(b"change_amount", change.1.to_le_bytes());
if let Some(view) = change_view.as_ref() {
transcript.append_message(b"change_view_key", Zeroizing::new(view.to_bytes()));
}
}
}
}
let mut key_images = vec![];
for (i, (input, _)) in self.inputs.iter().enumerate() {
// Check this is the right set of keys
let offset = keys.offset(dfg::Scalar(input.key_offset()));
if offset.group_key().0 != input.key() {
Err(TransactionError::WrongPrivateKey)?;
}
let clsag = ClsagMultisig::new(transcript.clone(), input.key(), inputs[i].clone());
key_images.push((
clsag.H,
keys.current_offset().unwrap_or(dfg::Scalar::ZERO).0 + self.inputs[i].0.key_offset(),
));
clsags.push(AlgorithmMachine::new(clsag, offset));
}
Ok(TransactionMachine {
signable: self,
i: keys.params().i(),
transcript,
key_images,
inputs,
clsags,
})
}
}
impl PreprocessMachine for TransactionMachine {
type Preprocess = Vec<Preprocess<Ed25519, ClsagAddendum>>;
type Signature = Transaction;
type SignMachine = TransactionSignMachine;
fn preprocess<R: RngCore + CryptoRng>(
mut self,
rng: &mut R,
) -> (TransactionSignMachine, Self::Preprocess) {
// Iterate over each CLSAG calling preprocess
let mut preprocesses = Vec::with_capacity(self.clsags.len());
let clsags = self
.clsags
.drain(..)
.map(|clsag| {
let (clsag, preprocess) = clsag.preprocess(rng);
preprocesses.push(preprocess);
clsag
})
.collect();
let our_preprocess = preprocesses.clone();
// We could add further entropy here, and previous versions of this library did so
// As of right now, the multisig's key, the inputs being spent, and the FROST data itself
// will be used for RNG seeds. In order to recreate these RNG seeds, breaking privacy,
// counterparties must have knowledge of the multisig, either the view key or access to the
// coordination layer, and then access to the actual FROST signing process
// If the commitments are sent in plain text, then entropy here also would be, making it not
// increase privacy. If they're not sent in plain text, or are otherwise inaccessible, they
// already offer sufficient entropy. That's why further entropy is not included
(
TransactionSignMachine {
signable: self.signable,
i: self.i,
transcript: self.transcript,
key_images: self.key_images,
inputs: self.inputs,
clsags,
our_preprocess,
},
preprocesses,
)
}
}
impl SignMachine<Transaction> for TransactionSignMachine {
type Params = ();
type Keys = ThresholdKeys<Ed25519>;
type Preprocess = Vec<Preprocess<Ed25519, ClsagAddendum>>;
type SignatureShare = Vec<SignatureShare<Ed25519>>;
type SignatureMachine = TransactionSignatureMachine;
fn cache(self) -> CachedPreprocess {
unimplemented!(
"Monero transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn from_cache(
(): (),
_: ThresholdKeys<Ed25519>,
_: CachedPreprocess,
) -> (Self, Self::Preprocess) {
unimplemented!(
"Monero transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
self.clsags.iter().map(|clsag| clsag.read_preprocess(reader)).collect()
}
fn sign(
mut self,
mut commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
if !msg.is_empty() {
panic!("message was passed to the TransactionMachine when it generates its own");
}
// Find out who's included
// This may not be a valid set of signers yet the algorithm machine will error if it's not
commitments.remove(&self.i); // Remove, if it was included for some reason
let mut included = commitments.keys().copied().collect::<Vec<_>>();
included.push(self.i);
included.sort_unstable();
// Convert the unified commitments to a Vec of the individual commitments
let mut images = vec![EdwardsPoint::identity(); self.clsags.len()];
let mut commitments = (0 .. self.clsags.len())
.map(|c| {
included
.iter()
.map(|l| {
// Add all commitments to the transcript for their entropy
// While each CLSAG will do this as they need to for security, they have their own
// transcripts cloned from this TX's initial premise's transcript. For our TX
// transcript to have the CLSAG data for entropy, we have to add it ourselves here
self.transcript.append_message(b"participant", (*l).to_bytes());
let preprocess = if *l == self.i {
self.our_preprocess[c].clone()
} else {
commitments.get_mut(l).ok_or(FrostError::MissingParticipant(*l))?[c].clone()
};
{
let mut buf = vec![];
preprocess.write(&mut buf).unwrap();
self.transcript.append_message(b"preprocess", buf);
}
// While here, calculate the key image
// Clsag will parse/calculate/validate this as needed, yet doing so here as well
// provides the easiest API overall, as this is where the TX is (which needs the key
// images in its message), along with where the outputs are determined (where our
// outputs may need these in order to guarantee uniqueness)
add_key_image_share(
&mut images[c],
self.key_images[c].0,
self.key_images[c].1,
&included,
*l,
preprocess.addendum.key_image.0,
);
Ok((*l, preprocess))
})
.collect::<Result<HashMap<_, _>, _>>()
})
.collect::<Result<Vec<_>, _>>()?;
// Remove our preprocess which shouldn't be here. It was just the easiest way to implement the
// above
for map in &mut commitments {
map.remove(&self.i);
}
// Create the actual transaction
let (mut tx, output_masks) = {
let mut sorted_images = images.clone();
sorted_images.sort_by(key_image_sort);
self.signable.prepare_transaction(
// Technically, r_seed is used for the transaction keys if it's provided
&mut ChaCha20Rng::from_seed(self.transcript.rng_seed(b"transaction_keys_bulletproofs")),
uniqueness(
&sorted_images
.iter()
.map(|image| Input::ToKey { amount: None, key_offsets: vec![], key_image: *image })
.collect::<Vec<_>>(),
),
)
};
// Sort the inputs, as expected
let mut sorted = Vec::with_capacity(self.clsags.len());
while !self.clsags.is_empty() {
let (inputs, decoys) = self.signable.inputs.swap_remove(0);
sorted.push((
images.swap_remove(0),
inputs,
decoys,
self.inputs.swap_remove(0),
self.clsags.swap_remove(0),
commitments.swap_remove(0),
));
}
sorted.sort_by(|x, y| key_image_sort(&x.0, &y.0));
let mut rng = ChaCha20Rng::from_seed(self.transcript.rng_seed(b"pseudo_out_masks"));
let mut sum_pseudo_outs = Scalar::ZERO;
while !sorted.is_empty() {
let value = sorted.remove(0);
let mut mask = random_scalar(&mut rng);
if sorted.is_empty() {
mask = output_masks - sum_pseudo_outs;
} else {
sum_pseudo_outs += mask;
}
tx.prefix.inputs.push(Input::ToKey {
amount: None,
key_offsets: value.2.offsets.clone(),
key_image: value.0,
});
*value.3.write().unwrap() = Some(ClsagDetails::new(
ClsagInput::new(value.1.commitment().clone(), value.2).map_err(|_| {
panic!("Signing an input which isn't present in the ring we created for it")
})?,
mask,
));
self.clsags.push(value.4);
commitments.push(value.5);
}
let msg = tx.signature_hash();
// Iterate over each CLSAG calling sign
let mut shares = Vec::with_capacity(self.clsags.len());
let clsags = self
.clsags
.drain(..)
.map(|clsag| {
let (clsag, share) = clsag.sign(commitments.remove(0), &msg)?;
shares.push(share);
Ok(clsag)
})
.collect::<Result<_, _>>()?;
Ok((TransactionSignatureMachine { tx, clsags }, shares))
}
}
impl SignatureMachine<Transaction> for TransactionSignatureMachine {
type SignatureShare = Vec<SignatureShare<Ed25519>>;
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
self.clsags.iter().map(|clsag| clsag.read_share(reader)).collect()
}
fn complete(
mut self,
shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
let mut tx = self.tx;
match tx.rct_signatures.prunable {
RctPrunable::Null => panic!("Signing for RctPrunable::Null"),
RctPrunable::Clsag { ref mut clsags, ref mut pseudo_outs, .. } => {
for (c, clsag) in self.clsags.drain(..).enumerate() {
let (clsag, pseudo_out) = clsag.complete(
shares.iter().map(|(l, shares)| (*l, shares[c].clone())).collect::<HashMap<_, _>>(),
)?;
clsags.push(clsag);
pseudo_outs.push(pseudo_out);
}
}
RctPrunable::AggregateMlsagBorromean { .. } |
RctPrunable::MlsagBorromean { .. } |
RctPrunable::MlsagBulletproofs { .. } => {
unreachable!("attempted to sign a multisig TX which wasn't CLSAG")
}
}
Ok(tx)
}
}
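// Illustrative flow sketch (hypothetical function; the exchange of preprocesses and shares
// between participants over a coordination layer is elided, and rand_core's OsRng is assumed
// to be available):
fn multisig_flow_sketch(
signable: SignableTransaction,
keys: &ThresholdKeys<Ed25519>,
transcript: RecommendedTranscript,
) -> Result<(), TransactionError> {
use rand_core::OsRng;
let machine = signable.multisig(keys, transcript)?;
let (_machine, _preprocess) = machine.preprocess(&mut OsRng);
// From here: broadcast `_preprocess`, collect the other participants' preprocesses into a
// HashMap<Participant, _>, call `sign(commitments, &[])` (the message must be empty, as the
// machine generates its own), then `complete` with the collected shares for the Transaction
Ok(())
}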


@@ -1,326 +0,0 @@
use core::ops::Deref;
use std_shims::{sync::OnceLock, collections::HashSet};
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
use tokio::sync::Mutex;
use monero_serai::{
random_scalar,
rpc::{HttpRpc, Rpc},
wallet::{
ViewPair, Scanner,
address::{Network, AddressType, AddressSpec, AddressMeta, MoneroAddress},
SpendableOutput, Fee,
},
transaction::Transaction,
DEFAULT_LOCK_WINDOW,
};
pub fn random_address() -> (Scalar, ViewPair, MoneroAddress) {
let spend = random_scalar(&mut OsRng);
let spend_pub = &spend * ED25519_BASEPOINT_TABLE;
let view = Zeroizing::new(random_scalar(&mut OsRng));
(
spend,
ViewPair::new(spend_pub, view.clone()),
MoneroAddress {
meta: AddressMeta::new(Network::Mainnet, AddressType::Standard),
spend: spend_pub,
view: view.deref() * ED25519_BASEPOINT_TABLE,
},
)
}
// TODO: Support transactions already on-chain
// TODO: Don't have a side effect of mining more blocks than needed under race conditions
pub async fn mine_until_unlocked(rpc: &Rpc<HttpRpc>, addr: &str, tx_hash: [u8; 32]) {
// Mine until the TX is in a block
let mut height = rpc.get_height().await.unwrap();
let mut found = false;
while !found {
let block = rpc.get_block_by_number(height - 1).await.unwrap();
found = match block.txs.iter().find(|&&x| x == tx_hash) {
Some(_) => true,
None => {
height = rpc.generate_blocks(addr, 1).await.unwrap().1 + 1;
false
}
}
}
// Mine until tx's outputs are unlocked
let o_indexes: Vec<u64> = rpc.get_o_indexes(tx_hash).await.unwrap();
while rpc
.get_outs(&o_indexes)
.await
.unwrap()
.into_iter()
.all(|o| (!(o.unlocked && height >= (o.height + DEFAULT_LOCK_WINDOW))))
{
height = rpc.generate_blocks(addr, 1).await.unwrap().1 + 1;
}
}
// Mines 60 blocks and returns an unlocked miner TX output.
#[allow(dead_code)]
pub async fn get_miner_tx_output(rpc: &Rpc<HttpRpc>, view: &ViewPair) -> SpendableOutput {
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
// Mine 60 blocks to unlock a miner TX
let start = rpc.get_height().await.unwrap();
rpc
.generate_blocks(&view.address(Network::Mainnet, AddressSpec::Standard).to_string(), 60)
.await
.unwrap();
let block = rpc.get_block_by_number(start).await.unwrap();
scanner.scan(rpc, &block).await.unwrap().swap_remove(0).ignore_timelock().swap_remove(0)
}
/// Make sure the weight and fee match the expected calculation.
pub fn check_weight_and_fee(tx: &Transaction, fee_rate: Fee) {
let fee = tx.rct_signatures.base.fee;
let weight = tx.weight();
let expected_weight = fee_rate.calculate_weight_from_fee(fee);
assert_eq!(weight, expected_weight);
let expected_fee = fee_rate.calculate_fee_from_weight(weight);
assert_eq!(fee, expected_fee);
}
pub async fn rpc() -> Rpc<HttpRpc> {
let rpc = HttpRpc::new("http://serai:seraidex@127.0.0.1:18081".to_string()).await.unwrap();
// Only run the following setup once
if rpc.get_height().await.unwrap() != 1 {
return rpc;
}
let addr = MoneroAddress {
meta: AddressMeta::new(Network::Mainnet, AddressType::Standard),
spend: &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE,
view: &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE,
}
.to_string();
// Mine 40 blocks to ensure decoy availability
rpc.generate_blocks(&addr, 40).await.unwrap();
// Make sure we recognize the protocol
rpc.get_protocol().await.unwrap();
rpc
}
pub static SEQUENTIAL: OnceLock<Mutex<()>> = OnceLock::new();
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL.get_or_init(|| tokio::sync::Mutex::new(())).lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
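// Drop the lock on the sequential-test mutex before propagating the task's panic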
drop(guard);
Err(err).unwrap()
}
}).await;
}
)*
}
}
#[macro_export]
macro_rules! test {
(
$name: ident,
(
$first_tx: expr,
$first_checks: expr,
),
$((
$tx: expr,
$checks: expr,
)$(,)?),*
) => {
async_sequential! {
async fn $name() {
use core::{ops::Deref, any::Any};
use std::collections::HashSet;
#[cfg(feature = "multisig")]
use std::collections::HashMap;
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::constants::ED25519_BASEPOINT_TABLE;
#[cfg(feature = "multisig")]
use transcript::{Transcript, RecommendedTranscript};
#[cfg(feature = "multisig")]
use frost::{
curve::Ed25519,
Participant,
tests::{THRESHOLD, key_gen},
};
use monero_serai::{
random_scalar,
wallet::{
address::{Network, AddressSpec}, ViewPair, Scanner, Change, Decoys, FeePriority,
SignableTransaction, SignableTransactionBuilder,
},
};
use runner::{
random_address, rpc, mine_until_unlocked, get_miner_tx_output,
check_weight_and_fee,
};
type Builder = SignableTransactionBuilder;
// Run each function as both a single signer and as a multisig
#[allow(clippy::redundant_closure_call)]
for multisig in [false, true] {
// Only run the multisig variant if multisig is enabled
if multisig {
#[cfg(not(feature = "multisig"))]
continue;
}
let spend = Zeroizing::new(random_scalar(&mut OsRng));
#[cfg(feature = "multisig")]
let keys = key_gen::<_, Ed25519>(&mut OsRng);
let spend_pub = if !multisig {
spend.deref() * ED25519_BASEPOINT_TABLE
} else {
#[cfg(not(feature = "multisig"))]
panic!("Multisig branch called without the multisig feature");
#[cfg(feature = "multisig")]
keys[&Participant::new(1).unwrap()].group_key().0
};
let rpc = rpc().await;
let view = ViewPair::new(spend_pub, Zeroizing::new(random_scalar(&mut OsRng)));
let addr = view.address(Network::Mainnet, AddressSpec::Standard);
let miner_tx = get_miner_tx_output(&rpc, &view).await;
let protocol = rpc.get_protocol().await.unwrap();
let builder = SignableTransactionBuilder::new(
protocol,
rpc.get_fee(protocol, FeePriority::Unimportant).await.unwrap(),
Change::new(
&ViewPair::new(
&random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng))
),
false
),
);
let sign = |tx: SignableTransaction| {
let spend = spend.clone();
#[cfg(feature = "multisig")]
let keys = keys.clone();
async move {
if !multisig {
tx.sign(&mut OsRng, &spend).unwrap()
} else {
#[cfg(not(feature = "multisig"))]
panic!("Multisig branch called without the multisig feature");
#[cfg(feature = "multisig")]
{
let mut machines = HashMap::new();
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx
.clone()
.multisig(
&keys[&i],
RecommendedTranscript::new(b"Monero Serai Test Transaction"),
)
.unwrap(),
);
}
frost::tests::sign_without_caching(&mut OsRng, machines, &[])
}
}
}
};
// TODO: Generate a distinct wallet for each transaction to prevent overlap
let next_addr = addr;
let temp = Box::new({
let mut builder = builder.clone();
let decoys = Decoys::fingerprintable_canonical_select(
&mut OsRng,
&rpc,
protocol.ring_len(),
rpc.get_height().await.unwrap(),
&[miner_tx.clone()],
)
.await
.unwrap();
builder.add_input((miner_tx, decoys.first().unwrap().clone()));
let (tx, state) = ($first_tx)(rpc.clone(), builder, next_addr).await;
let fee_rate = tx.fee_rate().clone();
let signed = sign(tx).await;
rpc.publish_transaction(&signed).await.unwrap();
mine_until_unlocked(&rpc, &random_address().2.to_string(), signed.hash()).await;
let tx = rpc.get_transaction(signed.hash()).await.unwrap();
check_weight_and_fee(&tx, fee_rate);
let scanner =
Scanner::from_view(view.clone(), Some(HashSet::new()));
($first_checks)(rpc.clone(), tx, scanner, state).await
});
#[allow(unused_variables, unused_mut, unused_assignments)]
let mut carried_state: Box<dyn Any> = temp;
$(
let (tx, state) = ($tx)(
protocol,
rpc.clone(),
builder.clone(),
next_addr,
*carried_state.downcast().unwrap()
).await;
let fee_rate = tx.fee_rate().clone();
let signed = sign(tx).await;
rpc.publish_transaction(&signed).await.unwrap();
mine_until_unlocked(&rpc, &random_address().2.to_string(), signed.hash()).await;
let tx = rpc.get_transaction(signed.hash()).await.unwrap();
if stringify!($name) != "spend_one_input_to_two_outputs_no_change" {
// Skip the weight and fee check for this test: without a change output,
// the would-be change is added to the fee
check_weight_and_fee(&tx, fee_rate);
}
#[allow(unused_assignments)]
{
let scanner =
Scanner::from_view(view.clone(), Some(HashSet::new()));
carried_state =
Box::new(($checks)(rpc.clone(), tx, scanner, state).await);
}
)*
}
}
}
}
}

View File

@@ -1,305 +0,0 @@
use rand::RngCore;
use monero_serai::{
transaction::Transaction,
wallet::{address::SubaddressIndex, extra::PaymentId},
};
mod runner;
test!(
scan_standard_address,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Standard), 5);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
let dummy_payment_id = PaymentId::Encrypted([0u8; 8]);
assert_eq!(output.metadata.payment_id, Some(dummy_payment_id));
},
),
);
test!(
scan_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 1).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Subaddress(subaddress)), 5);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_integrated_address,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(view.address(Network::Mainnet, AddressSpec::Integrated(payment_id)), 5);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, Some(PaymentId::Encrypted(state.1)));
},
),
);
test!(
scan_featured_standard,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: false },
),
5,
);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
scan_featured_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 2).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: None,
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_featured_integrated,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: None,
payment_id: Some(payment_id),
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, Some(PaymentId::Encrypted(state.1)));
},
),
);
test!(
scan_featured_integrated_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(0, 3).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
scanner.register_subaddress(subaddress);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: Some(payment_id),
guaranteed: false,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8], SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, Some(PaymentId::Encrypted(state.1)));
assert_eq!(output.metadata.subaddress, Some(state.2));
},
),
);
test!(
scan_guaranteed_standard,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), None);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: true },
),
5,
);
(builder.build().unwrap(), scanner)
},
|_, tx: Transaction, _, mut state: Scanner| async move {
let output = state.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
scan_guaranteed_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(1, 0).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), None);
scanner.register_subaddress(subaddress);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: None,
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.subaddress, Some(state.1));
},
),
);
test!(
scan_guaranteed_integrated,
(
|_, mut builder: Builder, _| async move {
let view = runner::random_address().1;
let scanner = Scanner::from_view(view.clone(), None);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: None,
payment_id: Some(payment_id),
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8])| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, Some(PaymentId::Encrypted(state.1)));
},
),
);
test!(
scan_guaranteed_integrated_subaddress,
(
|_, mut builder: Builder, _| async move {
let subaddress = SubaddressIndex::new(1, 1).unwrap();
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), None);
scanner.register_subaddress(subaddress);
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
builder.add_payment(
view.address(
Network::Mainnet,
AddressSpec::Featured {
subaddress: Some(subaddress),
payment_id: Some(payment_id),
guaranteed: true,
},
),
5,
);
(builder.build().unwrap(), (scanner, payment_id, subaddress))
},
|_, tx: Transaction, _, mut state: (Scanner, [u8; 8], SubaddressIndex)| async move {
let output = state.0.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.metadata.payment_id, Some(PaymentId::Encrypted(state.1)));
assert_eq!(output.metadata.subaddress, Some(state.2));
},
),
);

View File

@@ -1,316 +0,0 @@
use rand_core::OsRng;
use monero_serai::{
transaction::Transaction,
wallet::{
extra::Extra, address::SubaddressIndex, ReceivedOutput, SpendableOutput, Decoys,
SignableTransactionBuilder,
},
rpc::{Rpc, HttpRpc},
Protocol,
};
mod runner;
// Set up inputs, select decoys, then add them to the TX builder
async fn add_inputs(
protocol: Protocol,
rpc: &Rpc<HttpRpc>,
outputs: Vec<ReceivedOutput>,
builder: &mut SignableTransactionBuilder,
) {
let mut spendable_outputs = Vec::with_capacity(outputs.len());
for output in outputs {
spendable_outputs.push(SpendableOutput::from(rpc, output).await.unwrap());
}
let decoys = Decoys::fingerprintable_canonical_select(
&mut OsRng,
rpc,
protocol.ring_len(),
rpc.get_height().await.unwrap(),
&spendable_outputs,
)
.await
.unwrap();
let inputs = spendable_outputs.into_iter().zip(decoys).collect::<Vec<_>>();
builder.add_inputs(&inputs);
}
test!(
spend_miner_output,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 5);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
},
),
);
test!(
spend_multiple_outputs,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
builder.add_payment(addr, 2000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
assert_eq!(outputs[1].commitment().amount, 2000000000000);
outputs
},
),
(
|protocol: Protocol, rpc, mut builder: Builder, addr, outputs: Vec<ReceivedOutput>| async move {
add_inputs(protocol, &rpc, outputs, &mut builder).await;
builder.add_payment(addr, 6);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 6);
},
),
);
test!(
// Ideally, this would be single_R, yet it isn't feasible to apply allow(non_snake_case) here
single_r_subaddress_send,
(
// Consume this builder for an output we can use in the future
// This is needed because we can't get the input from the passed in builder
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|protocol, rpc: Rpc<_>, _, _, outputs: Vec<ReceivedOutput>| async move {
use monero_serai::wallet::FeePriority;
let change_view = ViewPair::new(
&random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
let mut builder = SignableTransactionBuilder::new(
protocol,
rpc.get_fee(protocol, FeePriority::Unimportant).await.unwrap(),
Change::new(&change_view, false),
);
add_inputs(protocol, &rpc, vec![outputs.first().unwrap().clone()], &mut builder).await;
// Send to a subaddress
let sub_view = ViewPair::new(
&random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
builder.add_payment(
sub_view
.address(Network::Mainnet, AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap())),
1,
);
(builder.build().unwrap(), (change_view, sub_view))
},
|_, tx: Transaction, _, views: (ViewPair, ViewPair)| async move {
// Make sure the change can pick up its output
let mut change_scanner = Scanner::from_view(views.0, Some(HashSet::new()));
assert!(change_scanner.scan_transaction(&tx).not_locked().len() == 1);
// Make sure the subaddress can pick up its output
let mut sub_scanner = Scanner::from_view(views.1, Some(HashSet::new()));
sub_scanner.register_subaddress(SubaddressIndex::new(0, 1).unwrap());
let sub_outputs = sub_scanner.scan_transaction(&tx).not_locked();
assert!(sub_outputs.len() == 1);
assert_eq!(sub_outputs[0].commitment().amount, 1);
// Make sure only one R was included in TX extra
assert!(Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref())
.unwrap()
.keys()
.unwrap()
.1
.is_none());
},
),
);
test!(
spend_one_input_to_one_output_plus_change,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 2000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 2000000000000);
outputs
},
),
(
|protocol: Protocol, rpc, mut builder: Builder, addr, outputs: Vec<ReceivedOutput>| async move {
add_inputs(protocol, &rpc, outputs, &mut builder).await;
builder.add_payment(addr, 2);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 2);
},
),
);
test!(
spend_max_outputs,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|protocol: Protocol, rpc, mut builder: Builder, addr, outputs: Vec<ReceivedOutput>| async move {
add_inputs(protocol, &rpc, outputs, &mut builder).await;
for i in 0 .. 15 {
builder.add_payment(addr, i + 1);
}
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut scanned_tx = scanner.scan_transaction(&tx).not_locked();
let mut output_amounts = HashSet::new();
for i in 0 .. 15 {
output_amounts.insert(i + 1);
}
for _ in 0 .. 15 {
let output = scanned_tx.swap_remove(0);
let amount = output.commitment().amount;
assert!(output_amounts.contains(&amount));
output_amounts.remove(&amount);
}
},
),
);
test!(
spend_max_outputs_to_subaddresses,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|protocol: Protocol, rpc, mut builder: Builder, _, outputs: Vec<ReceivedOutput>| async move {
add_inputs(protocol, &rpc, outputs, &mut builder).await;
let view = runner::random_address().1;
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
let mut subaddresses = vec![];
for i in 0 .. 15 {
let subaddress = SubaddressIndex::new(0, i + 1).unwrap();
scanner.register_subaddress(subaddress);
builder.add_payment(
view.address(Network::Mainnet, AddressSpec::Subaddress(subaddress)),
u64::from(i + 1),
);
subaddresses.push(subaddress);
}
(builder.build().unwrap(), (scanner, subaddresses))
},
|_, tx: Transaction, _, mut state: (Scanner, Vec<SubaddressIndex>)| async move {
use std::collections::HashMap;
let mut scanned_tx = state.0.scan_transaction(&tx).not_locked();
let mut output_amounts_by_subaddress = HashMap::new();
for i in 0 .. 15 {
output_amounts_by_subaddress.insert(u64::try_from(i + 1).unwrap(), state.1[i]);
}
for _ in 0 .. 15 {
let output = scanned_tx.swap_remove(0);
let amount = output.commitment().amount;
assert!(output_amounts_by_subaddress.contains_key(&amount));
assert_eq!(output.metadata.subaddress, Some(output_amounts_by_subaddress[&amount]));
output_amounts_by_subaddress.remove(&amount);
}
},
),
);
test!(
spend_one_input_to_two_outputs_no_change,
(
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|protocol, rpc: Rpc<_>, _, addr, outputs: Vec<ReceivedOutput>| async move {
use monero_serai::wallet::FeePriority;
let mut builder = SignableTransactionBuilder::new(
protocol,
rpc.get_fee(protocol, FeePriority::Unimportant).await.unwrap(),
Change::fingerprintable(None),
);
add_inputs(protocol, &rpc, vec![outputs.first().unwrap().clone()], &mut builder).await;
builder.add_payment(addr, 10000);
builder.add_payment(addr, 50000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, ()| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 10000);
assert_eq!(outputs[1].commitment().amount, 50000);
// The remainder should get shunted to fee, which is fingerprintable
assert_eq!(tx.rct_signatures.base.fee, 1000000000000 - 10000 - 50000);
},
),
);

View File

@@ -1,13 +1,13 @@
[package]
name = "serai-db"
version = "0.1.0"
version = "0.1.1"
description = "A simple database trait and backends for it"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.65"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
@@ -18,7 +18,7 @@ workspace = true
[dependencies]
parity-db = { version = "0.4", default-features = false, optional = true }
rocksdb = { version = "0.21", default-features = false, features = ["lz4"], optional = true }
rocksdb = { version = "0.23", default-features = false, features = ["zstd"], optional = true }
[features]
parity-db = ["dep:parity-db"]

common/db/README.md Normal file
View File

@@ -0,0 +1,8 @@
# Serai DB
An inefficient, minimal abstraction around databases.
The abstraction offers `get`, `put`, and `del` with helper functions and macros
built on top. Database iteration is not offered, forcing the caller to manually
implement indexing schemes. This ensures wide compatibility across abstracted
databases.
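As a sketch of the indexing schemes this implies, assuming only the crate's `Get`/`DbTxn` traits (the key layout and `push_item` helper are hypothetical):

use serai_db::{Get, DbTxn};

// Hypothetical keys for a manually-indexed, append-only list
fn count_key() -> Vec<u8> {
  b"items_count".to_vec()
}
fn item_key(i: u32) -> Vec<u8> {
  [b"item".as_ref(), &i.to_le_bytes()].concat()
}

// Append an item by bumping an explicitly-stored counter
fn push_item(txn: &mut impl DbTxn, item: &[u8]) {
  let count =
    txn.get(count_key()).map_or(0, |count| u32::from_le_bytes(count.try_into().unwrap()));
  txn.put(item_key(count), item);
  txn.put(count_key(), (count + 1).to_le_bytes());
}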

View File

@@ -38,12 +38,21 @@ pub fn serai_db_key(
#[macro_export]
macro_rules! create_db {
($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)*
$(
$field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
#[derive(Clone, Debug)]
pub(crate) struct $field_name;
impl $field_name {
pub(crate) struct $field_name$(
<$($generic_name: $generic_type),+>
)?$(
(core::marker::PhantomData<($($generic_name),+)>)
)?;
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn key($($arg: $arg_type),*) -> Vec<u8> {
use scale::Encode;
$crate::serai_db_key(
@@ -52,18 +61,43 @@ macro_rules! create_db {
($($arg),*).encode()
)
}
pub(crate) fn set(txn: &mut impl DbTxn $(, $arg: $arg_type)*, data: &$field_type) {
let key = $field_name::key($($arg),*);
pub(crate) fn set(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*,
data: &$field_type
) {
let key = Self::key($($arg),*);
txn.put(&key, borsh::to_vec(data).unwrap());
}
pub(crate) fn get(getter: &impl Get, $($arg: $arg_type),*) -> Option<$field_type> {
getter.get($field_name::key($($arg),*)).map(|data| {
pub(crate) fn get(
getter: &impl Get,
$($arg: $arg_type),*
) -> Option<$field_type> {
getter.get(Self::key($($arg),*)).map(|data| {
borsh::from_slice(data.as_ref()).unwrap()
})
}
// Returns a PhantomData of all generic types so that, if a generic is only used in the
// value and not the keys, this fn doesn't have unused generic types
#[allow(dead_code)]
pub(crate) fn del(txn: &mut impl DbTxn $(, $arg: $arg_type)*) {
txn.del(&$field_name::key($($arg),*))
pub(crate) fn del(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> core::marker::PhantomData<($($($generic_name),+)?)> {
txn.del(&Self::key($($arg),*));
core::marker::PhantomData
}
pub(crate) fn take(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let key = Self::key($($arg),*);
let res = txn.get(&key).map(|data| borsh::from_slice(data.as_ref()).unwrap());
if res.is_some() {
txn.del(key);
}
res
}
}
)*
@@ -73,19 +107,30 @@ macro_rules! create_db {
#[macro_export]
macro_rules! db_channel {
($db_name: ident {
$($field_name: ident: ($($arg: ident: $arg_type: ty),*) -> $field_type: ty$(,)?)*
$($field_name: ident:
$(<$($generic_name: tt: $generic_type: tt),+>)?(
$($arg: ident: $arg_type: ty),*
) -> $field_type: ty$(,)?
)*
}) => {
$(
create_db! {
$db_name {
$field_name: ($($arg: $arg_type,)* index: u32) -> $field_type,
$field_name: $(<$($generic_name: $generic_type),+>)?(
$($arg: $arg_type,)*
index: u32
) -> $field_type
}
}
impl $field_name {
pub(crate) fn send(txn: &mut impl DbTxn $(, $arg: $arg_type)*, value: &$field_type) {
impl$(<$($generic_name: $generic_type),+>)? $field_name$(<$($generic_name),+>)? {
pub(crate) fn send(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
, value: &$field_type
) {
// Use index 0 to store the amount of messages
let messages_sent_key = $field_name::key($($arg),*, 0);
let messages_sent_key = Self::key($($arg,)* 0);
let messages_sent = txn.get(&messages_sent_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
@@ -96,19 +141,35 @@ macro_rules! db_channel {
// at the same time
let index_to_use = messages_sent + 2;
$field_name::set(txn, $($arg),*, index_to_use, value);
Self::set(txn, $($arg,)* index_to_use, value);
}
pub(crate) fn try_recv(txn: &mut impl DbTxn $(, $arg: $arg_type)*) -> Option<$field_type> {
let messages_recvd_key = $field_name::key($($arg),*, 1);
pub(crate) fn peek(
getter: &impl Get
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = getter.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
Self::get(getter, $($arg,)* index_to_read)
}
pub(crate) fn try_recv(
txn: &mut impl DbTxn
$(, $arg: $arg_type)*
) -> Option<$field_type> {
let messages_recvd_key = Self::key($($arg,)* 1);
let messages_recvd = txn.get(&messages_recvd_key).map(|counter| {
u32::from_le_bytes(counter.try_into().unwrap())
}).unwrap_or(0);
let index_to_read = messages_recvd + 2;
let res = $field_name::get(txn, $($arg),*, index_to_read);
let res = Self::get(txn, $($arg,)* index_to_read);
if res.is_some() {
$field_name::del(txn, $($arg),*, index_to_read);
Self::del(txn, $($arg,)* index_to_read);
txn.put(&messages_recvd_key, (messages_recvd + 1).to_le_bytes());
}
res
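To illustrate the channel semantics after this change, a sketch of declaring and using one of these channels; `ExampleDb` and `Messages` are hypothetical names, and the caller is assumed to have the macros' `scale` and `borsh` dependencies:

use serai_db::{Get, DbTxn, Db, create_db, db_channel};

db_channel! {
  ExampleDb {
    // A FIFO channel of messages, keyed by a session index
    Messages: (session: u32) -> Vec<u8>,
  }
}

fn pump(db: &mut impl Db) {
  let mut txn = db.txn();
  Messages::send(&mut txn, 0, &vec![1, 2, 3]);
  // `peek` reads the next message without consuming it, `try_recv` consumes it
  assert_eq!(Messages::peek(&txn, 0), Some(vec![1, 2, 3]));
  assert_eq!(Messages::try_recv(&mut txn, 0), Some(vec![1, 2, 3]));
  assert_eq!(Messages::try_recv(&mut txn, 0), None);
  txn.commit();
}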

View File

@@ -14,26 +14,87 @@ mod parity_db;
#[cfg(feature = "parity-db")]
pub use parity_db::{ParityDb, new_parity_db};
/// An object implementing get.
/// An object implementing `get`.
pub trait Get {
/// Get a value from the database.
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
}
/// An atomic database operation.
/// An atomic database transaction.
///
/// A transaction is only required to atomically commit. It is not required that two `Get` calls
/// made with the same transaction return the same result, if another transaction wrote to that
/// key.
///
/// If two transactions are created, and both write (including deletions) to the same key, behavior
/// is undefined. The transaction may block, deadlock, panic, overwrite one of the two values
/// randomly, or any other action, at time of write or at time of commit.
#[must_use]
pub trait DbTxn: Send + Get {
pub trait DbTxn: Sized + Send + Get {
/// Write a value to this key.
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
/// Delete the value from this key.
fn del(&mut self, key: impl AsRef<[u8]>);
/// Commit this transaction.
fn commit(self);
/// Close this transaction.
///
/// This is equivalent to `Drop` on transactions which can be dropped. This is explicit and works
/// with transactions which can't be dropped.
fn close(self) {
drop(self);
}
}
/// A database supporting atomic operations.
// Credit for the idea goes to https://jack.wrenn.fyi/blog/undroppable
pub struct Undroppable<T>(Option<T>);
impl<T> Drop for Undroppable<T> {
fn drop(&mut self) {
// Use an assertion at compile time to prevent this code from compiling if generated
#[allow(clippy::assertions_on_constants)]
const {
assert!(false, "Undroppable DbTxn was dropped. Ensure all code paths call commit or close");
}
}
}
impl<T: DbTxn> Get for Undroppable<T> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
self.0.as_ref().unwrap().get(key)
}
}
impl<T: DbTxn> DbTxn for Undroppable<T> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.0.as_mut().unwrap().put(key, value);
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.0.as_mut().unwrap().del(key);
}
fn commit(mut self) {
self.0.take().unwrap().commit();
let _ = core::mem::ManuallyDrop::new(self);
}
fn close(mut self) {
drop(self.0.take().unwrap());
let _ = core::mem::ManuallyDrop::new(self);
}
}
/// A database supporting atomic transactions.
pub trait Db: 'static + Send + Sync + Clone + Get {
/// The type representing a database transaction.
type Transaction<'a>: DbTxn;
/// Calculate a key for a database entry.
///
/// Keys are domain-separated by the database, the item within the database, and the item's key
/// itself.
fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
let db_len = u8::try_from(db_dst.len()).unwrap();
let dst_len = u8::try_from(item_dst.len()).unwrap();
[[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
}
fn txn(&mut self) -> Self::Transaction<'_>;
/// Open a new transaction which may be dropped.
fn unsafe_txn(&mut self) -> Self::Transaction<'_>;
/// Open a new transaction which must be committed or closed.
fn txn(&mut self) -> Undroppable<Self::Transaction<'_>> {
Undroppable(Some(self.unsafe_txn()))
}
}
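A minimal sketch of what `Undroppable` enforces on callers; `set_if_absent` is a hypothetical helper:

use serai_db::{Get, DbTxn, Db};

fn set_if_absent(db: &mut impl Db, key: &[u8], value: &[u8]) {
  let mut txn = db.txn();
  if txn.get(key).is_some() {
    // Discard the transaction without writing
    txn.close();
    return;
  }
  txn.put(key, value);
  txn.commit();
  // Letting `txn` fall out of scope instead would instantiate Undroppable's Drop,
  // whose `const { assert!(false, ...) }` fails the build
}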

View File

@@ -11,7 +11,7 @@ use crate::*;
#[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
impl<'a> Get for MemDbTxn<'a> {
impl Get for MemDbTxn<'_> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
if self.2.contains(key.as_ref()) {
return None;
@@ -23,7 +23,7 @@ impl<'a> Get for MemDbTxn<'a> {
.or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
}
}
impl<'a> DbTxn for MemDbTxn<'a> {
impl DbTxn for MemDbTxn<'_> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.2.remove(key.as_ref());
self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());
@@ -74,7 +74,7 @@ impl Get for MemDb {
}
impl Db for MemDb {
type Transaction<'a> = MemDbTxn<'a>;
fn txn(&mut self) -> MemDbTxn<'_> {
fn unsafe_txn(&mut self) -> MemDbTxn<'_> {
MemDbTxn(self, HashMap::new(), HashSet::new())
}
}

View File

@@ -4,6 +4,7 @@ pub use ::parity_db::{Options, Db as ParityDb};
use crate::*;
#[must_use]
pub struct Transaction<'a>(&'a Arc<ParityDb>, Vec<(u8, Vec<u8>, Option<Vec<u8>>)>);
impl Get for Transaction<'_> {
@@ -11,7 +12,7 @@ impl Get for Transaction<'_> {
let mut res = self.0.get(&key);
for change in &self.1 {
if change.1 == key.as_ref() {
res = change.2.clone();
res.clone_from(&change.2);
}
}
res
@@ -36,7 +37,7 @@ impl Get for Arc<ParityDb> {
}
impl Db for Arc<ParityDb> {
type Transaction<'a> = Transaction<'a>;
fn txn(&mut self) -> Self::Transaction<'_> {
fn unsafe_txn(&mut self) -> Self::Transaction<'_> {
Transaction(self, vec![])
}
}

View File

@@ -1,42 +1,66 @@
use std::sync::Arc;
use rocksdb::{DBCompressionType, ThreadMode, SingleThreaded, Options, Transaction, TransactionDB};
use rocksdb::{
DBCompressionType, ThreadMode, SingleThreaded, LogLevel, WriteOptions,
Transaction as RocksTransaction, Options, OptimisticTransactionDB,
};
use crate::*;
impl<T: ThreadMode> Get for Transaction<'_, TransactionDB<T>> {
#[must_use]
pub struct Transaction<'a, T: ThreadMode>(
RocksTransaction<'a, OptimisticTransactionDB<T>>,
&'a OptimisticTransactionDB<T>,
);
impl<T: ThreadMode> Get for Transaction<'_, T> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
self.get(key).expect("couldn't read from RocksDB via transaction")
self.0.get(key).expect("couldn't read from RocksDB via transaction")
}
}
impl<T: ThreadMode> DbTxn for Transaction<'_, TransactionDB<T>> {
impl<T: ThreadMode> DbTxn for Transaction<'_, T> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
Transaction::put(self, key, value).expect("couldn't write to RocksDB via transaction")
self.0.put(key, value).expect("couldn't write to RocksDB via transaction")
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.delete(key).expect("couldn't delete from RocksDB via transaction")
self.0.delete(key).expect("couldn't delete from RocksDB via transaction")
}
fn commit(self) {
Transaction::commit(self).expect("couldn't commit to RocksDB via transaction")
self.0.commit().expect("couldn't commit to RocksDB via transaction");
self.1.flush_wal(true).expect("couldn't flush RocksDB WAL");
self.1.flush().expect("couldn't flush RocksDB");
}
}
impl<T: ThreadMode> Get for Arc<TransactionDB<T>> {
impl<T: ThreadMode> Get for Arc<OptimisticTransactionDB<T>> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
TransactionDB::get(self, key).expect("couldn't read from RocksDB")
OptimisticTransactionDB::get(self, key).expect("couldn't read from RocksDB")
}
}
impl<T: ThreadMode + 'static> Db for Arc<TransactionDB<T>> {
type Transaction<'a> = Transaction<'a, TransactionDB<T>>;
fn txn(&mut self) -> Self::Transaction<'_> {
self.transaction()
impl<T: Send + ThreadMode + 'static> Db for Arc<OptimisticTransactionDB<T>> {
type Transaction<'a> = Transaction<'a, T>;
fn unsafe_txn(&mut self) -> Self::Transaction<'_> {
let mut opts = WriteOptions::default();
opts.set_sync(true);
Transaction(self.transaction_opt(&opts, &Default::default()), &**self)
}
}
pub type RocksDB = Arc<TransactionDB<SingleThreaded>>;
pub type RocksDB = Arc<OptimisticTransactionDB<SingleThreaded>>;
pub fn new_rocksdb(path: &str) -> RocksDB {
let mut options = Options::default();
options.create_if_missing(true);
options.set_compression_type(DBCompressionType::Lz4);
Arc::new(TransactionDB::open(&options, &Default::default(), path).unwrap())
options.set_compression_type(DBCompressionType::Zstd);
options.set_wal_compression_type(DBCompressionType::Zstd);
// 10 MB
options.set_max_total_wal_size(10 * 1024 * 1024);
options.set_wal_size_limit_mb(10);
options.set_log_level(LogLevel::Warn);
// 1 MB
options.set_max_log_file_size(1024 * 1024);
options.set_recycle_log_file_num(1);
Arc::new(OptimisticTransactionDB::open(&options, path).unwrap())
}
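A sketch exercising the new optimistic-transaction backend, assuming the crate's `rocksdb` feature is enabled (the path is illustrative):

use serai_db::{Get, DbTxn, Db, new_rocksdb};

fn main() {
  let mut db = new_rocksdb("/tmp/serai-db-example");
  let mut txn = db.unsafe_txn();
  txn.put(b"height", 1u64.to_le_bytes());
  // With this change, commit also flushes the WAL and the memtables for durability
  txn.commit();
  assert_eq!(db.get(b"height"), Some(1u64.to_le_bytes().to_vec()));
}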

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/env"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.60"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true

View File

@@ -0,0 +1,20 @@
[package]
name = "patchable-async-sleep"
version = "0.1.0"
description = "An async sleep function, patchable to the preferred runtime"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/patchable-async-sleep"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["async", "sleep", "tokio", "smol", "async-std"]
edition = "2021"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
tokio = { version = "1", default-features = false, features = [ "time"] }

View File

@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2024 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

View File

@@ -0,0 +1,7 @@
# Patchable Async Sleep
An async sleep function, patchable to the preferred runtime.
This crate is `tokio`-backed. Applications which don't want to use `tokio`
should patch this crate to one which works with their preferred runtime. The
point of this crate is to have a minimal API surface, making such a patch trivial.

View File

@@ -0,0 +1,10 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::time::Duration;
/// Sleep for the specified duration.
pub fn sleep(duration: Duration) -> impl core::future::Future<Output = ()> {
tokio::time::sleep(duration)
}
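A usage sketch, assuming a `tokio` runtime is available to the caller:

use core::time::Duration;
use patchable_async_sleep::sleep;

#[tokio::main]
async fn main() {
  // Resolves via tokio here, or via whichever runtime this crate is patched to
  sleep(Duration::from_millis(100)).await;
}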

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/simple-requ
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["http", "https", "async", "request", "ssl"]
edition = "2021"
rust-version = "1.64"
rust-version = "1.71"
[package.metadata.docs.rs]
all-features = true
@@ -23,7 +23,7 @@ hyper-util = { version = "0.1", default-features = false, features = ["http1", "
http-body-util = { version = "0.1", default-features = false }
tokio = { version = "1", default-features = false }
hyper-rustls = { version = "0.26", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
hyper-rustls = { version = "0.27", default-features = false, features = ["http1", "ring", "rustls-native-certs", "native-tokio"], optional = true }
zeroize = { version = "1", optional = true }
base64ct = { version = "1", features = ["alloc"], optional = true }

View File

@@ -55,6 +55,10 @@ impl Client {
fn connector() -> Connector {
let mut res = HttpConnector::new();
res.set_keepalive(Some(core::time::Duration::from_secs(60)));
res.set_nodelay(true);
res.set_reuse_address(true);
#[cfg(feature = "tls")]
res.enforce_http(false);
#[cfg(feature = "tls")]
let res = HttpsConnectorBuilder::new()
.with_native_roots()
@@ -68,7 +72,9 @@ impl Client {
pub fn with_connection_pool() -> Client {
Client {
connection: Connection::ConnectionPool(
HyperClient::builder(TokioExecutor::new()).build(Self::connector()),
HyperClient::builder(TokioExecutor::new())
.pool_idle_timeout(core::time::Duration::from_secs(60))
.build(Self::connector()),
),
}
}

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
rust-version = "1.70"
rust-version = "1.80"
[package.metadata.docs.rs]
all-features = true
@@ -17,8 +17,8 @@ rustdoc-args = ["--cfg", "docsrs"]
workspace = true
[dependencies]
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "once"] }
hashbrown = { version = "0.14", default-features = false, features = ["ahash", "inline-more"] }
spin = { version = "0.9", default-features = false, features = ["use_ticket_mutex", "lazy"] }
hashbrown = { version = "0.15", default-features = false, features = ["default-hasher", "inline-more"] }
[features]
std = []

View File

@@ -26,27 +26,6 @@ mod mutex_shim {
pub use mutex_shim::{ShimMutex as Mutex, MutexGuard};
#[cfg(feature = "std")]
pub use std::sync::OnceLock;
pub use std::sync::LazyLock;
#[cfg(not(feature = "std"))]
mod oncelock_shim {
use spin::Once;
pub struct OnceLock<T>(Once<T>);
impl<T> OnceLock<T> {
pub const fn new() -> OnceLock<T> {
OnceLock(Once::new())
}
pub fn get(&self) -> Option<&T> {
self.0.poll()
}
pub fn get_mut(&mut self) -> Option<&mut T> {
self.0.get_mut()
}
pub fn get_or_init<F: FnOnce() -> T>(&self, f: F) -> &T {
self.0.call_once(f)
}
}
}
#[cfg(not(feature = "std"))]
pub use oncelock_shim::*;
pub use spin::Lazy as LazyLock;
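A usage sketch of the replacement, assuming this module is exposed as `std_shims::sync`; the `rust-version` bump to 1.80 matches `std::sync::LazyLock`'s stabilization:

use std_shims::sync::LazyLock;

// One-time, lazy initialization, replacing the removed OnceLock shim
static TABLE: LazyLock<Vec<u8>> = LazyLock::new(|| vec![0; 32]);

fn main() {
  assert_eq!(TABLE.len(), 32);
}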

common/task/Cargo.toml Normal file
View File

@@ -0,0 +1,22 @@
[package]
name = "serai-task"
version = "0.1.0"
description = "A task schema for Serai services"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/common/task"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
publish = false
rust-version = "1.75"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
log = { version = "0.4", default-features = false, features = ["std"] }
tokio = { version = "1", default-features = false, features = ["macros", "sync", "time"] }

View File

@@ -1,6 +1,6 @@
AGPL-3.0-only license
Copyright (c) 2022-2023 Luke Parker
Copyright (c) 2022-2024 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as

common/task/README.md Normal file
View File

@@ -0,0 +1,3 @@
# Task
A schema to define tasks to be run ad infinitum.

common/task/src/lib.rs Normal file
View File

@@ -0,0 +1,161 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![deny(missing_docs)]
use core::{
fmt::{self, Debug},
future::Future,
time::Duration,
};
use tokio::sync::mpsc;
mod type_name;
/// A handle for a task.
///
/// The task will only stop running once all handles for it are dropped.
//
// `run_now` isn't infallible if the task may have been closed. `run_now` on a closed task would
// either need to panic (historic behavior), silently drop the fact the task can't be run, or
// return an error. Instead of having a potential panic, and instead of modeling the error
// behavior, this task can't be closed unless all handles are dropped, ensuring calls to `run_now`
// are infallible.
#[derive(Clone)]
pub struct TaskHandle {
run_now: mpsc::Sender<()>,
#[allow(dead_code)] // This is used to track if all handles have been dropped
close: mpsc::Sender<()>,
}
/// A task's internal structures.
pub struct Task {
run_now: mpsc::Receiver<()>,
close: mpsc::Receiver<()>,
}
impl Task {
/// Create a new task definition.
pub fn new() -> (Self, TaskHandle) {
// Uses a capacity of 1, as a single queued request to run as soon as possible
// satisfies all such requests
let (run_now_send, run_now_recv) = mpsc::channel(1);
// And any call to close satisfies all calls to close
let (close_send, close_recv) = mpsc::channel(1);
(
Self { run_now: run_now_recv, close: close_recv },
TaskHandle { run_now: run_now_send, close: close_send },
)
}
}
impl TaskHandle {
/// Tell the task to run now (and not whenever its next iteration on a timer is).
pub fn run_now(&self) {
#[allow(clippy::match_same_arms)]
match self.run_now.try_send(()) {
Ok(()) => {}
// NOP on full, as this task will already be run as soon as possible
Err(mpsc::error::TrySendError::Full(())) => {}
Err(mpsc::error::TrySendError::Closed(())) => {
// The task should only be closed if all handles are dropped, and this one hasn't been
panic!("task was unexpectedly closed when calling run_now")
}
}
}
}
/// An enum which can't be constructed, representing that the task does not error.
pub enum DoesNotError {}
impl Debug for DoesNotError {
fn fmt(&self, _: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
// This type can't be constructed so we'll never have a `&self` to call this fn with
unreachable!()
}
}
/// A task to be continually ran.
pub trait ContinuallyRan: Sized + Send {
/// The amount of seconds before this task should be polled again.
const DELAY_BETWEEN_ITERATIONS: u64 = 5;
/// The maximum amount of seconds before this task should be run again.
///
/// Upon error, the amount of time waited will be linearly increased until this limit.
const MAX_DELAY_BETWEEN_ITERATIONS: u64 = 120;
/// The error potentially yielded upon running an iteration of this task.
type Error: Debug;
/// Run an iteration of the task.
///
/// If this returns `true`, all dependents of the task will immediately have a new iteration run
/// (without waiting for whatever timer they were already on).
fn run_iteration(&mut self) -> impl Send + Future<Output = Result<bool, Self::Error>>;
/// Continually run the task.
fn continually_run(
mut self,
mut task: Task,
dependents: Vec<TaskHandle>,
) -> impl Send + Future<Output = ()> {
async move {
// The default number of seconds to sleep before running the task again
let default_sleep_before_next_task = Self::DELAY_BETWEEN_ITERATIONS;
// The current number of seconds to sleep before running the task again
// We increment this upon errors in order to not flood the logs with errors
let mut current_sleep_before_next_task = default_sleep_before_next_task;
let increase_sleep_before_next_task = |current_sleep_before_next_task: &mut u64| {
let new_sleep = *current_sleep_before_next_task + default_sleep_before_next_task;
// Cap the sleep at MAX_DELAY_BETWEEN_ITERATIONS (two minutes by default)
*current_sleep_before_next_task = new_sleep.min(Self::MAX_DELAY_BETWEEN_ITERATIONS);
};
loop {
// If we were told to close, or all handles were dropped, stop running the task
{
let should_close = task.close.try_recv();
match should_close {
Ok(()) | Err(mpsc::error::TryRecvError::Disconnected) => break,
Err(mpsc::error::TryRecvError::Empty) => {}
}
}
match self.run_iteration().await {
Ok(run_dependents) => {
// Upon a successful (error-free) loop iteration, reset the amount of time we sleep
current_sleep_before_next_task = default_sleep_before_next_task;
if run_dependents {
for dependent in &dependents {
dependent.run_now();
}
}
}
Err(e) => {
// Get the type name
let type_name = type_name::strip_type_name(core::any::type_name::<Self>());
// Print the error as a warning, prefixed by the task's type
log::warn!("{type_name}: {e:?}");
increase_sleep_before_next_task(&mut current_sleep_before_next_task);
}
}
// Don't run the task again for another few seconds UNLESS told to run now
/*
We could replace tokio::mpsc with async_channel, tokio::time::sleep with
patchable_async_sleep::sleep, and tokio::select with futures_lite::future::or
It isn't worth the effort when patchable_async_sleep::sleep will still resolve to tokio
*/
tokio::select! {
() = tokio::time::sleep(Duration::from_secs(current_sleep_before_next_task)) => {},
msg = task.run_now.recv() => {
// Check if this is firing because the handle was dropped
if msg.is_none() {
break;
}
},
}
}
}
}
}
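A sketch of defining and driving a task under this schema; `Heartbeat` is a hypothetical task:

use serai_task::{Task, ContinuallyRan, DoesNotError};

struct Heartbeat;
impl ContinuallyRan for Heartbeat {
  type Error = DoesNotError;
  async fn run_iteration(&mut self) -> Result<bool, Self::Error> {
    println!("heartbeat");
    // Returning true immediately wakes all dependent tasks
    Ok(true)
  }
}

#[tokio::main]
async fn main() {
  let (task, handle) = Task::new();
  // Runs until every TaskHandle is dropped
  tokio::spawn(Heartbeat.continually_run(task, vec![]));
  // Skip the timer and run an iteration now
  handle.run_now();
  tokio::time::sleep(core::time::Duration::from_secs(1)).await;
  drop(handle);
}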

View File

@@ -0,0 +1,31 @@
/// Strip the modules from a type name.
// This may be of the form `a::b::C`, in which case we only want `C`
pub(crate) fn strip_type_name(full_type_name: &'static str) -> String {
// It also may be `a::b::C<d::e::F>`, in which case, we only attempt to strip `a::b`
let mut by_generics = full_type_name.split('<');
// Strip to just `C`
let full_outer_object_name = by_generics.next().unwrap();
let mut outer_object_name_parts = full_outer_object_name.split("::");
let mut last_part_in_outer_object_name = outer_object_name_parts.next().unwrap();
for part in outer_object_name_parts {
last_part_in_outer_object_name = part;
}
// Push back on the generic terms
let mut type_name = last_part_in_outer_object_name.to_string();
for generic in by_generics {
type_name.push('<');
type_name.push_str(generic);
}
type_name
}
#[test]
fn test_strip_type_name() {
assert_eq!(strip_type_name("core::option::Option"), "Option");
assert_eq!(
strip_type_name("core::option::Option<alloc::string::String>"),
"Option<alloc::string::String>"
);
}

View File

@@ -7,7 +7,7 @@ repository = "https://github.com/serai-dex/serai/tree/develop/common/zalloc"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
rust-version = "1.60"
rust-version = "1.77"
[package.metadata.docs.rs]
all-features = true
@@ -19,8 +19,10 @@ workspace = true
[dependencies]
zeroize = { version = "^1.5", default-features = false }
[build-dependencies]
rustversion = { version = "1", default-features = false }
[features]
std = ["zeroize/std"]
default = ["std"]
# Commented for now as it requires nightly and we don't use nightly
# allocator = []
allocator = []

Some files were not shown because too many files have changed in this diff.