358 Commits

Author SHA1 Message Date
Luke Parker
7685cc305f Correct monero-serai for aggressive clippy 2023-07-08 01:46:44 -04:00
Luke Parker
3ca76c51e4 Refine from pedantic, remove erratic consts 2023-07-08 01:26:08 -04:00
Luke Parker
286e96ccd8 Remove must_use spam 2023-07-08 01:06:38 -04:00
Luke Parker
f93106af6b Mostly lint Monero 2023-07-08 00:57:07 -04:00
Luke Parker
dd5fb0df47 Finish updating crypto to new clippy 2023-07-07 23:08:14 -04:00
Luke Parker
3a626cc51e Port common, and most of crypto, to a more aggressive clippy 2023-07-07 22:05:07 -04:00
Luke Parker
3c6cc42c23 Modify get_transactions to split requests as to not hit the restricted RPC limits 2023-07-05 22:12:34 -04:00
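A minimal sketch of the splitting pattern this commit describes, with illustrative names (`fetch` stands in for the underlying RPC call; restricted monerod nodes cap how many transactions one batch request may return):

```rust
// Split a batch request into chunks so no single RPC call exceeds `limit`
// items, as restricted RPC nodes reject overly large batches.
fn fetch_chunked<T, R, E>(
  items: &[T],
  limit: usize,
  mut fetch: impl FnMut(&[T]) -> Result<Vec<R>, E>,
) -> Result<Vec<R>, E> {
  let mut out = Vec::with_capacity(items.len());
  for chunk in items.chunks(limit) {
    out.extend(fetch(chunk)?);
  }
  Ok(out)
}

// Usage sketch: fetch_chunked(&tx_hashes, 100, |chunk| get_transactions(chunk))
```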
Luke Parker
93fe8a52dd Don't call get_height every block 2023-07-05 20:11:31 -04:00
Luke Parker
249f7b904f Support multiple RPCs in the reserialize_chain bin 2023-07-05 19:50:35 -04:00
Luke Parker
8ce8657d34 Correct how Monero integration tests are run 2023-07-05 19:11:52 -04:00
Luke Parker
e5a196504c Add a bin to download a chain, over RPC, reserializing and hashing every item
Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.
2023-07-05 18:59:22 -04:00
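A sketch of the bin's overall shape under stated assumptions (tokio's JoinSet for the workers; the actual fetch/reserialize/hash work is elided). Each worker owns a disjoint block range, mirroring how the real bin gives each task its own persistent HTTP client:

```rust
use tokio::task::JoinSet;

// Spawn one worker per disjoint block range. In the real bin, each worker
// would own a persistent HTTP client, fetch every block and transaction in
// its range, reserialize them, and compare hashes.
async fn process_chain(tip: usize, workers: usize) {
  let mut set = JoinSet::new();
  let per = tip.div_ceil(workers);
  for w in 0 .. workers {
    let (start, end) = (w * per, ((w + 1) * per).min(tip));
    set.spawn(async move {
      for block in start .. end {
        let _ = block; // fetch, reserialize, hash, compare
      }
    });
  }
  while set.join_next().await.is_some() {}
}
```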
Luke Parker
b195db0929 Fix the known issue with the DSA
I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).
2023-07-04 18:11:57 -04:00
Boog900
89eef95fb3 Monero: support for legacy transactions (#308)
* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums). (See the sketch after this PR body.)

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-04 17:18:05 -04:00
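A hedged sketch of the modeling this PR describes; the variant and struct names are illustrative reconstructions, not necessarily the crate's exact definitions:

```rust
// An explicit RctType enum replaces a cached raw type byte in RctSignatures.
enum RctType {
  Null,
  MlsagAggregate,  // "RctTypeFull": one aggregate MLSAG, hence one input
  MlsagIndividual, // one MLSAG per input
  Bulletproofs,
  Clsag,
}

struct Bulletproof; // stand-in for the actual range proof struct

struct RctPrunable {
  // A single aggregated range proof, not Vec<Bulletproof>: Monero's
  // consensus enforces one aggregated proof per transaction.
  bulletproof: Bulletproof,
}
```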
Luke Parker
0f80f6ec7d Move location of serai-client in Cargo.toml 2023-07-04 13:21:52 -04:00
Luke Parker
740274210b cargo update
Resolves a yanked crate
2023-07-04 13:20:59 -04:00
Luke Parker
6ac57be4e3 Disable Rust caching
We hit the cache limit after just one or two builds, making it infeasible.
2023-07-03 18:03:09 -04:00
Luke Parker
08e7ca955b Correct depends for processor-messages 2023-07-03 12:40:56 -04:00
Luke Parker
239800cfcf Update monero-tests workflow to new name for the processor 2023-07-03 09:12:29 -04:00
Luke Parker
d49c636f0f Use serai- prefixes on Serai-specific packages
Fixes deny.toml, also runs a minor cargo update shrinking the tree.
2023-07-03 08:50:23 -04:00
Boog900
30834fe4d2 std-shims: fix Read for &[u8] 2023-07-03 07:13:06 -04:00
GitHub Actions
d928b787f7 Update nightly 2023-07-03 07:10:53 -04:00
Luke Parker
c7b232949a Correct deny.toml with inclusion of message-queue 2023-07-03 07:09:35 -04:00
Luke Parker
acf2469dd8 cargo update
Resolves https://github.com/serai-dex/serai/security/dependabot/29
2023-07-01 20:27:03 -04:00
Luke Parker
6267acf3df Add a message queue
This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written as never fail.

Primarily needs testing and a proper ID.
2023-07-01 08:53:46 -04:00
Luke Parker
a95ecc2512 Represent RCT amounts with None, not 0.
Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.
2023-06-29 13:16:51 -04:00
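A minimal sketch of the distinction, with hypothetical names:

```rust
// A transparent amount (v1 TXs, v2 miner TXs) is Some(amount), including an
// explicit Some(0). A RingCT-masked amount is None. Using 0 for RingCT
// conflated "hidden" with "explicitly zero".
struct Output {
  amount: Option<u64>,
}

fn requires_commitment_decryption(output: &Output) -> bool {
  output.amount.is_none()
}
```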
Luke Parker
ac708b3b2a no-std support for monero-serai (#311)
* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt
2023-06-29 04:14:29 -04:00
Luke Parker
d25c668ee4 Replace lazy_static with OnceLock inside monero-serai
lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.
2023-06-28 21:45:57 -04:00
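A minimal sketch of the swap, with a hypothetical table standing in for monero-serai's actual statics:

```rust
use std::sync::OnceLock;

// OnceLock initializes on first access. On std it blocks via the OS rather
// than spinning; the no_std build still needs a spin-based shim.
static GENERATORS: OnceLock<Vec<u64>> = OnceLock::new();

fn generators() -> &'static [u64] {
  // Hypothetical expensive computation, run exactly once.
  GENERATORS.get_or_init(|| (0 .. 64).map(|i| 1u64 << i).collect())
}
```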
GitHub Actions
8ced63eaac Update nightly 2023-06-28 18:42:19 -04:00
Luke Parker
f6a497f3ac Slight terminology correction in sync test
Also correct a mistake from merging the most recent polkadot version.
2023-06-28 15:20:50 -04:00
akildemir
790fe7ee23 fix tributary sync test 2023-06-28 15:01:55 -04:00
Luke Parker
8c020abb86 Update to substrate polkadot-v0.9.43 2023-06-28 14:57:58 -04:00
Luke Parker
21f0bb2721 Pin setup-protoc to v2.0.0 2023-06-28 12:28:14 -04:00
Luke Parker
385ed2e97a Build no-std tests with RISC-V 32 IMAC
Turns out wasm still has std, making it suboptimal to use here.
2023-06-28 12:26:53 -04:00
Luke Parker
fca567f61d cargo update
Resolves an openssl advisory and nets ~-8 crates.
2023-06-22 06:25:33 -04:00
Luke Parker
dfa3106a38 Fix incorrect sig_hash generation
sig_hash was used as a challenge. Challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.
2023-06-08 06:38:25 -04:00
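A minimal sketch of the corrected form, assuming Ristretto and SHA-512 purely for illustration (not the project's actual transcript):

```rust
use curve25519_dalek::{ristretto::RistrettoPoint, scalar::Scalar};
use sha2::{Digest, Sha512};

// A Schnorr challenge must bind the nonce commitment R: c = H(R, A, m).
// Hashing only (A, m) leaves c independent of R, so a forger can pick s
// freely and solve R = s*G - c*A for a "valid" signature.
fn challenge(r: &RistrettoPoint, a: &RistrettoPoint, m: &[u8]) -> Scalar {
  let mut hasher = Sha512::new();
  hasher.update(r.compress().as_bytes());
  hasher.update(a.compress().as_bytes());
  hasher.update(m);
  Scalar::from_hash(hasher)
}
```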
Luke Parker
c6982b5dfc Ensure canonical points in the cross-group DLEq proof 2023-05-30 22:05:52 -04:00
Luke Parker
1aa293cc4a Fix for prior commit 2023-05-27 04:15:57 -04:00
Luke Parker
8a24fc39a6 Only scan v2 Monero TXs 2023-05-27 04:13:40 -04:00
Luke Parker
40b2920412 Remove signed Substrate TXs from Coordinator 2023-05-13 22:43:13 -04:00
Luke Parker
47f8766da6 Use proper messages for ValidatorSets/InInstructions pallet
Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.
2023-05-13 04:40:16 -04:00
Luke Parker
663b5f4b50 Add a context to MuSig key aggregation 2023-05-13 04:04:14 -04:00
Luke Parker
227176e4b8 Correct various no_std definitions 2023-05-13 04:03:56 -04:00
Luke Parker
f069567f12 Use a MuSig signature to publish validator set key pairs to Serai
The processor/coordinator flow still has to be rewritten.
2023-05-13 02:15:41 -04:00
Luke Parker
84c2d73093 Do the minimal amount of work for dkg to compile under no-std
The Substrate runtime requires access to the MuSig key aggregation function.

#279 related.
2023-05-12 23:25:17 -04:00
Luke Parker
4d50b6892c Add a dedicated function to get a MuSig key 2023-05-11 03:21:54 -04:00
Luke Parker
3eade48a6f cargo update
Resolves a yanked crate and removes some duplicated dependencies.
2023-05-10 07:34:07 -04:00
Luke Parker
89974c529a Correct 2/3rds definitions throughout the codebase
The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67.
2023-05-10 06:29:21 -04:00
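The fix, in arithmetic terms, is to multiply before the integer division; a sketch (the helper's actual name and signature are assumptions):

```rust
// Minimum count strictly exceeding 2/3rds of n, using integer math.
//
// Broken: (n / 3) * 2 + 1 — dividing first discards the remainder.
//   n = 20: 20 / 3 = 6, * 2 = 12, + 1 = 13 (65%).
// Fixed:  (n * 2) / 3 + 1 — multiplying first keeps it.
//   n = 20: 20 * 2 = 40, / 3 = 13, + 1 = 14 (70%, >= 67%).
fn two_thirds(n: u64) -> u64 {
  ((n * 2) / 3) + 1
}
```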
Luke Parker
ffea02dfbf Implement MuSig key aggregation into DKG
Isn't spec compliant due to the lack of a spec to be compliant with.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.
2023-05-10 06:25:40 -04:00
Luke Parker
f55e9b40e6 Have coordinator publish batches to Substrate 2023-05-10 01:46:20 -04:00
Luke Parker
a70df6a449 Remove TODO about code de-duplication
It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.
2023-05-10 01:19:01 -04:00
Luke Parker
168f2899f0 Create a vote transaction upon GeneratedKeyPair 2023-05-10 00:46:51 -04:00
Luke Parker
c95bdb6752 Properly get genesis for a Processor message 2023-05-09 23:51:05 -04:00
Luke Parker
88f0e89350 Ensure Tributary commits are minimal 2023-05-09 23:45:05 -04:00
Luke Parker
7b7ddbdd97 Move the coordinator to a n-processor design 2023-05-09 23:44:41 -04:00
Luke Parker
9175383e89 Spawn a new async task for each block message
This probably should be done with n-long lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.
2023-05-09 17:05:33 -04:00
Luke Parker
029b6c53a1 Use U448 for Ed448 instead of U512 2023-05-09 04:12:13 -04:00
Luke Parker
219adc7657 Rename uid to intent 2023-05-08 22:21:41 -04:00
Luke Parker
964fdee175 Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged
Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.
2023-05-08 22:20:51 -04:00
Luke Parker
a7f2740dfb Correct Serai Dockerfile 2023-05-08 02:08:31 -04:00
Luke Parker
0c9c1aeff1 Correct processor's handling of the new Monero RPC code 2023-05-02 03:40:49 -04:00
Luke Parker
adfbde6e24 Support arbitrary RPC providers in monero-serai
Sets a clean path for no-std premised RPCs (buffers to an external RPC impl)/
Tor-based RPCs/client-side load balancing/...
2023-05-02 02:39:08 -04:00
Luke Parker
5765d1d278 Update to May's nightly
Doesn't use the PR due to the needed changes.
2023-05-01 04:58:50 -04:00
Luke Parker
78c00bde3d Correct error message in ff-group-tests 2023-05-01 03:18:11 -04:00
Luke Parker
c0001f5ff2 Update to substrate polkadot-v0.9.42 2023-05-01 03:17:37 -04:00
Luke Parker
6032af6692 Have Coordinator MainDb take a mutable borrow 2023-04-26 00:10:06 -04:00
Luke Parker
7824b6cb8b Document the processor/tributary/coordinator/serai flow 2023-04-25 15:05:58 -04:00
Luke Parker
78d5372fb7 Initial code to handle messages from processors 2023-04-25 03:14:42 -04:00
Luke Parker
cc531d630e Add a UID function to messages
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
2023-04-25 02:46:18 -04:00
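A minimal sketch of the send-side dedup this enables (names hypothetical; the real code would persist sent UIDs rather than hold them in memory):

```rust
use std::collections::HashSet;

// Each message derives a stable intent/UID from its content. Before
// sending, check the UID; if it was already sent, skip the re-send.
struct Sender {
  sent: HashSet<[u8; 32]>,
}

impl Sender {
  fn send(&mut self, uid: [u8; 32], msg: &[u8]) {
    if self.sent.insert(uid) {
      let _ = msg; // actually transmit here
    }
  }
}
```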
Luke Parker
09d96822ca Document a pair of panics requiring 256 GB of RAM/4 GB of a context 2023-04-24 23:49:06 -04:00
Luke Parker
7a8f8c2d3d Document panic in FROST 2023-04-24 23:19:23 -04:00
Luke Parker
e74b4ab94f Add a TributaryReader which doesn't require a borrow to operate
Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary on the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.
2023-04-24 07:02:00 -04:00
Luke Parker
e0820759c0 Tweak tests workflow 2023-04-24 06:16:43 -04:00
Luke Parker
2feebe536e Test handle_p2p and Tributary syncing
Includes bug fixes.
2023-04-24 03:30:19 -04:00
Luke Parker
cc491ee1e1 Don't return from sync_block until the Tendermint machine returns if it's valid or not
We had a race condition where we'd be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.
2023-04-24 02:46:13 -04:00
Luke Parker
14388e746c Implement Tributary syncing
Also adds a forwards-lookup to the Tributary blockchain.
2023-04-24 00:53:18 -04:00
Luke Parker
215155f84b Remove reliance on a blockchain read lock from block/commit 2023-04-23 23:51:10 -04:00
Luke Parker
c476f9b640 Break coordinator main into multiple functions
Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.
2023-04-23 23:15:15 -04:00
Luke Parker
be8c25aef0 Move json word lists to rs
Allows building the seed code without serde_json.
2023-04-23 22:26:05 -04:00
Luke Parker
fb296a9c2e cargo update 2023-04-23 19:21:24 -04:00
Luke Parker
aa0ec4ac41 cargo fmt 2023-04-23 18:56:48 -04:00
Luke Parker
05b1fc5f05 Send a heartbeat message when a Tributary falls behind 2023-04-23 18:55:43 -04:00
Luke Parker
72633d6421 Clarify Arc RwLocks and sleeps in coordinator 2023-04-23 18:29:50 -04:00
Luke Parker
ad5522d854 Start handling P2P messages
This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.
2023-04-23 17:01:30 -04:00
Luke Parker
f2d9d70068 Reload Tributaries
add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.
2023-04-23 04:31:00 -04:00
Luke Parker
2b09309adc Handle adding new Tributaries
Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.
2023-04-23 03:51:26 -04:00
Luke Parker
bf9ec410db Additionally test DKGShares 2023-04-23 02:18:46 -04:00
Luke Parker
e0dc5d29ad Tributary test wait_for_tx_inclusion function 2023-04-23 01:52:19 -04:00
Luke Parker
710e6e5217 Add Transaction::sign.
While I don't love the introduction of empty_signed, it's practically fine.
2023-04-23 01:25:45 -04:00
Luke Parker
3f6565588f Test handling of DKG commitments transactions 2023-04-23 01:00:46 -04:00
Luke Parker
af84b7f707 Add a test for Tributary
Further fleshes out the Tributary testing code.
2023-04-22 22:28:20 -04:00
Luke Parker
8c74576cf0 Add a test to the coordinator for running a Tributary
Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.
2023-04-22 10:49:52 -04:00
Luke Parker
1e448dec21 Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators
transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.
2023-04-22 04:38:47 -04:00
Luke Parker
ef0c901455 Add recent bloat checks added to signer to substrate_signer as well 2023-04-20 15:45:32 -04:00
Luke Parker
09c3c9cc9e Route the SubstrateBlock message, which is the last Tributary transaction type 2023-04-20 15:37:22 -04:00
Luke Parker
a404944b90 Add a SubstrateBlockAck message to the processor
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
2023-04-20 15:26:22 -04:00
Luke Parker
70d866af6a ExternalBlock handler 2023-04-20 14:51:33 -04:00
Luke Parker
f99a91b34d Slash on unrecognized ID 2023-04-20 14:33:19 -04:00
Luke Parker
294ad08e00 Add support for multiple orderings in Provided
Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.
2023-04-20 07:32:40 -04:00
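A minimal sketch of distinct ordering queues (names hypothetical): provided transactions only need ordering relative to others with the same label, so SeraiBlock/ExternalBlock disagreements no longer halt the chain:

```rust
use std::collections::{HashMap, VecDeque};

// One FIFO per ordering label; "serai" and "external" provisions are
// sequenced independently of each other.
struct ProvidedTransactions {
  queues: HashMap<&'static str, VecDeque<Vec<u8>>>,
}

impl ProvidedTransactions {
  fn provide(&mut self, order: &'static str, tx: Vec<u8>) {
    self.queues.entry(order).or_default().push_back(tx);
  }
  fn next(&mut self, order: &'static str) -> Option<Vec<u8>> {
    self.queues.get_mut(order)?.pop_front()
  }
}
```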
Luke Parker
a26ca1a92f Split FinalizedBlock into ExternalBlock and SeraiBlock
Also re-arranges their orders.
2023-04-20 06:59:42 -04:00
Luke Parker
9c2a44f9df Apply DKG TX handling code to all sign TXs
The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.
2023-04-20 06:27:05 -04:00
Luke Parker
8b5eaa8092 Add additional checks to key_gen/sign
There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.
2023-04-20 05:36:48 -04:00
Luke Parker
8041a0d845 Initial Tributary handling 2023-04-20 05:05:17 -04:00
Luke Parker
9e1f3fc85c Make MainDB into SubstrateDB 2023-04-20 05:04:08 -04:00
Luke Parker
ee65e4df8f Resolve #68
Notably speeds up monero-serai's build and CLSAG performance.
2023-04-20 01:18:16 -04:00
Luke Parker
ff2febe5aa Move the entirety of ed448 to Residue, offering a further 2-4x speedup 2023-04-19 04:02:59 -04:00
Luke Parker
334873b6a5 Use crypto-bigint's reduction in ed448
Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.
2023-04-19 03:35:57 -04:00
Luke Parker
21026136bd Save keys by their tweaked group_key
Keys are referred to by their tweaked versions. If a tweak was needed, keys
would fail to confirm.
2023-04-18 14:55:15 -04:00
Luke Parker
396e5322b4 Code a method to determine the activation block before any block has consensus
[0; 32] is a magic for no block has been set yet due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.
2023-04-18 03:04:52 -04:00
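A minimal sketch of the sentinel check (hypothetical signature; the timestamp-based fallback is elided):

```rust
// [0; 32] is the sentinel for "no block set yet": the first key pair has no
// finalized block to activate after, so the processor derives an activation
// block from timestamps instead.
fn activation_block(latest_finalized: [u8; 32]) -> Option<[u8; 32]> {
  if latest_finalized == [0; 32] {
    None // first key pair: fall back to timestamp-based selection
  } else {
    Some(latest_finalized)
  }
}
```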
Luke Parker
9da0eb69c7 Use an enum for Coin/NetworkId
It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.
2023-04-18 02:04:47 -04:00
Luke Parker
6f3b5f4535 Tweak ConfirmKeyPair to alleviate database requirements of coordinator 2023-04-18 01:09:22 -04:00
Luke Parker
e880ebb5a9 Clarify safety of Scanner::block_number and KeyGen::keys 2023-04-18 00:26:19 -04:00
Luke Parker
1036e673ce cargo update to remove usage of yanked crate 2023-04-17 23:59:04 -04:00
Luke Parker
fd1bbec134 Use a single txn for an entire coordinator message
Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.
2023-04-17 23:55:12 -04:00
Luke Parker
7579c71765 Add note to processor_messages 2023-04-17 23:11:44 -04:00
Luke Parker
5a499de4ca Remove BatchSigned
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.
2023-04-17 20:19:15 -04:00
Luke Parker
e26b861d25 Move ConfirmKeyPair from key_gen to substrate
Clarifies the emitter and accordingly why its mutations are justified.
2023-04-17 19:40:17 -04:00
Luke Parker
059e79c98a Add extensive commentary on mutable to the processor's main file
Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.
2023-04-17 19:24:02 -04:00
Luke Parker
92a868e574 Add a processor API to the coordinator 2023-04-17 02:10:33 -04:00
Luke Parker
595cd6d404 Rename transaction file to tributary, add function for genesis 2023-04-17 02:09:29 -04:00
Luke Parker
4d43c04916 Clean up the Substrate block processing code 2023-04-17 00:50:56 -04:00
Luke Parker
2604746586 Fill out code for the rest of the Substrate events 2023-04-16 03:18:52 -04:00
Luke Parker
36cdf6d4bf Have InInstructions track the latest block for a network in storage 2023-04-16 02:57:19 -04:00
Luke Parker
9676584ffe Resolve #245 2023-04-16 01:03:32 -04:00
Luke Parker
79655672ef Make progress on handling NewSet events
Further bones out the coordinator.
2023-04-16 00:51:56 -04:00
Luke Parker
fa2cf03e61 Support extracting timestamps from blocks 2023-04-16 00:31:54 -04:00
Luke Parker
92ad689c7e cargo update
Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.
2023-04-15 23:21:18 -04:00
Luke Parker
b2169a7316 cargo +nightly fmt 2023-04-15 23:09:39 -04:00
Luke Parker
e2571a43aa Correct processor flow to have the coordinator decide signing set/re-attempts
The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.
2023-04-15 23:01:07 -04:00
Luke Parker
e21fc5ff3c Merge AckBlock with Burns
Offers greater efficiency while reducing concerns re: atomicity.
2023-04-15 18:38:40 -04:00
Luke Parker
eafd054296 Start defining the coordinator 2023-04-15 17:38:47 -04:00
Luke Parker
51bf51ae1e Make unsigned private due to unsafe calling potential 2023-04-15 16:37:59 -04:00
Luke Parker
28b6bc99ac Update to the latest subxt
Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.
2023-04-15 05:23:57 -04:00
Luke Parker
ce883104b7 cargo update 2023-04-15 03:27:45 -04:00
Luke Parker
f48022c6eb Add basic getters to tributary 2023-04-15 00:41:48 -04:00
Luke Parker
124b994c23 Add a NewSet event to validator-sets
Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.
2023-04-15 00:40:33 -04:00
Luke Parker
2e2bc59703 Support reloading the mempool from disk 2023-04-14 15:51:56 -04:00
Luke Parker
c032f66f8a must_use annotations on DbTxn 2023-04-14 15:04:26 -04:00
Luke Parker
695d923593 Reloaded provided transactions from the disk
Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.
2023-04-14 15:03:01 -04:00
Luke Parker
63318cb728 Add a DB to Tributary
Adds support for reloading most of the blockchain.
2023-04-14 14:11:40 -04:00
Luke Parker
6f6c9f7cdf Add a dedicated db crate with a basic DB trait
It's needed by the processor and tributary (coordinator).
2023-04-14 11:47:43 -04:00
Luke Parker
04e7863dbd Documentation and cargo update 2023-04-13 21:06:11 -04:00
Luke Parker
a5002c50ec Fix the scheduler from dropping UTXOs when there weren't any payments 2023-04-13 20:59:36 -04:00
Luke Parker
72dd665ebf Add DoS limits to tributary and require provided transactions be ordered 2023-04-13 20:35:55 -04:00
Luke Parker
8b1bce6abd Add correction the last commit missed 2023-04-13 18:47:34 -04:00
Luke Parker
e73a51bfa5 Finish binding Tendermint into Tributary and define a Tributary master object 2023-04-13 18:43:27 -04:00
Luke Parker
5858b6c03e Replace Tendermint step with sync_block
Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.
2023-04-13 18:18:29 -04:00
Luke Parker
9bea368d36 Plan scheduled payments whenever outputs are received
The scheduler prior waited for the next series of payments to be added.
2023-04-13 15:41:56 -04:00
Luke Parker
a509dbfad6 Embed the mempool into the Blockchain 2023-04-13 09:47:14 -04:00
Luke Parker
03a6470a5b Finish binding Tendermint, bar the P2P layer 2023-04-12 18:04:28 -04:00
Luke Parker
997dd611d5 Don't add blocks which aren't valid
Previously, Tendermint needed to be live more than it needed to be correct.
Under the original intention for it, correctness would fail if any coin
desynced, which would cause the node to fail entirely. By accepting a
supermajority's view of state, despite its own, a single coin's failure would
only lead to inability to participate with that single coin.

Now that Tendermint is solely for Tributary, nodes should halt a coin-specific
chain if their view of the chain differs. They are unable to meaningfully
participate regardless.

This also means a supermajority of validators can no longer fake messages from
other validators, allowing the Tributary chain to use uniform weights with much
less impact. There is still enough impact they can't be used (ability to cause
a fork), yet they should allow uniform block production (as that's solely a DoS
concern).

While we prior could've simply additionally checked signatures, add_block's
lack of a failure case would've meant it had to panic. This would've been a DoS
possible by a minority-weight *which affected the entire coordinator* and
therefore *the entire validator for all coins*.
2023-04-12 16:18:42 -04:00
Luke Parker
86cbf6e02e Bind the signature scheme for tendermint-machine 2023-04-12 16:06:14 -04:00
Luke Parker
8c8232516d Only allow designated participants to send transactions 2023-04-12 12:42:23 -04:00
Luke Parker
be947ce152 Add a mempool 2023-04-12 12:15:38 -04:00
Luke Parker
7c7f17aac6 Test the blockchain 2023-04-12 11:13:48 -04:00
Luke Parker
ff5c240fcc Fix a bug in the merkle algorithm 2023-04-12 10:52:28 -04:00
Luke Parker
d5a12a9b97 Make TransactionKind have a reference to Signed
Broken commit due to partial staging of one file.
2023-04-12 09:38:20 -04:00
Luke Parker
354ac856a5 Extensively test transactions 2023-04-12 08:51:40 -04:00
Luke Parker
402a7be966 Block constructor and tests 2023-04-11 20:24:27 -04:00
Luke Parker
119d25be49 Clarify transaction length sizing 2023-04-11 19:18:26 -04:00
Luke Parker
2cfee536f6 Define all coordinator transaction types 2023-04-11 19:04:53 -04:00
Luke Parker
90f67b5e54 Slight merkle improvements 2023-04-11 19:04:27 -04:00
Luke Parker
4d17b922fe Sign the genesis when signing transactions
Prevents replaying across tributaries, which is a risk for BTC/ETH (regarding key gen).
2023-04-11 19:03:52 -04:00
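A minimal sketch of the binding, assuming SHA-256 purely for illustration:

```rust
use sha2::{Digest, Sha256};

// Including the tributary's genesis hash in the signed message means a
// signature produced for one tributary verifies on no other, preventing
// cross-tributary replay of e.g. key-gen transactions.
fn sig_hash(genesis: [u8; 32], tx: &[u8]) -> [u8; 32] {
  let mut hasher = Sha256::new();
  hasher.update(genesis);
  hasher.update(tx);
  hasher.finalize().into()
}
```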
Luke Parker
7488d23e0d Add basic transaction/block code to Tributary 2023-04-11 13:42:18 -04:00
Luke Parker
a290b74805 Tweak processor's slice handling due to a CI failure
The prior code worked without issue for me locally, but apparently it didn't
always.
2023-04-11 10:37:50 -04:00
Luke Parker
61757d5e19 Remove the substrate feature from tendermint 2023-04-11 10:34:41 -04:00
Luke Parker
09f8ac37c4 Create a folder for tributary, the micro-blockchain
Moves tendermint again, this time under tributary.
2023-04-11 10:18:31 -04:00
Luke Parker
c46cf47736 Move tendermint under the coordinator
We're planning to use it in the micro-blockchain the coordinator will run.
2023-04-11 09:28:32 -04:00
Luke Parker
defce32ff1 Remove k256/p256 git revision patch
New releases of k256 and p256 make it no longer necessary.
2023-04-11 09:23:57 -04:00
Luke Parker
de52c4db7f Add empty coordinator 2023-04-11 09:21:35 -04:00
Luke Parker
d74cbe2cce Have the Scanner assign batch IDs 2023-04-11 08:47:15 -04:00
Luke Parker
caa695511b Improve log statements in processor 2023-04-11 06:06:17 -04:00
Luke Parker
7538c10159 Update processor README 2023-04-11 05:53:19 -04:00
Luke Parker
90f2b03595 Finish routing eventualities
Also corrects some misc TODOs and tidies up some log statements.
2023-04-11 05:49:27 -04:00
Luke Parker
9e78c8fc9e Test the processor's Substrate signer 2023-04-10 12:48:48 -04:00
Luke Parker
d323fc8b7b Handle signing batches in the processor
Duplicates the existing signer for one tailored to batch signing.
2023-04-10 11:11:46 -04:00
Luke Parker
82c34dcc76 Implement a FROST variant of Schnorrkel (#274)
* Minor lint

* Update frost-schnorrkel to the latest modular-frost

* Tidy up the schnorrkel library
2023-04-10 06:05:17 -04:00
Luke Parker
bc19975a8a Update Bitcoin confirmations from 3 to 6
While Bitcoin practically doesn't have long re-orgs, it is possible for a
single miner to build a long chain. Recently, a miner found 5 blocks in a row,
which would be enough to re-org a transaction Serai considered finalized.
2023-04-10 02:51:44 -04:00
Luke Parker
b9f38fb354 Update processor message flow around the new SignedBatch flow 2023-04-10 02:51:36 -04:00
Luke Parker
ccec529cee cargo update
Removes yanked crate.
2023-04-10 00:27:17 -04:00
Luke Parker
1c31ca7187 Correct deny.toml 2023-04-09 02:34:31 -04:00
Luke Parker
f6206b60ec Update to bitcoin 0.30
Also performs a general update with a variety of upgraded Substrate depends.
2023-04-09 02:31:13 -04:00
Luke Parker
96525330c2 cargo update 2023-04-08 04:44:28 -04:00
Luke Parker
7abc8f19cd Move substrate/serai/* to substrate/* 2023-04-08 03:01:14 -04:00
GitHub Actions
bd06b95c05 Update nightly 2023-04-01 05:44:42 -04:00
Luke Parker
648d237df5 Finish updating to the latest Rust/handle broken cargo update 2023-04-01 05:44:18 -04:00
Luke Parker
3f4bab7f7b cargo update
Resolves yanked crate.
2023-03-31 23:15:32 -04:00
Luke Parker
426346dd5a Have the processor DKG output a Ristretto key
This will be used to sign InInstructions.
2023-03-31 10:15:07 -04:00
Luke Parker
a4f64e2651 Expand work done in provide_batch 2023-03-31 08:13:45 -04:00
Luke Parker
6fa405a728 Update Monero README 2023-03-31 07:02:57 -04:00
Luke Parker
ae4e98c052 Verify Batch signatures
Starts further fleshing out the Serai client tests with common utils.
2023-03-31 06:34:09 -04:00
Luke Parker
30b8636641 Update to the latest Substrate commit
Enables building with only the stable toolchain. The nightly toolchain is still
used for clippy in order to access additional checks.
2023-03-31 02:34:52 -04:00
Luke Parker
1610383649 Test validator set's voting on a key
Needed for the in-instructions pallet to verify in-instructions are
appropriately signed and continue developing that.

Fixes a bug in the validator-sets pallet, moves several items from the pallet
to primitives.
2023-03-30 20:32:05 -04:00
Luke Parker
9615caf3bb Move validator-sets from Key to (RistrettoPublic, Key)
Part of #241.
2023-03-30 20:32:05 -04:00
Luke Parker
8a70416fd0 Attempt to fix the Bitcoin CI's cache statement 2023-03-28 07:33:11 -04:00
Luke Parker
4f28a38ce1 Implement #212 2023-03-28 05:34:21 -04:00
Luke Parker
47be373eb0 Resolve #268 by adding a Zeroize to DigestTranscript which writes a full block
This is a 'better-than-nothing' attempt to invalidate its state.

Also replaces black_box features with usage of the rustversion crate.
2023-03-28 04:43:10 -04:00
Luke Parker
79aff5d4c8 ff 0.13 (#269)
* Partial move to ff 0.13

It turns out the newly released k256 0.12 isn't on ff 0.13, preventing further
work at this time.

* Update all crates to work on ff 0.13

The provided curves still need to be expanded to fit the new API.

* Finish adding dalek-ff-group ff 0.13 constants

* Correct FieldElement::product definition

Also stops exporting macros.

* Test most new parts of ff 0.13

* Additionally test ff-group-tests with BLS12-381 and the pasta curves

We only tested curves from RustCrypto. Now we test a curve offered by zk-crypto,
the group behind ff/group, and the pasta curves, which is by Zcash (though
Zcash developers are also behind zk-crypto).

* Finish Ed448

Fully specifies all constants, passes all tests in ff-group-tests, and finishes moving to ff-0.13.

* Add RustCrypto/elliptic-curves to allowed git repos

Needed due to k256/p256 incorrectly defining product.

* Finish writing ff 0.13 tests

* Add additional comments to dalek

* Further comments

* Update ethereum-serai to ff 0.13
2023-03-28 04:38:01 -04:00
Luke Parker
a9f6300e86 Remove unused dependencies from runtime/node 2023-03-26 23:10:16 -04:00
Luke Parker
ff70cbb223 Remove the genesis volume added for Tendermint 2023-03-26 20:20:32 -04:00
Luke Parker
17818c2a02 Default to the wasm executor
https://github.com/paritytech/substrate/issues/10579 has the rationale for this.
2023-03-26 18:57:49 -04:00
Luke Parker
aea6ac104f Remove Tendermint for GRANDPA
Updates to polkadot-v0.9.40, with a variety of dependency updates accordingly.
Substrate thankfully now uses k256 0.13, paving the way for #256. We couldn't
upgrade to polkadot-v0.9.40 without this due to polkadot-v0.9.40 having
fundamental changes to syncing. While we could've updated tendermint, it's not
worth the continued development effort given its inability to work with
multiple validator sets.

Purges sc-tendermint. Keeps tendermint-machine for #163.

Closes #137, #148, #157, #171. #96 and #99 should be re-scoped/clarified. #134
and #159 also should be clarified. #169 is also no longer a priority since
we're only considering temporal deployments of tendermint. #170 also isn't
since we're looking at effectively sharded validator sets, so there should
be no singular large set needing high performance.
2023-03-26 16:49:18 -04:00
Luke Parker
534e1bb11d Fix Monero's Extra::fee_weight and handling of data limits 2023-03-26 03:43:51 -04:00
Luke Parker
c182b804bc Move in instructions from inherent transactions to unsigned transactions
The original intent was to use inherent transactions to prevent needing to vote
on-chain, which would spam the chain with worthless votes. Inherent
transactions, and our Tendermint library, would use the BFT process's voting
to also vote on all included transactions. This perfectly collapses integrity
voting, creating *no additional on-chain costs*.

Unfortunately, this led to issues such as #6, along with questions of validator
scalability when all validators are expected to participate in consensus (in
order to vote on if the included instructions are valid). This has been
summarized in #241.

With this change, we can remove Tendermint from Substrate. This greatly
decreases our complexity. While I'm unhappy with the amount of time spent on
it, just to reach this conclusion, thankfully tendermint-machine itself is
still usable for #163. This also has reached a tipping point recently as the
polkadot-v0.9.40 branch of substrate changed how syncing works, requiring
further changes to sc-tendermint. These have no value if we're just going to
get rid of it later, due to fundamental design issues, yet I would like to
keep Substrate updated.

This should be followed by moving back to GRANDPA, enabling closing most open
Tendermint issues.

Please note the current in-instructions-pallet does not actually verify the
included signature yet. It's marked TODO, despite this being critical.
2023-03-26 02:58:04 -04:00
Luke Parker
9157f8d0a0 Update processor/correct prior commit 2023-03-25 04:06:25 -04:00
Luke Parker
839734354a Update Getting Started 2023-03-25 01:44:07 -04:00
Luke Parker
d954e67238 Ensure InInstruction data is properly limited
Bitcoin didn't check, assuming data was <= 80 bytes thanks to being in
OP_RETURN. An additional global check has been added.
2023-03-25 01:36:28 -04:00
Luke Parker
6a981dae6e Make Validator Set Network a first-class property
There already should only be one validator set operating per network. This
formalizes that. Then, validator sets used to be able to operate over multiple
networks. That is no longer possible.

This formalization increases validator set flexibility while also allowing the
ability to formalize the definition of tokens (which is necessary to define a
gas asset).
2023-03-25 01:30:53 -04:00
Luke Parker
397d79040c Update monero-serai to limit the size of TX extra 2023-03-25 01:26:42 -04:00
Luke Parker
293731f739 cargo update 2023-03-25 00:45:33 -04:00
Luke Parker
8447021ba1 Add a way to check if blocks completed eventualities 2023-03-22 22:45:41 -04:00
Luke Parker
11a0803ea5 Make the bitcoin Algorithm test a unit test 2023-03-21 18:50:23 -04:00
Luke Parker
d58a7b0ebf cargo fmt 2023-03-20 20:43:52 -04:00
Luke Parker
952cf280c2 Bump crate versions 2023-03-20 20:34:41 -04:00
Luke Parker
8d4d630e0f Fully document crypto/ 2023-03-20 20:10:00 -04:00
Luke Parker
e1bb2c191b Correct audit file upload
Git had replaced the 'line endings'.
2023-03-20 17:35:45 -04:00
Luke Parker
df2bb79a53 Clarify further changes have not been audited 2023-03-20 16:24:04 -04:00
Luke Parker
515587406f Finish testing bitcoin-serai 2023-03-20 05:47:07 -04:00
Luke Parker
7fc8630d39 Test bitcoin-serai
Also resolves a few rough edges.
2023-03-20 04:46:27 -04:00
Luke Parker
6a2a353b91 cargo fmt 2023-03-20 01:12:09 -04:00
Luke Parker
66eaf6ab61 Remove the code for the CI to spawn a Serai node
The serai-client test runner controls the node on its end.

Also bumps the Monero version.
2023-03-20 01:07:43 -04:00
Luke Parker
597122b2e0 Add a Scanner to bitcoin-serai
Moves the processor to it. This ends up as a net-neutral LoC change to the
processor, unfortunately, yet this makes bitcoin-serai safer/easier to use, and
increases the processor's usage of bitcoin-serai.

Also re-organizes bitcoin-serai a bit.
2023-03-20 01:03:39 -04:00
Luke Parker
0aa6b561b7 Bitcoin SpendableOutput::new 2023-03-19 23:22:56 -04:00
Luke Parker
60ca3a9599 Have SeraiError::RpcError include the subxt Error
Part of debugging #262.
2023-03-19 22:02:30 -04:00
Luke Parker
59891594aa Fix processor's determination of protocol to support integration tests
I'm really unhappy with a cfg(test) within the codebase. The double checking of
it makes it tolerable though, especially when compared to dropping these tests.
2023-03-19 21:05:13 -04:00
Luke Parker
2fdf8f8285 Remove unused import 2023-03-17 23:59:46 -04:00
Luke Parker
55e0253225 Again tweak timeouts for #260 2023-03-17 23:45:46 -04:00
Luke Parker
918cce3494 Add a proper error to Bitcoin's SignableTransaction::new
Also adds documentation to various parts of bitcoin.
2023-03-17 23:43:32 -04:00
Luke Parker
6ac570365f Fix #260
The issue was the 10s timeouts were too fast for the CI runner, since the
Scanner only polls every five seconds (already cutting into the window).
2023-03-17 21:38:09 -04:00
Luke Parker
0525ba2f62 Document Bitcoin RPC and make it more robust 2023-03-17 21:25:38 -04:00
Luke Parker
9b47ad56bb Create a dedicated Algorithm for Bitcoin Schnorr 2023-03-17 20:47:42 -04:00
Luke Parker
5e771b1bea Run bitcoin as daemon 2023-03-17 15:32:05 -04:00
Luke Parker
9952c67d98 Update crypto-bigint to 0.5 2023-03-17 15:31:04 -04:00
Luke Parker
f2218b4d4e Attempt to fix Bitcoin node CI 2023-03-17 13:37:18 -04:00
Luke Parker
780b79c3d8 Properly run processor Monero tests
Since it wasn't being compiled with the Monero feature, it wasn't running the
Monero tests.
2023-03-17 13:33:50 -04:00
Luke Parker
ba82dac18c Processor (#259)
* Initial work on a message box

* Finish message-box (untested)

* Expand documentation

* Embed the recipient in the signature challenge

Prevents a message from A -> B from being read as from A -> C.

* Update documentation by bifurcating sender/receiver

* Panic on receiving an invalid signature

If we've received an invalid signature in an authenticated system, a 
service is malicious, critically faulty (equivalent to malicious), or 
the message layer has been compromised (or is otherwise critically 
faulty).

Please note a receiver who handles a message they shouldn't will trigger 
this. That falls under being critically faulty.

* Documentation and helper methods

SecureMessage::new and SecureMessage::serialize.

Secure Debug for MessageBox.

* Have SecureMessage not be serialized by default

Allows passing around in-memory, if desired, and moves the error from 
decrypt to new (which performs deserialization).

Decrypt no longer has an error since it panics if given an invalid 
signature, due to this being intranet code.

* Explain and improve nonce handling

Includes a missing zeroize call.

* Rebase to latest develop

Updates to transcript 0.2.0.

* Add a test for the MessageBox

* Export PrivateKey and PublicKey

* Also test serialization

* Add a key_gen binary to message_box

* Have SecureMessage support Serde

* Add encrypt_to_bytes and decrypt_from_bytes

* Support String ser via base64

* Rename encrypt/decrypt to encrypt_bytes/decrypt_to_bytes

* Directly operate with values supporting Borsh

* Use bincode instead of Borsh

By staying inside of serde, we'll support many more structs. While 
bincode isn't canonical, we don't need canonicity on an authenticated, 
internal system.

* Turn PrivateKey, PublicKey into structs

Uses Zeroizing for the PrivateKey per #150.

* from_string functions intended for loading from an env

* Use &str for PublicKey from_string (now from_str)

The PrivateKey takes the String to take ownership of its memory and 
zeroize it. That isn't needed with PublicKeys.

* Finish updating from develop

* Resolve warning

* Use ZeroizingAlloc on the key_gen binary

* Move message-box from crypto/ to common/

* Move key serialization functions to ser

* add/remove functions in MessageBox

* Implement Hash on dalek_ff_group Points

* Make MessageBox generic to its key

Exposes a &'static str variant for internal use and a RistrettoPoint 
variant for external use.

* Add Private to_string as deprecated

Stub before more competent tooling is deployed.

* Private to_public

* Test both Internal and External MessageBox, only use PublicKey in the pub API

* Remove panics on invalid signatures

Leftover from when this was solely internal which is now unsafe.

* Chicken scratch a Scanner task

* Add a write function to the DKG library

Enables writing directly to a file.

Also modifies serialize to return Zeroizing<Vec<u8>> instead of just Vec<u8>.

* Make dkg::encryption pub

* Remove encryption from MessageBox

* Use a 64-bit block number in Substrate

We use a 64-bit block number in general since u32 only works for 120 years
(with a 1 second block time). As some chains even push the 1 second threshold,
especially ones based on DAG consensus, this becomes potentially as low as 60
years.

While that should still be plenty, it's not worth wondering/debating. Since
Serai uses 64-bit block numbers elsewhere, this ensures consistency.

* Misc crypto lints

* Get the scanner scratch to compile

* Initial scanner test

* First few lines of scheduler

* Further work on scheduler, solidify API

* Define Scheduler TX format

* Branch creation algorithm

* Document when the branch algorithm isn't perfect

* Only scan confirmed blocks

* Document Coin

* Remove Canonical/ChainNumber from processor

The processor should be abstracted from canonical numbers thanks to the
coordinator, making this unnecessary.

* Add README documenting processor flow

* Use Zeroize on substrate primitives

* Define messages from/to the processor

* Correct over-specified versioning

* Correct build re: in_instructions::primitives

* Debug/some serde in crypto/

* Use a struct for ValidatorSetInstance

* Add a processor key_gen task

Redos DB handling code.

* Replace trait + impl with wrapper struct

* Add a key confirmation flow to the key gen task

* Document concerns on key_gen

* Start on a signer task

* Add Send to FROST traits

* Move processor lib.rs to main.rs

Adds a dummy main to reduce clippy dead_code warnings.

* Further flesh out main.rs

* Move the DB trait to AsRef<[u8]>

* Signer task

* Remove a panic in bitcoin when there's insufficient funds

Unchecked underflow.

* Have Monero's mine_block mine one block, not 10

It was initially a nicety to deal with the 10 block lock. C::CONFIRMATIONS
should be used for that instead.

* Test signer

* Replace channel expects with log statements

The expects weren't problematic and had nicer code. They just clutter test
output.

* Remove the old wallet file

It predates the coordinator design and shouldn't be used.

* Rename tests/scan.rs to tests/scanner.rs

* Add a wallet test

Complements the recently removed wallet file by adding a test for the scanner,
scheduler, and signer together.

* Work on a run function

Triggers a clippy ICE.

* Resolve clippy ICE

The issue was the non-fully specified lambda in signer.

* Add KeyGenEvent and KeyGenOrder

Needed so we get KeyConfirmed messages from the key gen task.

While we could've read the CoordinatorMessage to see that, routing through the
key gen tasks ensures we only handle it once it's been successfully saved to
disk.

* Expand scanner test

* Clarify processor documentation

* Have the Scanner load keys on boot/save outputs to disk

* Use Vec<u8> for Block ID

Much more flexible.

* Panic if we see the same output multiple times

* Have the Scanner DB mark itself as corrupt when doing a multi-put

This REALLY should be a TX. Since we don't have a TX API right now, this at
least offers detection.

* Have DST'd DB keys accept AsRef<[u8]>

* Restore polling all signers

Writes a custom future to do so.

Also loads signers on boot using what the scanner claims are active keys.

* Schedule OutInstructions

Adds a data field to Payment.

Also cleans some dead code.

* Panic if we create an invalid transaction

Saves the TX once it's successfully signed so if we do panic, we have a copy.

* Route coordinator messages to their respective signer

Requires adding key to the SignId.

* Send SignTransaction orders for all plans

* Add a timer to retry sign_plans when prepare_send fails

* Minor fmt'ing

* Basic Fee API

* Move the change key into Plan

* Properly route activation_number

* Remove ScannerEvent::Block

It's not used under current designs

* Nicen logs

* Add utilities to get a block's number

* Have main issue AckBlock

Also has a few misc lints.

* Parse instructions out of outputs

* Tweak TODOs and remove an unwrap

* Update Bitcoin max input/output quantity

* Only read one piece of data from Monero

Due to output randomization, it's infeasible.

* Embed plan IDs into the TXs they create

We need to stop attempting signing if we've already signed a protocol. Ideally,
any one of the participating signers should be able to provide a proof the TX
was successfully signed. We can't just run a second signing protocol though as
a single malicious signer could complete the TX signature, and publish it,
yet not complete the secondary signature.

The TX itself has to be sufficient to show that the TX matches the plan. This
is done by embedding the ID, so matching addresses/amounts plans are
distinguished, and by allowing verification a TX actually matches a set of
addresses/amounts.

For Monero, this will need augmenting with the ephemeral keys (or usage of a
static seed for them).

* Don't use OP_RETURN to encode the plan ID on Bitcoin

We can use the inputs to distinguish identical-output plans without issue.

* Update OP_RETURN data access

It's not required to be the last output.

* Add Eventualities to Monero

An Eventuality is an effective equivalent to a SignableTransaction. That is
declared not by the inputs it spends, yet the outputs it creates.
Eventualities are also bound to a 32-byte RNG seed, enabling usage of a
hash-based identifier in a SignableTransaction, allowing multiple
SignableTransactions with the same output set to have different Eventualities.

In order to prevent triggering the burning bug, the RNG seed is hashed with
the planned-to-be-used inputs' output keys. While this does bind to them, it's
only loosely bound. The TX actually created may use different inputs entirely
if a forgery is crafted (which requires no brute forcing).

Binding to the key images would provide a strong binding, yet would require
knowing the key images, which requires active communication with the spend
key.

The purpose of this is so a multisig can identify if a Transaction the entire
group planned has been executed by a subset of the group or not. Once a plan
is created, it can have an Eventuality made. The Eventuality's extra is able
to be inserted into a HashMap, so all new on-chain transactions can be
trivially checked as potential candidates. Once a potential candidate is found,
a check involving ECC ops can be performed. (A sketch of this lookup follows
this PR body.)

While this is arguably a DoS vector, the underlying Monero blockchain would
need to be spammed with transactions to trigger it. Accordingly, it becomes
a Monero blockchain DoS vector, when this code is written on the premise
of the Monero blockchain functioning. Accordingly, it is considered handled.

If a forgery does match, it must have created the exact same outputs the
multisig would've. Accordingly, it's argued the multisig shouldn't mind.

This entire suite of code is only necessary due to the lack of outgoing
view keys, yet it's able to avoid an interactive protocol to communicate
key images on every single received output.

While this could be locked to the multisig feature, there's no practical
benefit to doing so.

* Add support for encoding Monero address to instructions

* Move Serai's Monero address encoding into serai-client

serai-client is meant to be a single library enabling using Serai. While it was
originally written as an RPC client for Serai, apps actually using Serai will
primarily be sending transactions on connected networks. Sending those
transactions require proper {In, Out}Instructions, including proper address
encoding.

Not only has address encoding been moved, yet the subxt client is now behind
a feature. coin integrations have their own features, which are on by default.
primitives are always exposed.

* Reorganize file layout a bit, add feature flags to processor

* Tidy up ETH Dockerfile

* Add Bitcoin address encoding

* Move Bitcoin::Address to serai-client's

* Comment where tweaking needs to happen

* Add an API to check if a plan was completed in a specific TX

This allows any participating signer to submit the TX ID to prevent further
signing attempts.

Also performs some API cleanup.

* Minimize FROST dependencies

* Use a seeded RNG for key gen

* Tweak keys from Key gen

* Test proper usage of Branch/Change addresses

Adds a more descriptive error to an error case in decoys, and pads Monero
payments as needed.

* Also test spending the change output

* Add queued_plans to the Scheduler

queued_plans is for payments to be issued when an amount appears, yet the
amount is currently pre-fee. Once the output is actually created, the
Scheduler should be notified of the amount it was created with, moving from
queued_plans to plans under the actual amount.

Also tightens debug_asserts to asserts for invariants which are at risk of
being exclusive to prod.

* Add missing tweak_keys call

* Correct decoy selection height handling

* Add a few log statements to the scheduler

* Simplify test's get_block_number

* Simplify, while making more robust, branch address handling in Scheduler

* Have fees deducted from payments

Corrects Monero's handling of fees when there's no change address.

Adds a DUST variable, as needed due to 1_00_000_000 not being enough to pay
its fee on Monero.

* Add comment to Monero

* Consolidate BTC/XMR prepare_send code

These aren't fully consolidated. We'd need a SignableTransaction trait for
that. This is a lot cleaner though.

* Ban integrated addresses

The reasoning why is accordingly documented.

* Tidy TODOs/dust handling

* Update README TODO

* Use a deterministic protocol version in Monero

* Test rebuilt KeyGen machines function as expected

* Use a more robust KeyGen entropy system

* Add DB TXNs

Also load entropy from env

* Add a loop for processing messages from substrate

Allows detecting if we're behind, and if so, waiting to handle the message

* Set Monero MAX_INPUTS properly

The previous number was based on an old hard fork. With the ring size having
increased, transactions have since got larger.

* Distinguish TODOs into TODO and TODO2s

TODO2s are for after protonet

* Zeroize secret share repr in ThresholdCore write

* Work on Eventualities

Adds serialization and stops signing when an eventuality is proven.

* Use a more robust DB key schema

* Update to {k, p}256 0.12

* cargo +nightly clippy

* cargo update

* Slight message-box tweaks

* Update to recent Monero merge

* Add a Coordinator trait for communication with coordinator

* Remove KeyGenHandle for just KeyGen

While KeyGen previously accepted instructions over a channel, this breaks the
ack flow needed for coordinator communication. Now, KeyGen is the direct object
with a handle() function for messages.

Thankfully, this ended up being rather trivial for KeyGen as it has no
background tasks.

* Add a handle function to Signer

Enables determining when it's finished handling a CoordinatorMessage and
therefore creating an acknowledgement.

* Save transactions used to complete eventualities

* Use a more intelligent sleep in the signer

* Emit SignedTransaction with the first ID *we can still get from our node*

* Move Substrate message handling into the new coordinator recv loop

* Add handle function to Scanner

* Remove the plans timer

Enables ensuring the ordering on the handling of plans.

* Remove the outputs function which panicked if a precondition wasn't met

The new API only returns outputs upon satisfaction of the precondition.

* Convert SignerOrder::SignTransaction to a function

* Remove the key_gen object from sign_plans

* Refactor out get_fee/prepare_send into dedicated functions

* Save plans being signed to the DB

* Reload transactions being signed on boot

* Stop reloading TXs being signed (and report it to peers)

* Remove message-box from the processor branch

We don't use it here yet.

* cargo +nightly fmt

* Move back common/zalloc

* Update subxt to 0.27

* Zeroize ^1.5, not 1

* Update GitHub workflow

* Remove usage of SignId in completed
2023-03-16 22:59:40 -04:00
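A hedged sketch of the Eventuality candidate lookup described in this PR body (types are stand-ins; the ECC verification itself is elided):

```rust
use std::collections::HashMap;

struct Eventuality; // stand-in: expected output set + the bound RNG seed

// Expected transactions indexed by the extra they'd create. New on-chain
// transactions get a cheap map lookup first; only hits pay for the ECC
// check confirming the outputs match the plan.
struct EventualitiesTracker {
  expected: HashMap<Vec<u8>, Eventuality>,
}

impl EventualitiesTracker {
  fn candidate(&self, tx_extra: &[u8]) -> Option<&Eventuality> {
    self.expected.get(tx_extra)
  }
}
```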
Luke Parker
f374cd7398 Update to ethers 2 2023-03-16 20:16:57 -04:00
Luke Parker
ab1e5c372e Don't use a relative link to link to the audit 2023-03-16 19:49:36 -04:00
Luke Parker
67da08705e cargo update 2023-03-16 19:37:32 -04:00
Luke Parker
0d4b66dc2a Bump package versions 2023-03-16 19:29:22 -04:00
Luke Parker
4ed819fc7d Document crypto crates with audit notices 2023-03-16 19:25:01 -04:00
Luke Parker
74924095e1 Add Cypher Stack's audit of /crypto
The current /crypto folder, as of this commit, is identical, except for the
years in the copyright statements. GitHub's CI also passed for the previous
commit, ensuring the repo's integrity during that commit. This now establishes
trust for /crypto, which will be used to update documentation and create
releases.
2023-03-16 18:38:31 -04:00
Luke Parker
d2c1592c61 Resolve merging crypto-{audit, tweaks} and use the proper transcript in Bitcoin 2023-03-16 16:59:20 -04:00
Luke Parker
64924835ad Merge pull request #254 from serai-dex/nightly-2023-03
March 2023 - Rust Nightly Update
2023-03-16 16:45:19 -04:00
Luke Parker
37e4f2cc50 Merge pull request #255 from serai-dex/crypto-tweaks
Crypto audit/tweaks
2023-03-16 16:43:27 -04:00
Luke Parker
caf37527eb Merge branch 'develop' into crypto-tweaks 2023-03-16 16:43:04 -04:00
Luke Parker
669d2dbffc 3.10.2 Explicitly test RecommendedTranscript 2023-03-15 19:55:07 -04:00
Luke Parker
f4e2da2767 Move where we check the Monero node's protocol
The genesis block has a version of 1, so immediately checking (before new
blocks are added) will cause failures.
2023-03-14 01:54:08 -04:00
Luke Parker
48078d0b4b Remove Protocol::Unsupported 2023-03-13 08:03:13 -04:00
Luke Parker
0e0243639e Resolve clippy error
This was resolved on the processor branch yet not on develop.
2023-03-13 07:57:45 -04:00
Luke Parker
14203bbb46 Use an async Mutex for the Monero distribution
Enables safe async/thread-safe usage.
2023-03-12 04:13:43 -04:00
Luke Parker
f5fa6f020d Remove note about adding in a DB handle
It'd arguably be safer yet it isn't worth the API complexity.
2023-03-12 03:55:17 -04:00
Luke Parker
41a285ddfa Add a TX size check to Monero
This isn't perfect yet should ensure the eventual TX is less than 100k bytes.
2023-03-12 03:54:30 -04:00
Luke Parker
36034c2f72 Move ecdh derivation up to prevent Scalar::one() * ecdh 2023-03-11 10:51:40 -05:00
Luke Parker
5e62072a0f Fix #237 2023-03-11 10:31:58 -05:00
Luke Parker
e56495d624 Prefix arbitrary data with 127
Since we cannot expect/guarantee a payment ID will be included, the previous
position-based code for determining arbitrary data wasn't sufficient.
2023-03-11 05:47:25 -05:00
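
A sketch of the tagging scheme this describes (constant and function names
hypothetical): arbitrary data is identified by its leading byte rather than
its position.

const ARBITRARY_DATA_MARKER: u8 = 127;

fn encode_arbitrary_data(data: &[u8]) -> Vec<u8> {
  // Prefix the marker so scanners can identify arbitrary data by content.
  let mut res = Vec::with_capacity(1 + data.len());
  res.push(ARBITRARY_DATA_MARKER);
  res.extend_from_slice(data);
  res
}

fn decode_arbitrary_data(field: &[u8]) -> Option<&[u8]> {
  // Only fields starting with the marker are treated as arbitrary data.
  if field.first() == Some(&ARBITRARY_DATA_MARKER) {
    Some(&field[1 ..])
  } else {
    None
  }
}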
Luke Parker
71dbc798b5 Fix #251 2023-03-11 05:23:38 -05:00
Luke Parker
4335baa43f cargo update 2023-03-11 04:49:05 -05:00
akildemir
77de28f77a add monero seed support (#252)
* add monero seed support

* fix some of the pr comments

* remove languages module and unnecessary error returns

* Clean classic seed impl

Fixes a few issues regarding Zeroize usage/API safety. Mainly a cleanup.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-03-10 14:16:00 -05:00
Luke Parker
ad470bc969 #242 Expand usage of black_box/zeroize
This commit greatly expands the usage of black_box/zeroize on bits, as it
originally should have. It is likely overkill, leading to less efficient
code generation, yet does its best to be comprehensive where comprehensiveness
is extremely annoying to achieve.

In the future, this usage of black_box may be desirable to move to its own
crate.

Credit to @AaronFeickert for identifying the original commit was incomplete.
2023-03-10 06:27:44 -05:00
Luke Parker
62dfc63532 Fix Ethereum, again 2023-03-07 06:25:21 -05:00
Luke Parker
1e201562df Correct doc comments re: HTML tags 2023-03-07 05:34:29 -05:00
Luke Parker
11114dcb74 Further fix the clippy lint controls for Hash on dalek_ff_group::*Point 2023-03-07 05:31:02 -05:00
Luke Parker
837c776297 Make Schnorr modular to its transcript 2023-03-07 05:30:21 -05:00
Luke Parker
6bff3866ea Correct Ethereum 2023-03-07 05:25:25 -05:00
Luke Parker
b0730e3fdf Fix last commit again 2023-03-07 04:47:06 -05:00
Luke Parker
2e78d61752 Fix last commit 2023-03-07 04:39:15 -05:00
Luke Parker
0b8a4ab3d0 Use a backwards compatible clippy lint for impl Hash 2023-03-07 04:26:19 -05:00
Luke Parker
c358090f16 Use black_box to help obscure the dalek-ff-group bool -> Choice conversion
I have no idea if this will actually help, yet it can't hurt.

Feature gated due to MSRV requirements.

Fixes #242.
2023-03-07 04:23:41 -05:00
Luke Parker
adb5f34fda Merge branch 'crypto-audit' into crypto-tweaks 2023-03-07 04:08:34 -05:00
Luke Parker
ed056cceaf 3.5.2 Test non-canonical from_repr
Unfortunately, G::from_bytes doesn't require canonicity so that still can't
be properly tested for. While we could try to detect SEC1, and write tests
on that, there's not a suitably stable/wide enough solution to be worth it.
2023-03-07 04:05:56 -05:00
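
A minimal sketch of such a test, assuming an ff::PrimeField with a 32-byte
repr (for which the all-ones repr always exceeds the modulus):

use ff::PrimeField;

fn test_noncanonical_from_repr<F: PrimeField<Repr = [u8; 32]>>() {
  // 2^256 - 1 is at least the modulus for any prime field with a 32-byte
  // repr, so a canonical from_repr must reject the all-ones encoding.
  assert!(bool::from(F::from_repr([0xff; 32]).is_none()));
}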
Luke Parker
2bad06e5d9 Fix #200 2023-03-07 03:55:58 -05:00
Luke Parker
5a9a42f025 Use variable time for verifying PoKs in the DKG 2023-03-07 03:48:16 -05:00
Luke Parker
7d12c785b7 Correct error comment in ff-group-tests 2023-03-07 03:46:55 -05:00
Luke Parker
e08adcc1ac Have Ciphersuite re-export Group 2023-03-07 03:46:16 -05:00
Luke Parker
af5702fccd Make encryption public
It's necessary in order to read encryption messages over the network.
2023-03-07 03:37:30 -05:00
Luke Parker
5037962d3c Rename dkg serialize/deserialize to write/read 2023-03-07 03:37:25 -05:00
Luke Parker
5b26115f81 Add Debug implementations to dkg 2023-03-07 03:26:39 -05:00
Luke Parker
1a99629a4a Add feature-gated serde support for Participant/ThresholdParams
These don't have secret data yet sometimes have value to be communicated.
2023-03-07 03:13:55 -05:00
Luke Parker
b1ea2dfba6 Add support for hashing (as in HashMap) dalek points 2023-03-07 03:10:55 -05:00
Luke Parker
0e8c55e050 Update and remove unused dependencies 2023-03-07 03:06:46 -05:00
Luke Parker
d36fc026dd Remove unused generic in frost 2023-03-07 02:40:09 -05:00
Luke Parker
0bbf511062 Add 'static/Send/Sync to specific traits in crypto
These were proven necessary by our real world usage.
2023-03-07 02:38:47 -05:00
Luke Parker
2729882d65 Update to {k, p}256 0.12 2023-03-07 02:34:10 -05:00
Luke Parker
c37cc0b4e2 Update Zeroize pin to ^1.5 from 1.5 2023-03-07 02:29:59 -05:00
Luke Parker
a053454ae4 3.9.4 Add tests to the transcript crate 2023-03-07 02:25:10 -05:00
Luke Parker
20a33079f8 3.9.3 Document Merlin domain_separate conflict potential and add an assert 2023-03-06 20:16:57 -05:00
Luke Parker
8307d4f6c8 cargo fmt 2023-03-06 08:23:14 -05:00
Luke Parker
db1fefe7c1 Update tendermint/node to latest substrate 2023-03-06 08:20:01 -05:00
Luke Parker
4a81640ab8 Update runtime to latest substrate 2023-03-06 08:14:22 -05:00
Luke Parker
943438628d cargo update 2023-03-06 07:39:43 -05:00
Luke Parker
7efedb9a91 3.9.1 Also correct invalid doc comment 2023-03-06 07:16:04 -05:00
Luke Parker
79124b9a33 3.9.2 Better document rng_seed is allowed to conflict with challenge 2023-03-02 11:19:26 -05:00
Luke Parker
6fec95b1a7 3.7.2 Remove code randomizing which side odd elements end up on
This could still be gamed. For [1, 2, 3], the options were ([1], [2, 3]) or
([1, 2], [3]). This means 2 would always have the maximum round count, and
thus this is still game-able. Accordingly, there's no point to keeping its
complexity when the algorithm is as efficient as it is.

While a proper random could be used to satisfy 3.7.2, it'd break the
expected determinism.
2023-03-02 11:16:00 -05:00
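
The retained behavior amounts to a fixed halving; for the [1, 2, 3] example
above, it always yields ([1], [2, 3]). A trivial sketch:

fn split<T>(pairs: &[T]) -> (&[T], &[T]) {
  // Any odd element deterministically lands on the right side.
  pairs.split_at(pairs.len() / 2)
}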
Luke Parker
2f4f1de488 3.9.1 Fix SecureDigest trait bound 2023-03-02 10:57:22 -05:00
Luke Parker
97374a3e24 3.8.6 Correct transcript to scalar derivation
Replaces the externally passed in Digest with C::H since C is available.
2023-03-02 10:04:18 -05:00
Luke Parker
530671795a 3.8.5 Let the caller pass in a DST for the aggregation hash function
Also moves the aggregator over to Digest. While a bit verbose for this context,
as all appended items were fixed length, its length prefixing is solid and
the API is pleasant. The downside is the additional dependency which is
in tree and quite compact.
2023-03-02 09:29:37 -05:00
Luke Parker
8b7e7b1a1c 3.8.4 Don't additionally transcript keys with challenges 2023-03-02 09:14:36 -05:00
Luke Parker
053f07a281 3.8.3 Document challenge requirements 2023-03-02 09:08:53 -05:00
Luke Parker
08f9287107 3.8.2 Add Ed25519 RFC 8032 test vectors 2023-03-02 09:06:03 -05:00
Luke Parker
35043d2889 3.8.1 Document RFC 8032 compatibility 2023-03-02 08:45:09 -05:00
Luke Parker
1d2ebdca62 3.7.6, 3.7.7 Optimize multiexp implementations 2023-03-02 06:12:02 -05:00
Luke Parker
e5329b42e6 3.7.5 Further document multiexp functions 2023-03-02 05:49:45 -05:00
Luke Parker
8144956f8a Further document get_unlocked_outputs 2023-03-02 05:31:22 -05:00
Luke Parker
408494f8de 3.7.4 Always zeroize multiexp data
This was handled as part of handling 3.7.1
2023-03-02 03:59:49 -05:00
Luke Parker
8661111fc6 3.7.3 Add multiexp tests 2023-03-02 03:58:48 -05:00
Luke Parker
93d5f41917 3.7.2 Randomize which side odd elements end up on during blame 2023-03-02 01:55:08 -05:00
Luke Parker
15d6be1678 3.7.1 Deduplicate flattening/zeroize code
While the prior intent was to avoid zeroizing for vartime verification, which
is assumed to not have any private data, this simplifies the code and promotes
safety.
2023-03-02 01:13:07 -05:00
Luke Parker
2fd5cd8161 3.6.9 Add several tests to the FROST library
Offset signing is now tested. Multi-nonce algorithms are now tested.
Multi-generator nonce algorithms are now tested. More fault cases are now tested
as well.
2023-03-01 08:02:45 -05:00
Luke Parker
c6284b85a4 3.6.8 Simplify offset splitting
This wasn't done prior to be 'leaderless', as now the participant with the
lowest ID has an extra step, yet this is still trivial. There's also notable
performance benefits to not taking the previous dividing approach, which
performed an exp.
2023-03-01 01:06:13 -05:00
Luke Parker
a42a84e1e8 3.6.7 Seal IetfTranscript 2023-03-01 00:42:01 -05:00
Luke Parker
5a3406bb5f 3.6.6 Further document nonces
This was already a largely documented file. While the terminology is
potentially ambiguous, there's not a clearer path perceived at this time.
2023-03-01 00:35:37 -05:00
Luke Parker
62b3036cbd 3.6.5 Document origin of vectors 2023-02-28 23:23:22 -05:00
Luke Parker
6a15b21949 3.6.4 Document inclusion of Ed448 HRAM vector 2023-02-28 23:14:59 -05:00
Luke Parker
39b3452da1 3.6.3 Check commitment encoding
Also extends tests of 3.6.2 and so on.
2023-02-28 21:57:18 -05:00
Luke Parker
7a05466049 3.6.2 Test nonce generation
There's two ways which this could be tested.

1) Preprocess not taking in an arbitrary RNG item, yet the relevant bytes

This would be an unsafe level of refactoring, in my opinion.

2) Test random_nonce and test the passed in RNG eventually ends up at
random_nonce.

This takes the latter route, both verifying random_nonce meets the vectors
and that the FROST machine calls random_nonce properly.
2023-02-28 21:02:12 -05:00
GitHub Actions
6b5097f0c3 Update nightly 2023-03-01 01:50:10 +00:00
Luke Parker
c1435a2045 3.4.a Panic if generators.len() != scalars.len() for MultiDLEqProof 2023-02-28 00:00:29 -05:00
Luke Parker
969a5d94f2 3.6.1 Document rejection of zero nonces 2023-02-24 06:16:22 -05:00
Luke Parker
93f7afec8b 3.5.2 Add more tests to ff-group-tests
The audit recommends checking failure cases for from_bytes,
from_bytes_unchecked, and from_repr. This isn't feasible.

from_bytes is allowed to have non-canonical values. [0xff; 32] may accordingly
be a valid point for non-SEC1-encoded curves.

from_bytes_unchecked doesn't have a defined failure mode, and by name,
unchecked, shouldn't necessarily fail. The audit acknowledges the tests should
test for whatever result is 'appropriate', yet any result which isn't a failure
on a valid element is appropriate.

from_repr must be canonical, yet for a binary field of 2^n where n % 8 == 0, a
[0xff; n / 8] repr would be valid.
2023-02-24 06:03:56 -05:00
Luke Parker
32c18cac84 3.5.1 Document presence of k256/p256 2023-02-24 05:28:31 -05:00
Luke Parker
65376e93e5 3.4.3 Merge the nonce calculation from DLEqProof and MultiDLEqProof into a
single function

3.4.3 actually describes getting rid of DLEqProof for a thin wrapper around
MultiDLEqProof. That can't be done since DLEqProof doesn't require the std
feature, which enables the Vecs MultiDLEqProof relies on.

Merging the verification statement does simplify the code a bit. While merging
the proof could also be, it has much less value due to the simplicity of
proving (nonce * G, scalar * G).
2023-02-24 05:11:01 -05:00
Luke Parker
6104d606be 3.4.2 Document the DLEq lib 2023-02-24 04:37:20 -05:00
Luke Parker
1a6497f37a 3.3.5 Clarify GeneratorPromotion is only for generators, not curves 2023-02-23 07:21:47 -05:00
Luke Parker
4d6a0bbd7d 3.3.4 Use FROST context throughout Encryption 2023-02-23 07:19:55 -05:00
Luke Parker
2d56d24d9c 3.3.3 (cont) Add a dedicated Participant type 2023-02-23 06:50:45 -05:00
Luke Parker
87dea5e455 3.3.3 Add an assert if polynomial is called with 0
This will only be called with 0 if the code fails to do proper screening of its
arguments. If such a flaw is present, the DKG lib is critically broken (as this
function isn't public). If it was allowed to continue executing, it'd reveal
the secret share.
2023-02-23 04:56:05 -05:00
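
A sketch of the guarded evaluation (names hypothetical; the real function also
handles zeroizing): f(0) is the constant term, i.e. the secret itself.

use k256::Scalar; // any ff field works; k256's Scalar is used arbitrarily

fn polynomial(coefficients: &[Scalar], l: u16) -> Scalar {
  assert!(l != 0, "evaluating the polynomial at 0 would reveal the secret");
  let l = Scalar::from(u64::from(l));
  let mut share = Scalar::ZERO;
  // Horner's method, from the highest-order coefficient down.
  for (idx, coefficient) in coefficients.iter().rev().enumerate() {
    share += *coefficient;
    if idx != (coefficients.len() - 1) {
      share *= l;
    }
  }
  share
}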
Luke Parker
8bee62609c 3.3.2 Use a static IV and clarify cipher documentation 2023-02-23 04:44:20 -05:00
Luke Parker
d72c4ca4f7 3.3.1 replace try_from with from 2023-02-23 04:29:38 -05:00
Luke Parker
d929a8d96e 3.2.2 Use a hash to point for random points in dfg 2023-02-23 04:29:17 -05:00
Luke Parker
74647b1b52 3.2.3 Don't yield identity in Group::random 2023-02-23 04:14:07 -05:00
Luke Parker
40a6672547 3.2.1, 3.2.4, 3.2.5. Documentation and tests 2023-02-23 04:05:47 -05:00
Luke Parker
686a5ee364 3.1.4 Further document hash_to_F which may collide 2023-02-23 01:09:22 -05:00
Luke Parker
cb4ce5e354 3.1.3 Use a checked_add for the modulus in secp256k1/P-256 2023-02-23 00:57:41 -05:00
Luke Parker
ac0f5e9b2d 3.1.2 Remove oversize DST handling for code present in elliptic-curve already
Adds a test to ensure that elliptic-curve does in fact handle this properly.
2023-02-23 00:52:13 -05:00
Luke Parker
18ac80671f 3.1.1 Document secp256k1/P-256 hash_to_F 2023-02-23 00:37:19 -05:00
Luke Parker
8260ec1a9e Add another TODO 2023-02-15 20:44:53 -05:00
Luke Parker
07f424b484 cargo fmt 2023-02-15 20:39:22 -05:00
Luke Parker
5de8bf3295 Add additional checks/documentation to monero 2023-02-15 01:56:36 -05:00
Luke Parker
82a096e90e Scanner assert on is_torsion_free 2023-02-14 15:49:16 -05:00
github-actions[bot]
c540f52dda Update nightly (#248)
Co-authored-by: GitHub Actions <>
2023-02-01 22:46:53 -05:00
Luke Parker
264174644f Further workaround #247 2023-01-31 10:48:19 -05:00
Luke Parker
df75782e54 Re-license bitcoin-serai to MIT
It's pretty basic code, yet would still be quite pleasant to the larger
community.

Also adds documentation.
2023-01-31 09:45:25 -05:00
Luke Parker
86ad947261 Add missing semicolon 2023-01-31 08:15:00 -05:00
Luke Parker
a3267034b6 Bitcoin External/Branch/Change addresses
Adds support for offset inputs to the Bitcoin lib
2023-01-31 08:10:28 -05:00
VRx
c6bd00e778 Bitcoin processor (#232)
* serai Dockerfile & Makefile fixed

* added new bitcoin mod & bitcoinhram

* couple changes

* added odd&even check for bitcoin signing

* sign message updated

* print_keys commented out

* fixed signing process

* Added new bitcoin library & added most of bitcoin processor logic

* added new crate and refactored the bitcoin coin library

* added signing test function

* moved signature.rs

* publish set to false

* tests moved back to the root

* added new functions to rpc

* added utxo test

* added new rpc methods and refactored bitcoin processor

* added spendable output & fixed errors & added new logic for sighash & opened port 18443 for bitcoin docker

* changed tweak keys

* added tweak_keys & publish transaction and refactored bitcoin processor

* added new structs and fixed problems for testing purposes

* reverted dockerfile back its original

* reverted block generation of bitcoin to 5 seconds

* deleted unnecessary test function

* added new sighash & added new dbg messages & fixed couple errors

* fixed couple issue & removed unused functions

* fix for signing process

* crypto file for bitcoin refactored

* disabled test_send & removed some of the debug logs

* signing implemented & transaction weight calculation added & change address logic added

* refactored tweak_keys

* refactored mine_block & fixed change_address logic

* implemented new traits to bitcoin processor& refactored bitcoin processor

* added new line to tests file

* added new line to bitcoin's wallet.rs

* deleted Cargo.toml from coins folder

* edited bitcoin's Cargo.toml and added LICENSE

* added new line to bitcoin's Cargo.toml

* added spaces

* added spaces

* deleted unnecessary object

* added spaces

* deleted patch numbers

* updated sha256 parameter for message

* updated tag as const

* deleted unnecessary brackets and imports

* updated rpc.rs to 2 space indent

* deleted unnecessary brackets

* deleted unnecessary brackets

* changed it to explicit

* updated to explicit

* deleted unnecessary parsing

* added ? for easy return

* updated imports

* updated height to number

* deleted unnecessary brackets

* updated clsag to sig & to_vec to as_ref

* updated _sig to schnorr_signature

* deleted unnecessary variable

* updated Cargo.toml of processor and bitcoin

* updated imports of bitcoin processor

* updated MBlock to BBlock

* updated MSignable to BSignable

* updated imports

* deleted mask from Fee

* updated get_block function return

* updated comparison logic for scripts

* updated assert to debug_assert

* updated height to number

* updated txid logic

* updated tweak_keys definition

* updated imports

* deleted new line

* delete HashMap from monero

* deleted old test code parts

* updated test amount to a round number

* changed the test code part back to its original

* updated imports of rpc.rs

* deleted unnecessary return assignments

* deleted get_fee_per_byte

* deleted create_raw_transaction

* deleted fund_raw_transaction

* deleted sign transaction rpc

* delete verify_message rpc

* deleted get_balance

* deleted decode_raw_transaction rpc

* deleted list_transactions rpc

* changed test_send to p2wpkh

* updated imports of test_send

* fixed imports of test_send

* updated bitcoin's mine_block function

* updated bitcoin's test_send

* updated bitcoin's hram and test_signing

* deleted 2 rpc function (is_confirmed & get_transaction_block_number)

* deleted get_raw_transaction_hex

* deleted get_raw_transaction_info

* deleted new_address

* deleted test_mempool_accept

* updated remove(0) to remove(index)

* deleted ger_raw_transaction

* deleted RawTx trait and converted type to Transaction

* reverted raw_hex feature back

* added NotEnoughFunds to CoinError

* changed Sighash to all

* removed lifetime of RpcParams

* changed pub to pub(crate) & changed sig_hash line

* changed taproot_key_spend_signature_hash to internal

* added Clone to RpcError & deleted get_utxo_for

* changed to_hex to as_bytes for weight calculation

* updated SpendableOutput

* deleted unnecessary parentheses

* updated serialize of Output s id field

* deleted unused crate & added lazy_static

* updated RPC init function

* added lazy_static for TAG_HASH & updated imported crates

* changed get_block_index to get_block_number

* deleted get_block_info

* updated get_height to get_latest_block_number

* removed GetBlockWithDetailResult and get_block_with_transactions

* deleted unnecessary imports from rpc_helper

* removed lock and unlock_unspent

* deleted get_transactions and get_transaction and renamed get_raw_transaction to get_transaction

* updated opt_into_json

* changed payment_address and amount to output_script and amount for transcript

* refactored error logic for rpc & deleted anyhow crate

* added a dedicated file for json helper functions

* refactored imports and deleted unused code

* added clippy::non_snake_case

* removed unused Error items

* added new line to Cargo

* removed Block and used bitcoin::Block directly

* removed added println and futures.len check

* removed HashMap from coin mod.rs

* updated Testnet to Regtest

* removed unnecessary variable

* updated as_str to &

* removed RawTx trait

* added newline

* changed test transaction to p2pkh

* updated test_send

* updated test_send

* updated test_send

* reformatted bitcoin processor

* moved sighash logic into signmachine

* removed generate_to_address

* added test_address function to bitcoin processor

* updated RpcResponse to enum and added Clone trait

* removed old RpcResponse

* updated shared_key to internal_key

* updated fee part

* updated test_send block logic

* added a test function for getting spendables

* updated tweaking keys logic

* updated calculate_weight logic

* added todo for BitcoinSchnorr Algorithm

* updated calculate_weight

* updated calculate_weight

* updated calculate_weight

* added a TODO for bitcoin's signing process

* removed unused code

* Finish merging develop

* cargo fmt

* cargo machete

* Handle most clippy lints on bitcoin

Doesn't handle the unused transcript due to pending cryptographic considerations.

* Rearrange imports and clippy tests

* Misc processor lint

* Update deny.toml

* Remove unnecessary RPC code

* updated test_send

* added bitcoin ci & updated test-dependencies yml

* fixed bitcoin ci

* updated bitcoin ci yml

* Remove mining from the bitcoin/monero docker files

The tests should control block production in order to test various
circumstances. The automatic mining disrupts assumptions made in testing. Since
we're now using the Bitcoin docker container for testing...

* Multiple fixes to the Bitcoin processor

Doesn't unwrap on RPC errors. Returns the expected connection error.

Fee calculation has a random - 1. This has been removed.

Supports the change address being an Option, as it is. This should not have
been blindly unwrapped.

* Remove unnecessary RPC code

* Further RPC simplifications

* Simplify Bitcoin action

It should not be mining.

* cargo fmt

* Finish RPC simplifications

* Run bitcoind as a daemon

* Remove the requirement on txindex

Saves tens of GB.

Also has attempt_send no longer return a list of outputs. That's incompatible
with this and only relevant to old scheduling designs.

* Remove number from Bitcoin SignableTransaction

Monero requires the current block number for decoy selection. Bitcoin doesn't
have a use.

* Ban coinbase transactions

These are burdened by maturity, so it's critically flawed to support them.

This causes the test_send function to fail as its working was premised on
a coinbase output. While it does make an actual output, it had insufficient
funds for the test's expectations due to regtest halving every 150 blocks.

In order to workaround this, the test will invalidate any existing chain,
offering a fresh start.

Also removes test_get_spendables and simplifies test_send.

* Various simplifications

Modifies SpendableOutput further to not require RPC calls at time of sign.

Removes the need to have get_transaction in the RPC.

* Clean prepare_send

* Update the Bitcoin TransactionMachine to output a Transaction

* Bitcoin TransactionMachine simplifications

* Update XOnly key handling

* Use a single sighash cache

* Move tweak_keys

* Remove unnecessary PSBT sets

* Restore removed newlines

* Other newlines

* Replace calculate_weight's custom math with a dummy TX serialize

* Move BTC TX construction code from processor to bitcoin

* Rename transactions.rs to wallet.rs

* Remove unused crate

* Note TODO

* Clean bitcoin signature test

* Make unit test out of BTC FROST signing test

* Final lint

* Remove usage of PartiallySignedTransaction

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-01-31 07:48:14 -05:00
Luke Parker
fba5b7fed4 Attempt workaround for #247 2023-01-31 07:07:27 -05:00
Luke Parker
affe4300e8 Replace clippy --tests with --all-targets 2023-01-30 07:30:02 -05:00
Luke Parker
b6f9a1f8b6 Rename file with dashes to having underscores 2023-01-30 04:27:39 -05:00
akildemir
9e01588b11 add test send to wallet-rpc with arb data (#246)
* add test send to wallet-rpc with arb data

* convert literals to const
2023-01-30 04:25:46 -05:00
Luke Parker
b0e0fc44cf Add secondary while loop to serai/client's test runner
There's a failing CI run on a node which booted yet hadn't created a genesis
block. Apparently, the RPC is potentially accessible before the chain is in
place.

This attempts to resolve that.
2023-01-28 03:32:21 -05:00
Luke Parker
a4fdff3e3b Make progress on #235
I'm still not exactly sure where the trap handler in Monero for this is...
until then, this remains potentially fingerprintable.
2023-01-28 03:18:41 -05:00
Luke Parker
9241bdc3b5 Fix #183 2023-01-28 02:35:32 -05:00
Luke Parker
b253529413 cargo fmt 2023-01-28 01:59:42 -05:00
Luke Parker
2ace339975 Tokens pallet (#243)
* Use Monero-compatible additional TX keys

This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.

* Add tokens_primitives

Moves OutInstruction from in-instructions.

Turns Destination into OutInstruction.

* Correct in-instructions DispatchClass

* Add initial tokens pallet

* Don't allow pallet addresses to equal identity

* Add support for InInstruction::transfer

Requires a cargo update due to modifications made to serai-dex/substrate.

Successfully mints a token to a SeraiAddress.

* Bind InInstructions to an amount

* Add a call filter to the runtime

Prevents worrying about calls to the assets pallet/generally tightens things
up.

* Restore Destination

It was merged into OutInstruction, yet it didn't make sense for OutInstruction
to contain a SeraiAddress.

Also deletes the excessively dated Scenarios doc.

* Split PublicKey/SeraiAddress

Lets us define a custom Display/ToString for SeraiAddress.

Also resolves an oddity where PublicKey would be encoded as String, not
[u8; 32].

* Test burning tokens/retrieving OutInstructions

Modularizes processor_coinUpdates into a shared testing utility.

* Misc lint

* Don't use PolkadotExtrinsicParams
2023-01-28 01:47:13 -05:00
akildemir
f12cc2cca6 Add more tests (#240)
* add wallet-rpc-compatibility tests

* fmt + clippy

* add wallet-rpc receive tests

* add (0,0) subaddress check to standard address
2023-01-24 15:22:07 -05:00
Luke Parker
19664967ed Use Monero-compatible additional TX keys
This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.
2023-01-21 01:29:02 -05:00
Luke Parker
27f5881553 Ensure Amount uses checked ops 2023-01-20 11:04:21 -05:00
Luke Parker
8ca90e7905 Initial In Instructions pallet and Serai client lib (#233)
* Initial work on an In Inherents pallet

* Add an event for when a batch is executed

* Add a dummy provider for InInstructions

* Add in-instructions to the node

* Add the Serai runtime API to the processor

* Move processor tests around

* Build a subxt Client around Serai

* Successfully get Batch events from Serai

Renamed processor/substrate to processor/serai.

* Much more robust InInstruction pallet

* Implement the workaround from https://github.com/paritytech/subxt/issues/602

* Initial prototype of processor generated InInstructions

* Correct PendingCoins data flow for InInstructions

* Minor lint to in-instructions

* Remove the global Serai connection for a partial re-impl

* Correct ID handling of the processor test

* Workaround the delay in the subscription

* Make an unwrap an if let Some, remove old comments

* Lint the processor toml

* Rebase and update

* Move substrate/in-instructions to substrate/in-instructions/pallet

* Start an in-instructions primitives lib

* Properly update processor to subxt 0.24

Also corrects failures from the rebase.

* in-instructions cargo update

* Implement IsFatalError

* is_inherent -> true

* Rename in-instructions crates and misc cleanup

* Update documentation

* cargo update

* Misc update fixes

* Replace height with block_number

* Update processor src to latest subxt

* Correct pipeline for InInstructions testing

* Remove runtime::AccountId for serai_primitives::NativeAddress

* Rewrite the in-instructions pallet

Complete with respect to the currently written docs.

Drops the custom serializer for just using SCALE.

Makes slight tweaks as relevant.

* Move instructions' InherentDataProvider to a client crate

* Correct doc gen

* Add serde to in-instructions-primitives

* Add in-instructions-primitives to pallet

* Heights -> BlockNumbers

* Get batch pub test loop working

* Update in instructions pallet terminology

Removes the ambiguous Coin for Update.

Removes pending/artificial latency for future client work.

Also moves to using serai_primitives::Coin.

* Add a BlockNumber primitive

* Belated cargo fmt

* Further document why DifferentBatch isn't fatal

* Correct processor sleeps

* Remove metadata at compile time, add test framework for Serai nodes

* Remove manual RPC client

* Simplify update test

* Improve re-exporting behavior of serai-runtime

It now re-exports all pallets underneath it.

* Add a function to get storage values to the Serai RPC

* Update substrate/ to latest substrate

* Create a dedicated crate for the Serai RPC

* Remove unused dependencies in substrate/

* Remove unused dependencies in coins/

Out of scope for this branch, just minor and path of least resistance.

* Use substrate/serai/client for the Serai RPC lib

It's a bit out of place, since these client folders are intended for the node to
access pallets and so on. This is for end-users to access Serai as a whole.

In that sense, it made more sense as a top level folder, yet that also felt
out of place.

* Move InInstructions test to serai-client for now

* Final cleanup

* Update deny.toml

* Cargo.lock update from merging develop

* Update nightly

Attempt to work around the current CI failure, which is a Rust ICE.

We previously didn't upgrade due to clippy 10134, yet that's been reverted.

* clippy

* clippy

* fmt

* NativeAddress -> SeraiAddress

* Sec fix on non-provided updates and doc fixes

* Add Serai as a Coin

Necessary in order to swap to Serai.

* Add a BlockHash type, used for batch IDs

* Remove origin from InInstruction

Makes InInstructionTarget. Adds RefundableInInstruction with origin.

* Document storage items in in-instructions

* Rename serai/client/tests/serai.rs to updates.rs

It only tested publishing updates and their successful acceptance.
2023-01-20 11:00:18 -05:00
377 changed files with 51276 additions and 9365 deletions

2
.gitattributes vendored

@@ -1,3 +1,5 @@
# Auto detect text files and perform LF normalization
* text=auto
* text eol=lf
*.pdf binary

47
.github/actions/bitcoin/action.yml vendored Normal file

@@ -0,0 +1,47 @@
name: bitcoin-regtest
description: Spawns a regtest Bitcoin daemon
inputs:
version:
description: "Version to download and run"
required: false
default: 24.0.1
runs:
using: "composite"
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@v3
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Bitcoin Daemon
if: steps.cache-bitcoind.outputs.cache-hit != 'true'
shell: bash
run: |
RUNNER_OS=linux
RUNNER_ARCH=x86_64
FILE=bitcoin-${{ inputs.version }}-$RUNNER_ARCH-$RUNNER_OS-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-${{ inputs.version }}/$FILE
mv $FILE bitcoin.tar.gz
- name: Extract the Bitcoin Daemon
shell: bash
run: |
tar xzvf bitcoin.tar.gz
cd bitcoin-${{ inputs.version }}
sudo mv bin/* /bin && sudo mv lib/* /lib
- name: Bitcoin Regtest Daemon
shell: bash
run: |
RPC_USER=serai
RPC_PASS=seraidex
bitcoind -txindex -regtest \
-rpcuser=$RPC_USER -rpcpassword=$RPC_PASS \
-rpcbind=127.0.0.1 -rpcbind=$(hostname) -rpcallowip=0.0.0.0/0 \
-daemon


@@ -21,7 +21,7 @@ runs:
using: "composite"
steps:
- name: Install Protobuf
uses: arduino/setup-protoc@master
uses: arduino/setup-protoc@v2.0.0
with:
repo-token: ${{ inputs.github-token }}
@@ -37,14 +37,7 @@ runs:
with:
toolchain: ${{ inputs.rust-toolchain }}
components: ${{ inputs.rust-components }}
targets: wasm32-unknown-unknown, riscv32imac-unknown-none-elf
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install WASM toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ steps.nightly.outputs.version }}
targets: wasm32-unknown-unknown
# - name: Cache Rust
# uses: Swatinem/rust-cache@v2


@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.1.2
default: v0.18.2.0
runs:
using: "composite"
@@ -37,7 +37,7 @@ runs:
wget https://downloads.getmonero.org/cli/$FILE
tar -xvf $FILE
mv monero-x86_64-linux-gnu-${{ inputs.version }}/monero-wallet-rpc monero-wallet-rpc
- name: Monero Wallet RPC
shell: bash


@@ -5,7 +5,7 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.1.2
default: v0.18.2.0
runs:
using: "composite"


@@ -10,7 +10,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.0.0
default: v0.18.2.0
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: 24.0.1
runs:
using: "composite"
@@ -30,5 +35,10 @@ runs:
with:
version: ${{ inputs.monero-version }}
- name: Run a Bitcoin Regtest Node
uses: ./.github/actions/bitcoin
with:
version: ${{ inputs.bitcoin-version }}
- name: Run a Monero Wallet-RPC
uses: ./.github/actions/monero-wallet-rpc


@@ -1 +1 @@
nightly-2022-12-01
nightly-2023-07-01


@@ -6,10 +6,12 @@ on:
- develop
paths:
- "coins/monero/**"
- "processor/**"
pull_request:
paths:
- "coins/monero/**"
- "processor/**"
jobs:
# Only run these once since they will be consistent regardless of any node
@@ -33,7 +35,7 @@ jobs:
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.1.2]
version: [v0.17.3.2, v0.18.2.0]
steps:
- uses: actions/checkout@v3
@@ -45,12 +47,13 @@ jobs:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
# Runs with the binaries feature so the binaries build
# https://github.com/rust-lang/cargo/issues/8396
run: cargo test --package monero-serai --test '*'
run: cargo test --package monero-serai --features binaries --test '*'
- name: Run Integration Tests
# Don't run if the tests workflow also will
if: ${{ matrix.version != 'v0.18.1.2' }}
if: ${{ matrix.version != 'v0.18.2.0' }}
run: |
cargo test --package monero-serai --all-features --test '*'
cargo test --package serai-processor monero
cargo test --package serai-processor --all-features monero

21
.github/workflows/no-std.yml vendored Normal file

@@ -0,0 +1,21 @@
name: no-std build
on:
push:
branches:
- develop
pull_request:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Verify no-std builds
run: cd tests/no-std && cargo build --target riscv32imac-unknown-none-elf


@@ -21,12 +21,12 @@ jobs:
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
# Clippy requires nightly due to serai-runtime requiring it
rust-toolchain: ${{ steps.nightly.outputs.version }}
rust-components: clippy
- name: Run Clippy
run: cargo clippy --all-features --tests -- -D warnings -A dead_code
# Allow dbg_macro when run locally, yet not when pushed
run: cargo clippy --all-features --all-targets -- -D clippy::dbg_macro $(grep "\S" ../../clippy-config | grep -v "#")
deny:
runs-on: ubuntu-latest
@@ -58,8 +58,13 @@ jobs:
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Build node
run: |
cd substrate/node
cargo build
- name: Run Tests
run: cargo test --all-features
run: GITHUB_CI=true cargo test --all-features
fmt:
runs-on: ubuntu-latest

5582
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,6 +1,8 @@
[workspace]
members = [
"common/std-shims",
"common/zalloc",
"common/db",
"crypto/transcript",
@@ -15,29 +17,42 @@ members = [
"crypto/dleq",
"crypto/dkg",
"crypto/frost",
"crypto/schnorrkel",
"coins/ethereum",
"coins/monero/generators",
"coins/monero",
"message-queue",
"processor/messages",
"processor",
"substrate/serai/primitives",
"coordinator/tributary/tendermint",
"coordinator/tributary",
"coordinator",
"substrate/primitives",
"substrate/tokens/primitives",
"substrate/tokens/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/tendermint/machine",
"substrate/tendermint/primitives",
"substrate/tendermint/client",
"substrate/tendermint/pallet",
"substrate/runtime",
"substrate/node",
"substrate/client",
"tests/no-std",
]
# Always compile Monero (and a variety of dependencies) with optimizations due
# to the unoptimized performance of Bulletproofs
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }


@@ -9,6 +9,8 @@ wallet.
### Layout
- `audits`: Audits for various parts of Serai.
- `docs`: Documentation on the Serai protocol.
- `common`: Crates containing utilities common to a variety of areas under
@@ -26,6 +28,9 @@ wallet.
- `processor`: A generic chain processor to process data for Serai and process
events from Serai, executing transactions as expected and needed.
- `coordinator`: A service to manage processors and communicate over a P2P
network with other validators.
- `substrate`: Substrate crates used to instantiate the Serai network.
- `deploy`: Scripts to deploy a Serai node/test environment.

Binary file not shown.


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Cypher Stack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,7 @@
# Cypher Stack /crypto Audit, March 2023
This audit was over the /crypto folder, excluding the ed448 crate, the `Ed448`
ciphersuite in the ciphersuite crate, and the `dleq/experimental` feature. It
encompasses up to commit 669d2dbffc1dafb82a09d9419ea182667115df06.
Please see https://github.com/cypherstack/serai-audit for provenance.

51
clippy-config Normal file

@@ -0,0 +1,51 @@
# No warnings allowed
-D warnings
# nursery
-D clippy::nursery
# Erratic and unhelpful
-A clippy::missing_const_for_fn
# Too many false/irrelevant positives
-A clippy::redundant_pub_crate
# Flags on any debug_assert using an RNG
-A clippy::debug_assert_with_mut_call
# Stylistic preference
-A clippy::option_if_let_else
# pedantic
-D clippy::unnecessary_wraps
-D clippy::unused_async
-D clippy::unused_self
# restrictions
# Safety
-D clippy::as_conversions
-D clippy::disallowed_script_idents
-D clippy::wildcard_enum_match_arm
# Clarity
-D clippy::assertions_on_result_states
-D clippy::deref_by_slicing
-D clippy::empty_structs_with_brackets
-D clippy::get_unwrap
-D clippy::rest_pat_in_fully_bound_structs
-D clippy::semicolon_inside_block
-D clippy::tests_outside_test_module
# Quality
-D clippy::format_push_string
-D clippy::string_to_string
# These potentially should be enabled in the future
# -D clippy::missing_errors_doc
# -D clippy::missing_panics_doc
# -D clippy::doc_markdown
# TODO: Enable this
# -D clippy::cargo
# Not in nightly yet
# -D clippy::redundant_type_annotations
# -D clippy::big_endian_bytes
# -D clippy::host_endian_bytes

37
coins/bitcoin/Cargo.toml Normal file

@@ -0,0 +1,37 @@
[package]
name = "bitcoin-serai"
version = "0.2.0"
description = "A Bitcoin library for FROST-signing transactions"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
[dependencies]
lazy_static = "1"
thiserror = "1"
zeroize = "^1.5"
rand_core = "0.6"
sha2 = "0.10"
secp256k1 = { version = "0.27", features = ["global-context"] }
bitcoin = { version = "0.30", features = ["serde"] }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic", "bits"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", features = ["recommended"] }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["secp256k1"] }
hex = "0.4"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
reqwest = { version = "0.11", features = ["json"] }
[dev-dependencies]
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["tests"] }
tokio = { version = "1", features = ["full"] }
[features]
hazmat = []

4
coins/bitcoin/README.md Normal file

@@ -0,0 +1,4 @@
# bitcoin-serai
An application of [modular-frost](https://docs.rs/modular-frost) to Bitcoin
transactions, enabling extremely-efficient multisigs.

160
coins/bitcoin/src/crypto.rs Normal file

@@ -0,0 +1,160 @@
use core::fmt::Debug;
use std::io;
use lazy_static::lazy_static;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use sha2::{Digest, Sha256};
use transcript::Transcript;
use secp256k1::schnorr::Signature;
use k256::{
elliptic_curve::{
ops::Reduce,
sec1::{Tag, ToEncodedPoint},
},
U256, Scalar, ProjectivePoint,
};
use frost::{
curve::{Ciphersuite, Secp256k1},
Participant, ThresholdKeys, ThresholdView, FrostError,
algorithm::{Hram as HramTrait, Algorithm, Schnorr as FrostSchnorr},
};
use bitcoin::key::XOnlyPublicKey;
/// Get the x coordinate of a non-infinity, even point. Panics on invalid input.
pub fn x(key: &ProjectivePoint) -> [u8; 32] {
let encoded = key.to_encoded_point(true);
assert_eq!(encoded.tag(), Tag::CompressedEvenY, "x coordinate of odd key");
(*encoded.x().expect("point at infinity")).into()
}
/// Convert a non-infinite even point to a XOnlyPublicKey. Panics on invalid input.
pub fn x_only(key: &ProjectivePoint) -> XOnlyPublicKey {
XOnlyPublicKey::from_slice(&x(key)).unwrap()
}
/// Make a point even by adding the generator until it is even. Returns the even point and the
/// amount of additions required.
pub fn make_even(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
let mut c = 0;
while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
key += ProjectivePoint::GENERATOR;
c += 1;
}
(key, c)
}
/// A BIP-340 compatible HRAm for use with the modular-frost Schnorr Algorithm.
///
/// If passed an odd nonce, it will have the generator added until it is even.
#[derive(Clone, Copy, Debug)]
pub struct Hram {}
lazy_static! {
static ref TAG_HASH: [u8; 32] = Sha256::digest(b"BIP0340/challenge").into();
}
#[allow(non_snake_case)]
impl HramTrait<Secp256k1> for Hram {
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
// Convert the nonce to be even
let (R, _) = make_even(*R);
let mut data = Sha256::new();
data.update(*TAG_HASH);
data.update(*TAG_HASH);
data.update(x(&R));
data.update(x(A));
data.update(m);
Scalar::reduce(U256::from_be_slice(&data.finalize()))
}
}
/// BIP-340 Schnorr signature algorithm.
///
/// This must be used with a ThresholdKeys whose group key is even. If it is odd, this will panic.
#[derive(Clone)]
pub struct Schnorr<T: Sync + Clone + Debug + Transcript>(FrostSchnorr<Secp256k1, T, Hram>);
impl<T: Sync + Clone + Debug + Transcript> Schnorr<T> {
/// Construct a Schnorr algorithm continuing the specified transcript.
pub fn new(transcript: T) -> Schnorr<T> {
Schnorr(FrostSchnorr::new(transcript))
}
}
impl<T: Sync + Clone + Debug + Transcript> Algorithm<Secp256k1> for Schnorr<T> {
type Transcript = T;
type Addendum = ();
type Signature = Signature;
fn transcript(&mut self) -> &mut Self::Transcript {
self.0.transcript()
}
fn nonces(&self) -> Vec<Vec<ProjectivePoint>> {
self.0.nonces()
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Secp256k1>,
) {
self.0.preprocess_addendum(rng, keys)
}
fn read_addendum<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Addendum> {
self.0.read_addendum(reader)
}
fn process_addendum(
&mut self,
view: &ThresholdView<Secp256k1>,
i: Participant,
addendum: (),
) -> Result<(), FrostError> {
self.0.process_addendum(view, i, addendum)
}
fn sign_share(
&mut self,
params: &ThresholdView<Secp256k1>,
nonce_sums: &[Vec<<Secp256k1 as Ciphersuite>::G>],
nonces: Vec<Zeroizing<<Secp256k1 as Ciphersuite>::F>>,
msg: &[u8],
) -> <Secp256k1 as Ciphersuite>::F {
self.0.sign_share(params, nonce_sums, nonces, msg)
}
#[must_use]
fn verify(
&self,
group_key: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
sum: Scalar,
) -> Option<Self::Signature> {
self.0.verify(group_key, nonces, sum).map(|mut sig| {
// Make the R of the final signature even
let offset;
(sig.R, offset) = make_even(sig.R);
// s = r + cx. Since we added to the r, add to s
sig.s += Scalar::from(offset);
// Convert to a secp256k1 signature
Signature::from_slice(&sig.serialize()[1 ..]).unwrap()
})
}
fn verify_share(
&self,
verification_share: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
share: Scalar,
) -> Result<Vec<(Scalar, ProjectivePoint)>, ()> {
self.0.verify_share(verification_share, nonces, share)
}
}

19
coins/bitcoin/src/lib.rs Normal file

@@ -0,0 +1,19 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
/// The bitcoin Rust library.
pub use bitcoin;
/// Cryptographic helpers.
#[cfg(feature = "hazmat")]
pub mod crypto;
#[cfg(not(feature = "hazmat"))]
pub(crate) mod crypto;
/// Wallet functionality to create transactions.
pub mod wallet;
/// A minimal asynchronous Bitcoin RPC client.
pub mod rpc;
#[cfg(test)]
mod tests;

145
coins/bitcoin/src/rpc.rs Normal file

@@ -0,0 +1,145 @@
use core::fmt::Debug;
use thiserror::Error;
use serde::{Deserialize, de::DeserializeOwned};
use serde_json::json;
use bitcoin::{
hashes::{Hash, hex::FromHex},
consensus::encode,
Txid, Transaction, BlockHash, Block,
};
#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
pub struct Error {
code: isize,
message: String,
}
#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
enum RpcResponse<T> {
Ok { result: T },
Err { error: Error },
}
/// A minimal asynchronous Bitcoin RPC client.
#[derive(Clone, Debug)]
pub struct Rpc(String);
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("couldn't connect to node")]
ConnectionError,
#[error("request had an error: {0:?}")]
RequestError(Error),
#[error("node sent an invalid response")]
InvalidResponse,
}
impl Rpc {
pub async fn new(url: String) -> Result<Rpc, RpcError> {
let rpc = Rpc(url);
// Make an RPC request to verify the node is reachable and sane
rpc.get_latest_block_number().await?;
Ok(rpc)
}
/// Perform an arbitrary RPC call.
pub async fn rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: serde_json::Value,
) -> Result<Response, RpcError> {
let client = reqwest::Client::new();
let res = client
.post(&self.0)
.json(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
.send()
.await
.map_err(|_| RpcError::ConnectionError)?
.text()
.await
.map_err(|_| RpcError::ConnectionError)?;
let res: RpcResponse<Response> =
serde_json::from_str(&res).map_err(|_| RpcError::InvalidResponse)?;
match res {
RpcResponse::Ok { result } => Ok(result),
RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
}
}
/// Get the latest block's number.
///
/// The genesis block's 'number' is zero. They increment from there.
pub async fn get_latest_block_number(&self) -> Result<usize, RpcError> {
// getblockcount doesn't return the amount of blocks on the current chain, yet the "height"
// of the current chain. The "height" of the current chain is defined as the "height" of the
// tip block of the current chain. The "height" of a block is defined as the amount of blocks
// present when the block was created. Accordingly, the genesis block has height 0, and
// getblockcount will return 0 when the genesis block is the only block, despite there being
// one block.
self.rpc_call("getblockcount", json!([])).await
}
/// Get the hash of a block by the block's number.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
let mut hash = *self
.rpc_call::<BlockHash>("getblockhash", json!([number]))
.await?
.as_raw_hash()
.as_byte_array();
// bitcoin stores the inner bytes in reverse order.
hash.reverse();
Ok(hash)
}
/// Get a block's number by its hash.
pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct Number {
height: usize,
}
Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
}
/// Get a block by its hash.
pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex).map_err(|_| RpcError::InvalidResponse)?;
let block: Block = encode::deserialize(&bytes).map_err(|_| RpcError::InvalidResponse)?;
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
if hash != &block_hash {
Err(RpcError::InvalidResponse)?;
}
Ok(block)
}
/// Publish a transaction.
pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
let txid = self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await?;
if txid != tx.txid() {
Err(RpcError::InvalidResponse)?;
}
Ok(txid)
}
/// Get a transaction by its hash.
pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex).map_err(|_| RpcError::InvalidResponse)?;
let tx: Transaction = encode::deserialize(&bytes).map_err(|_| RpcError::InvalidResponse)?;
let mut tx_hash = *tx.txid().as_raw_hash().as_byte_array();
tx_hash.reverse();
if hash != &tx_hash {
Err(RpcError::InvalidResponse)?;
}
Ok(tx)
}
}
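
A short usage sketch of this RPC client (URL and credentials hypothetical,
mirroring the regtest configuration used in CI):

use bitcoin_serai::rpc::{Rpc, RpcError};

#[tokio::main]
async fn main() -> Result<(), RpcError> {
  let rpc = Rpc::new("http://serai:seraidex@127.0.0.1:18443".to_string()).await?;
  let number = rpc.get_latest_block_number().await?;
  let hash = rpc.get_block_hash(number).await?;
  let block = rpc.get_block(&hash).await?;
  println!("block {number} has {} transactions", block.txdata.len());
  Ok(())
}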


@@ -0,0 +1,47 @@
use rand_core::OsRng;
use sha2::{Digest, Sha256};
use secp256k1::{SECP256K1, Message};
use k256::Scalar;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Secp256k1,
Participant,
tests::{algorithm_machines, key_gen, sign},
};
use crate::{
bitcoin::hashes::{Hash as HashTrait, sha256::Hash},
crypto::{x_only, make_even, Schnorr},
};
#[test]
fn test_algorithm() {
let mut keys = key_gen::<_, Secp256k1>(&mut OsRng);
const MESSAGE: &[u8] = b"Hello, World!";
for (_, keys) in keys.iter_mut() {
let (_, offset) = make_even(keys.group_key());
*keys = keys.offset(Scalar::from(offset));
}
let algo =
Schnorr::<RecommendedTranscript>::new(RecommendedTranscript::new(b"bitcoin-serai sign test"));
let sig = sign(
&mut OsRng,
algo.clone(),
keys.clone(),
algorithm_machines(&mut OsRng, algo, &keys),
&Sha256::digest(MESSAGE),
);
SECP256K1
.verify_schnorr(
&sig,
&Message::from(Hash::hash(MESSAGE)),
&x_only(&keys[&Participant::new(1).unwrap()].group_key()),
)
.unwrap()
}


@@ -0,0 +1 @@
mod crypto;


@@ -0,0 +1,160 @@
use std::{
io::{self, Read, Write},
collections::HashMap,
};
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
Scalar, ProjectivePoint,
};
use frost::{
curve::{Ciphersuite, Secp256k1},
ThresholdKeys,
};
use bitcoin::{
consensus::encode::{Decodable, serialize},
key::TweakedPublicKey,
OutPoint, ScriptBuf, TxOut, Transaction, Block, Network, Address,
};
use crate::crypto::{x_only, make_even};
mod send;
pub use send::*;
/// Tweak keys to ensure they're usable with Bitcoin.
pub fn tweak_keys(keys: &ThresholdKeys<Secp256k1>) -> ThresholdKeys<Secp256k1> {
let (_, offset) = make_even(keys.group_key());
keys.offset(Scalar::from(offset))
}
/// Return the Taproot address for a public key.
pub fn address(network: Network, key: ProjectivePoint) -> Option<Address> {
if key.to_encoded_point(true).tag() != Tag::CompressedEvenY {
return None;
}
Some(Address::p2tr_tweaked(TweakedPublicKey::dangerous_assume_tweaked(x_only(&key)), network))
}
/// A spendable output.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct ReceivedOutput {
// The scalar offset to obtain the key usable to spend this output.
offset: Scalar,
// The output to spend.
output: TxOut,
// The TX ID and vout of the output to spend.
outpoint: OutPoint,
}
impl ReceivedOutput {
/// The offset for this output.
pub fn offset(&self) -> Scalar {
self.offset
}
/// The outpoint for this output.
pub fn outpoint(&self) -> &OutPoint {
&self.outpoint
}
/// The value of this output.
pub fn value(&self) -> u64 {
self.output.value
}
/// Read a ReceivedOutput from a generic satisfying Read.
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
offset: Secp256k1::read_F(r)?,
output: TxOut::consensus_decode(r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid TxOut"))?,
outpoint: OutPoint::consensus_decode(r)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "invalid OutPoint"))?,
})
}
/// Write a ReceivedOutput to a generic satisfying Write.
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.offset.to_bytes())?;
w.write_all(&serialize(&self.output))?;
w.write_all(&serialize(&self.outpoint))
}
/// Serialize a ReceivedOutput to a Vec<u8>.
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
}
/// A transaction scanner capable of being used with HDKD schemes.
#[derive(Clone, Debug)]
pub struct Scanner {
key: ProjectivePoint,
scripts: HashMap<ScriptBuf, Scalar>,
}
impl Scanner {
/// Construct a Scanner for a key.
///
/// Returns None if this key can't be scanned for.
pub fn new(key: ProjectivePoint) -> Option<Scanner> {
let mut scripts = HashMap::new();
// Uses Network::Bitcoin since network is irrelevant here
scripts.insert(address(Network::Bitcoin, key)?.script_pubkey(), Scalar::ZERO);
Some(Scanner { key, scripts })
}
/// Register an offset to scan for.
///
/// Due to Bitcoin's requirement that points are even, not every offset may be used.
/// If an offset isn't usable, it will be incremented until it is. If this offset is already
/// present, None is returned. Else, Some(offset) will be, with the used offset.
pub fn register_offset(&mut self, mut offset: Scalar) -> Option<Scalar> {
loop {
match address(Network::Bitcoin, self.key + (ProjectivePoint::GENERATOR * offset)) {
Some(address) => {
let script = address.script_pubkey();
if self.scripts.contains_key(&script) {
None?;
}
self.scripts.insert(script, offset);
return Some(offset);
}
None => offset += Scalar::ONE,
}
}
}
/// Scan a transaction.
pub fn scan_transaction(&self, tx: &Transaction) -> Vec<ReceivedOutput> {
let mut res = vec![];
for (vout, output) in tx.output.iter().enumerate() {
if let Some(offset) = self.scripts.get(&output.script_pubkey) {
res.push(ReceivedOutput {
offset: *offset,
output: output.clone(),
outpoint: OutPoint::new(tx.txid(), u32::try_from(vout).unwrap()),
});
}
}
res
}
/// Scan a block.
///
/// This will also scan the coinbase transaction, which is bound by maturity. If received
/// outputs must be immediately spendable, a post-processing pass is needed to remove those
/// outputs. Alternatively, scan_transaction can be called on `block.txdata[1 ..]`.
pub fn scan_block(&self, block: &Block) -> Vec<ReceivedOutput> {
let mut res = vec![];
for tx in &block.txdata {
res.extend(self.scan_transaction(tx));
}
res
}
}
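Since register_offset may increment the requested offset to reach an even key, the caller must retain the returned offset, as that's the value needed when spending. A hedged usage sketch, assuming `key` is an even-Y key and `block` was fetched over RPC (the helper is illustrative):
fn scan_with_offset(key: ProjectivePoint, offset: Scalar, block: &Block) -> Vec<ReceivedOutput> {
  let mut scanner = Scanner::new(key).expect("key wasn't even");
  // The returned offset may differ from the requested one and is the offset
  // to use when spending
  let _registered = scanner.register_offset(offset).expect("offset was already registered");
  scanner.scan_block(block)
}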

View File

@@ -0,0 +1,382 @@
use std::{
io::{self, Read},
collections::HashMap,
};
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{elliptic_curve::sec1::ToEncodedPoint, Scalar};
use frost::{curve::Secp256k1, Participant, ThresholdKeys, FrostError, sign::*};
use bitcoin::{
sighash::{TapSighashType, SighashCache, Prevouts},
absolute::LockTime,
script::{PushBytesBuf, ScriptBuf},
OutPoint, Sequence, Witness, TxIn, TxOut, Transaction, Network, Address,
};
use crate::{
crypto::Schnorr,
wallet::{address, ReceivedOutput},
};
#[rustfmt::skip]
// https://github.com/bitcoin/bitcoin/blob/306ccd4927a2efe325c8d84be1bdb79edeb29b04/src/policy/policy.h#L27
const MAX_STANDARD_TX_WEIGHT: u64 = 400_000;
#[rustfmt::skip]
// https://github.com/bitcoin/bitcoin/blob/a245429d680eb95cf4c0c78e58e63e3f0f5d979a/src/test/transaction_tests.cpp#L815-L816
const DUST: u64 = 674;
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum TransactionError {
#[error("no inputs were specified")]
NoInputs,
#[error("no outputs were created")]
NoOutputs,
#[error("a specified payment's amount was less than bitcoin's required minimum")]
DustPayment,
#[error("too much data was specified")]
TooMuchData,
#[error("not enough funds for these payments")]
NotEnoughFunds,
#[error("transaction was too large")]
TooLargeTransaction,
}
/// A signable transaction, clone-able across attempts.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct SignableTransaction {
tx: Transaction,
offsets: Vec<Scalar>,
prevouts: Vec<TxOut>,
needed_fee: u64,
}
impl SignableTransaction {
fn calculate_weight(inputs: usize, payments: &[(Address, u64)], change: Option<&Address>) -> u64 {
// Expand this into a full transaction in order to use the bitcoin library's weight function
let mut tx = Transaction {
version: 2,
lock_time: LockTime::ZERO,
input: vec![
TxIn {
// This is a fixed size
// See https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format
previous_output: OutPoint::default(),
// This is empty for a Taproot spend
script_sig: ScriptBuf::new(),
// This is a fixed size, and we do use Sequence::MAX
sequence: Sequence::MAX,
// Our witness contains a single 64-byte signature
witness: Witness::from_slice(&[vec![0; 64]])
};
inputs
],
output: payments
.iter()
// The payment amount is a fixed size, so we don't have to use the actual value here
// The script pub key is not a fixed size and does have to be used here
.map(|payment| TxOut { value: payment.1, script_pubkey: payment.0.script_pubkey() })
.collect(),
};
if let Some(change) = change {
// Use a 0 value since we're currently unsure what the change amount will be, and since
// the value is fixed size (so any value could be used here)
tx.output.push(TxOut { value: 0, script_pubkey: change.script_pubkey() });
}
u64::try_from(tx.weight()).unwrap()
}
/// Returns the fee necessary for this transaction to achieve the fee rate specified at
/// construction.
///
/// The actual fee this transaction will use is `sum(inputs) - sum(outputs)`.
pub fn needed_fee(&self) -> u64 {
self.needed_fee
}
/// Create a new SignableTransaction.
///
/// If a change address is specified, any leftover funds will be sent to it if the leftover funds
/// exceed the minimum output amount. If a change address isn't specified, all leftover funds
/// will become part of the paid fee.
///
/// If data is specified, an OP_RETURN output will be added with it.
pub fn new(
mut inputs: Vec<ReceivedOutput>,
payments: &[(Address, u64)],
change: Option<Address>,
data: Option<Vec<u8>>,
fee_per_weight: u64,
) -> Result<SignableTransaction, TransactionError> {
if inputs.is_empty() {
Err(TransactionError::NoInputs)?;
}
if payments.is_empty() && change.is_none() && data.is_none() {
Err(TransactionError::NoOutputs)?;
}
for (_, amount) in payments {
if *amount < DUST {
Err(TransactionError::DustPayment)?;
}
}
if data.as_ref().map(|data| data.len()).unwrap_or(0) > 80 {
Err(TransactionError::TooMuchData)?;
}
let input_sat = inputs.iter().map(|input| input.output.value).sum::<u64>();
let offsets = inputs.iter().map(|input| input.offset).collect();
let tx_ins = inputs
.iter()
.map(|input| TxIn {
previous_output: input.outpoint,
script_sig: ScriptBuf::new(),
sequence: Sequence::MAX,
witness: Witness::new(),
})
.collect::<Vec<_>>();
let payment_sat = payments.iter().map(|payment| payment.1).sum::<u64>();
let mut tx_outs = payments
.iter()
.map(|payment| TxOut { value: payment.1, script_pubkey: payment.0.script_pubkey() })
.collect::<Vec<_>>();
// Add the OP_RETURN output
if let Some(data) = data {
tx_outs.push(TxOut {
value: 0,
script_pubkey: ScriptBuf::new_op_return(
&PushBytesBuf::try_from(data)
.expect("data didn't fit into PushBytes depsite being checked"),
),
})
}
let mut weight = Self::calculate_weight(tx_ins.len(), payments, None);
let mut needed_fee = fee_per_weight * weight;
if input_sat < (payment_sat + needed_fee) {
Err(TransactionError::NotEnoughFunds)?;
}
// If there's a change address, check if there's change to give it
if let Some(change) = change.as_ref() {
let weight_with_change = Self::calculate_weight(tx_ins.len(), payments, Some(change));
let fee_with_change = fee_per_weight * weight_with_change;
if let Some(value) = input_sat.checked_sub(payment_sat + fee_with_change) {
if value >= DUST {
tx_outs.push(TxOut { value, script_pubkey: change.script_pubkey() });
weight = weight_with_change;
needed_fee = fee_with_change;
}
}
}
if tx_outs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if weight > MAX_STANDARD_TX_WEIGHT {
Err(TransactionError::TooLargeTransaction)?;
}
Ok(SignableTransaction {
tx: Transaction { version: 2, lock_time: LockTime::ZERO, input: tx_ins, output: tx_outs },
offsets,
prevouts: inputs.drain(..).map(|input| input.output).collect(),
needed_fee,
})
}
/// Create a multisig machine for this transaction.
///
/// Returns None if the wrong keys are used.
pub fn multisig(
self,
keys: ThresholdKeys<Secp256k1>,
mut transcript: RecommendedTranscript,
) -> Option<TransactionMachine> {
transcript.domain_separate(b"bitcoin_transaction");
transcript.append_message(b"root_key", keys.group_key().to_encoded_point(true).as_bytes());
// Transcript the inputs and outputs
let tx = &self.tx;
for input in &tx.input {
transcript.append_message(b"input_hash", input.previous_output.txid);
transcript.append_message(b"input_output_index", input.previous_output.vout.to_le_bytes());
}
for payment in &tx.output {
transcript.append_message(b"output_script", payment.script_pubkey.as_bytes());
transcript.append_message(b"output_amount", payment.value.to_le_bytes());
}
let mut sigs = vec![];
for i in 0 .. tx.input.len() {
let mut transcript = transcript.clone();
transcript.append_message(b"signing_input", u32::try_from(i).unwrap().to_le_bytes());
let offset = keys.clone().offset(self.offsets[i]);
if address(Network::Bitcoin, offset.group_key())?.script_pubkey() !=
self.prevouts[i].script_pubkey
{
None?;
}
sigs.push(AlgorithmMachine::new(
Schnorr::new(transcript),
keys.clone().offset(self.offsets[i]),
));
}
Some(TransactionMachine { tx: self, sigs })
}
}
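A hedged construction sketch for the above, assuming `inputs` were previously scanned and `addr`/`change` are valid Taproot addresses (all values illustrative):
let signable = SignableTransaction::new(
  inputs,
  // Each payment must be at least DUST
  &[(addr, 10_000)],
  // Leftover value above DUST becomes a change output; None burns it as fee
  Some(change),
  // Becomes a 0-value OP_RETURN output; limited to 80 bytes
  Some(b"example data".to_vec()),
  // Fee per weight unit
  2,
)
.unwrap();
let fee = signable.needed_fee();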
/// A FROST signing machine to produce a Bitcoin transaction.
///
/// This does not support caching its preprocess. When sign is called, the message must be empty.
/// This will panic if it isn't.
pub struct TransactionMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl PreprocessMachine for TransactionMachine {
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type Signature = Transaction;
type SignMachine = TransactionSignMachine;
fn preprocess<R: RngCore + CryptoRng>(
mut self,
rng: &mut R,
) -> (Self::SignMachine, Self::Preprocess) {
let mut preprocesses = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.map(|sig| {
let (sig, preprocess) = sig.preprocess(rng);
preprocesses.push(preprocess);
sig
})
.collect();
(TransactionSignMachine { tx: self.tx, sigs }, preprocesses)
}
}
pub struct TransactionSignMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmSignMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignMachine<Transaction> for TransactionSignMachine {
type Params = ();
type Keys = ThresholdKeys<Secp256k1>;
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
type SignatureMachine = TransactionSignatureMachine;
fn cache(self) -> CachedPreprocess {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn from_cache(
_: (),
_: ThresholdKeys<Secp256k1>,
_: CachedPreprocess,
) -> Result<Self, FrostError> {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
self.sigs.iter().map(|sig| sig.read_preprocess(reader)).collect()
}
fn sign(
mut self,
commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
if !msg.is_empty() {
panic!("message was passed to the TransactionMachine when it generates its own");
}
let commitments = (0 .. self.sigs.len())
.map(|c| {
commitments
.iter()
.map(|(l, commitments)| (*l, commitments[c].clone()))
.collect::<HashMap<_, _>>()
})
.collect::<Vec<_>>();
let mut cache = SighashCache::new(&self.tx.tx);
// Sign committing to all inputs
let prevouts = Prevouts::All(&self.tx.prevouts);
let mut shares = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.enumerate()
.map(|(i, sig)| {
let (sig, share) = sig.sign(
commitments[i].clone(),
cache
.taproot_key_spend_signature_hash(i, &prevouts, TapSighashType::Default)
.unwrap()
.as_ref(),
)?;
shares.push(share);
Ok(sig)
})
.collect::<Result<_, _>>()?;
Ok((TransactionSignatureMachine { tx: self.tx.tx, sigs }, shares))
}
}
pub struct TransactionSignatureMachine {
tx: Transaction,
sigs: Vec<AlgorithmSignatureMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignatureMachine<Transaction> for TransactionSignatureMachine {
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
self.sigs.iter().map(|sig| sig.read_share(reader)).collect()
}
fn complete(
mut self,
mut shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
for (input, schnorr) in self.tx.input.iter_mut().zip(self.sigs.drain(..)) {
let sig = schnorr.complete(
shares.iter_mut().map(|(l, shares)| (*l, shares.remove(0))).collect::<HashMap<_, _>>(),
)?;
let mut witness = Witness::new();
witness.push(sig.as_ref());
input.witness = witness;
}
Ok(self.tx)
}
}
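These machines follow FROST's usual preprocess, sign, complete flow. A hedged sketch of one participant's side, assuming `signable` and `keys` from above, with `commitments` and `shares` gathered from the other participants:
let machine = signable
  .multisig(keys, RecommendedTranscript::new(b"illustrative domain separator"))
  .expect("keys don't correspond to these inputs");
let (machine, preprocess) = machine.preprocess(&mut OsRng);
// ... exchange preprocesses with the other participants ...
// The message must be empty, as the machine generates its own sighashes
let (machine, share) = machine.sign(commitments, &[]).unwrap();
// ... exchange signature shares ...
let tx = machine.complete(shares).unwrap();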

View File

@@ -0,0 +1,25 @@
use bitcoin_serai::{bitcoin::hashes::Hash as HashTrait, rpc::RpcError};
mod runner;
use runner::rpc;
async_sequential! {
async fn test_rpc() {
let rpc = rpc().await;
// Test get_latest_block_number and get_block_hash by round tripping them
let latest = rpc.get_latest_block_number().await.unwrap();
let hash = rpc.get_block_hash(latest).await.unwrap();
assert_eq!(rpc.get_block_number(&hash).await.unwrap(), latest);
// Test this actually is the latest block number by checking that asking for the next block errors
assert!(matches!(rpc.get_block_hash(latest + 1).await, Err(RpcError::RequestError(_))));
// Test get_block by checking the received block's hash matches the request
let block = rpc.get_block(&hash).await.unwrap();
// Hashes are serialized in reverse byte order, a quirk inherited from Satoshi's implementation
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
assert_eq!(hash, block_hash);
}
}

View File

@@ -0,0 +1,44 @@
use bitcoin_serai::rpc::Rpc;
use tokio::sync::Mutex;
lazy_static::lazy_static! {
pub static ref SEQUENTIAL: Mutex<()> = Mutex::new(());
}
#[allow(dead_code)]
pub(crate) async fn rpc() -> Rpc {
let rpc = Rpc::new("http://serai:seraidex@127.0.0.1:18443".to_string()).await.unwrap();
// If this node has already been interacted with, clear its chain
if rpc.get_latest_block_number().await.unwrap() > 0 {
rpc
.rpc_call(
"invalidateblock",
serde_json::json!([hex::encode(rpc.get_block_hash(1).await.unwrap())]),
)
.await
.unwrap()
}
rpc
}
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL.lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
drop(guard);
Err(err).unwrap()
}
}).await;
}
)*
}
}

View File

@@ -0,0 +1,348 @@
use std::collections::HashMap;
use rand_core::{RngCore, OsRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{
elliptic_curve::{
group::{ff::Field, Group},
sec1::{Tag, ToEncodedPoint},
},
Scalar, ProjectivePoint,
};
use frost::{
curve::Secp256k1,
Participant, ThresholdKeys,
tests::{THRESHOLD, key_gen, sign_without_caching},
};
use bitcoin_serai::{
bitcoin::{
hashes::Hash as HashTrait,
blockdata::opcodes::all::OP_RETURN,
script::{PushBytesBuf, Instruction, Instructions, Script},
OutPoint, TxOut, Transaction, Network, Address,
},
wallet::{tweak_keys, address, ReceivedOutput, Scanner, TransactionError, SignableTransaction},
rpc::Rpc,
};
mod runner;
use runner::rpc;
const FEE: u64 = 20;
fn is_even(key: ProjectivePoint) -> bool {
key.to_encoded_point(true).tag() == Tag::CompressedEvenY
}
async fn send_and_get_output(rpc: &Rpc, scanner: &Scanner, key: ProjectivePoint) -> ReceivedOutput {
let block_number = rpc.get_latest_block_number().await.unwrap() + 1;
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([1, address(Network::Regtest, key).unwrap()]),
)
.await
.unwrap();
// Mine until maturity
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([100, Address::p2sh(Script::empty(), Network::Regtest).unwrap()]),
)
.await
.unwrap();
let block = rpc.get_block(&rpc.get_block_hash(block_number).await.unwrap()).await.unwrap();
let mut outputs = scanner.scan_block(&block);
assert_eq!(outputs, scanner.scan_transaction(&block.txdata[0]));
assert_eq!(outputs.len(), 1);
assert_eq!(outputs[0].outpoint(), &OutPoint::new(block.txdata[0].txid(), 0));
assert_eq!(outputs[0].value(), block.txdata[0].output[0].value);
assert_eq!(
ReceivedOutput::read::<&[u8]>(&mut outputs[0].serialize().as_ref()).unwrap(),
outputs[0]
);
outputs.swap_remove(0)
}
fn keys() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, ProjectivePoint) {
let mut keys = key_gen(&mut OsRng);
for (_, keys) in keys.iter_mut() {
*keys = tweak_keys(keys);
}
let key = keys.values().next().unwrap().group_key();
(keys, key)
}
fn sign(
keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
tx: SignableTransaction,
) -> Transaction {
let mut machines = HashMap::new();
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx.clone()
.multisig(keys[&i].clone(), RecommendedTranscript::new(b"bitcoin-serai Test Transaction"))
.unwrap(),
);
}
sign_without_caching(&mut OsRng, machines, &[])
}
#[test]
fn test_tweak_keys() {
let mut even = false;
let mut odd = false;
// Generate keys until we get an even set and an odd set
while !(even && odd) {
let mut keys = key_gen(&mut OsRng).drain().next().unwrap().1;
if is_even(keys.group_key()) {
// Tweaking should do nothing
assert_eq!(tweak_keys(&keys).group_key(), keys.group_key());
even = true;
} else {
let tweaked = tweak_keys(&keys).group_key();
assert_ne!(tweaked, keys.group_key());
// Tweaking should produce an even key
assert!(is_even(tweaked));
// Verify it uses the smallest possible offset
while keys.group_key().to_encoded_point(true).tag() == Tag::CompressedOddY {
keys = keys.offset(Scalar::ONE);
}
assert_eq!(tweaked, keys.group_key());
odd = true;
}
}
}
async_sequential! {
async fn test_scanner() {
// Test Scanners are creatable for even keys.
for _ in 0 .. 128 {
let key = ProjectivePoint::random(&mut OsRng);
assert_eq!(Scanner::new(key).is_some(), is_even(key));
}
let mut key = ProjectivePoint::random(&mut OsRng);
while !is_even(key) {
key += ProjectivePoint::GENERATOR;
}
{
let mut scanner = Scanner::new(key).unwrap();
for _ in 0 .. 128 {
let mut offset = Scalar::random(&mut OsRng);
let registered = scanner.register_offset(offset).unwrap();
// Registering this again should return None
assert!(scanner.register_offset(offset).is_none());
// We can only register offsets resulting in even keys
// Make this even
while !is_even(key + (ProjectivePoint::GENERATOR * offset)) {
offset += Scalar::ONE;
}
// Ensure it matches the registered offset
assert_eq!(registered, offset);
// Assert registering this again fails
assert!(scanner.register_offset(offset).is_none());
}
}
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
assert_eq!(send_and_get_output(&rpc, &scanner, key).await.offset(), Scalar::ZERO);
// Register an offset and test receiving to it
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
assert_eq!(
send_and_get_output(&rpc, &scanner, key + (ProjectivePoint::GENERATOR * offset))
.await
.offset(),
offset
);
}
async fn test_transaction_errors() {
let (_, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let inputs = vec![output];
let addr = || address(Network::Regtest, key).unwrap();
let payments = vec![(addr(), 1000)];
assert!(SignableTransaction::new(inputs.clone(), &payments, None, None, FEE).is_ok());
assert_eq!(
SignableTransaction::new(vec![], &payments, None, None, FEE),
Err(TransactionError::NoInputs)
);
// No change
assert!(SignableTransaction::new(inputs.clone(), &[(addr(), 1000)], None, None, FEE).is_ok());
// Consolidation TX
assert!(SignableTransaction::new(inputs.clone(), &[], Some(addr()), None, FEE).is_ok());
// Data
assert!(SignableTransaction::new(inputs.clone(), &[], None, Some(vec![]), FEE).is_ok());
// No outputs
assert_eq!(
SignableTransaction::new(inputs.clone(), &[], None, None, FEE),
Err(TransactionError::NoOutputs),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), 1)], None, None, FEE),
Err(TransactionError::DustPayment),
);
assert!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 80]), FEE).is_ok()
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 81]), FEE),
Err(TransactionError::TooMuchData),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), inputs[0].value() * 2)], None, None, FEE),
Err(TransactionError::NotEnoughFunds),
);
assert_eq!(
SignableTransaction::new(inputs, &vec![(addr(), 1000); 10000], None, None, 0),
Err(TransactionError::TooLargeTransaction),
);
}
async fn test_send() {
let (keys, key) = keys();
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
// Get inputs, one not offset and one offset
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let offset_key = key + (ProjectivePoint::GENERATOR * offset);
let offset_output = send_and_get_output(&rpc, &scanner, offset_key).await;
assert_eq!(offset_output.offset(), offset);
// Declare payments, change, fee
let payments = [
(address(Network::Regtest, key).unwrap(), 1005),
(address(Network::Regtest, offset_key).unwrap(), 1007)
];
let change_offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let change_key = key + (ProjectivePoint::GENERATOR * change_offset);
let change_addr = address(Network::Regtest, change_key).unwrap();
// Create and sign the TX
let tx = SignableTransaction::new(
vec![output.clone(), offset_output.clone()],
&payments,
Some(change_addr.clone()),
None,
FEE
).unwrap();
let needed_fee = tx.needed_fee();
let tx = sign(&keys, tx);
assert_eq!(tx.output.len(), 3);
// Ensure we can scan it
let outputs = scanner.scan_transaction(&tx);
for (o, output) in outputs.iter().enumerate() {
assert_eq!(output.outpoint(), &OutPoint::new(tx.txid(), u32::try_from(o).unwrap()));
assert_eq!(&ReceivedOutput::read::<&[u8]>(&mut output.serialize().as_ref()).unwrap(), output);
}
assert_eq!(outputs[0].offset(), Scalar::ZERO);
assert_eq!(outputs[1].offset(), offset);
assert_eq!(outputs[2].offset(), change_offset);
// Make sure the payments were properly created
for ((output, scanned), payment) in tx.output.iter().zip(outputs.iter()).zip(payments.iter()) {
assert_eq!(output, &TxOut { script_pubkey: payment.0.script_pubkey(), value: payment.1 });
assert_eq!(scanned.value(), payment.1);
}
// Make sure the change is correct
assert_eq!(needed_fee, u64::try_from(tx.weight()).unwrap() * FEE);
let input_value = output.value() + offset_output.value();
let output_value = tx.output.iter().map(|output| output.value).sum::<u64>();
assert_eq!(input_value - output_value, needed_fee);
let change_amount =
input_value - payments.iter().map(|payment| payment.1).sum::<u64>() - needed_fee;
assert_eq!(
tx.output[2],
TxOut { script_pubkey: change_addr.script_pubkey(), value: change_amount },
);
// This also tests send_raw_transaction and get_transaction, which the RPC test can't
// effectively test
rpc.send_raw_transaction(&tx).await.unwrap();
let mut hash = *tx.txid().as_raw_hash().as_byte_array();
hash.reverse();
assert_eq!(tx, rpc.get_transaction(&hash).await.unwrap());
}
async fn test_data() {
let (keys, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let data_len = 60 + usize::try_from(OsRng.next_u64() % 21).unwrap();
let mut data = vec![0; data_len];
OsRng.fill_bytes(&mut data);
let tx = sign(
&keys,
SignableTransaction::new(
vec![output],
&[],
address(Network::Regtest, key),
Some(data.clone()),
FEE
).unwrap()
);
assert!(tx.output[0].script_pubkey.is_op_return());
let check = |mut instructions: Instructions| {
assert_eq!(instructions.next().unwrap().unwrap(), Instruction::Op(OP_RETURN));
assert_eq!(
instructions.next().unwrap().unwrap(),
Instruction::PushBytes(&PushBytesBuf::try_from(data.clone()).unwrap()),
);
assert!(instructions.next().is_none());
};
check(tx.output[0].script_pubkey.instructions());
check(tx.output[0].script_pubkey.instructions_minimal());
}
}

View File

@@ -13,25 +13,25 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
hex-literal = "0.3"
thiserror = "1"
rand_core = "0.6"
serde_json = "1.0"
serde = "1.0"
serde_json = "1"
serde = "1"
sha2 = "0.10"
sha3 = "0.10"
group = "0.12"
k256 = { version = "0.11", features = ["arithmetic", "keccak256", "ecdsa"] }
group = "0.13"
k256 = { version = "^0.13.1", default-features = false, features = ["std", "arithmetic", "bits", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
eyre = "0.6"
ethers = { version = "1", features = ["abigen", "ethers-solc"] }
ethers = { version = "2", default-features = false, features = ["abigen", "ethers-solc"] }
[build-dependencies]
ethers-solc = "1"
ethers-solc = "2"
[dev-dependencies]
tokio = { version = "1", features = ["macros"] }

View File

@@ -2,7 +2,9 @@ use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint, DecompressPoint},
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint, sec1::ToEncodedPoint,
},
AffinePoint, ProjectivePoint, Scalar, U256,
};
@@ -13,7 +15,7 @@ pub fn keccak256(data: &[u8]) -> [u8; 32] {
}
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::from_uint_reduced(U256::from_be_slice(&keccak256(data)))
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub fn address(point: &ProjectivePoint) -> [u8; 20] {
@@ -56,7 +58,7 @@ impl Hram<Secp256k1> for EthereumHram {
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.append(&mut m.to_vec());
Scalar::from_uint_reduced(U256::from_be_slice(&keccak256(&data)))
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
@@ -92,7 +94,7 @@ pub fn process_signature_for_contract(
) -> ProcessedSignature {
let encoded_pk = A.to_encoded_point(true);
let px = &encoded_pk.as_ref()[1 .. 33];
let px_scalar = Scalar::from_uint_reduced(U256::from_be_slice(px));
let px_scalar = Scalar::reduce(U256::from_be_slice(px));
let e = EthereumHram::hram(R, A, &[chain_id.to_be_byte_array().as_slice(), &m].concat());
ProcessedSignature {
s,

View File

@@ -2,7 +2,7 @@ use std::{convert::TryFrom, sync::Arc, time::Duration};
use rand_core::OsRng;
use k256::{elliptic_curve::bigint::ArrayEncoding, U256};
use ::k256::{elliptic_curve::bigint::ArrayEncoding, U256};
use ethers::{
prelude::*,
@@ -11,7 +11,8 @@ use ethers::{
use frost::{
curve::Secp256k1,
algorithm::Schnorr as Algo,
Participant,
algorithm::IetfSchnorr,
tests::{key_gen, algorithm_machines, sign},
};
@@ -44,14 +45,14 @@ async fn test_ecrecover_hack() {
let chain_id = U256::from(chain_id);
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&1].group_key();
let group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = Algo::<Secp256k1, crypto::EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, crypto::EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),

View File

@@ -1,51 +1,57 @@
use ethereum_serai::crypto::*;
use frost::curve::Secp256k1;
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint},
ProjectivePoint, Scalar, U256,
};
use frost::{curve::Secp256k1, Participant};
use ethereum_serai::crypto::*;
#[test]
fn test_ecrecover() {
use k256::ecdsa::{
recoverable::Signature,
signature::{Signer, Verifier},
SigningKey, VerifyingKey,
};
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use k256::ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey};
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
const MESSAGE: &[u8] = b"Hello, World!";
let sig: Signature = private.sign(MESSAGE);
public.verify(MESSAGE, &sig).unwrap();
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
assert_eq!(
ecrecover(hash_to_scalar(MESSAGE), sig.as_ref()[64], *sig.r(), *sig.s()).unwrap(),
address(&ProjectivePoint::from(public))
ecrecover(hash_to_scalar(MESSAGE), recovery_id.unwrap().is_y_odd().into(), *sig.r(), *sig.s())
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
#[test]
fn test_signing() {
use frost::{
algorithm::Schnorr,
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let _group_key = keys[&1].group_key();
let _group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = Schnorr::<Secp256k1, EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig = sign(
&mut OsRng,
algo,
keys.clone(),
algorithm_machines(&mut OsRng, Schnorr::<Secp256k1, EthereumHram>::new(), &keys),
algorithm_machines(&mut OsRng, IetfSchnorr::<Secp256k1, EthereumHram>::ietf(), &keys),
MESSAGE,
);
}
@@ -53,16 +59,16 @@ fn test_signing() {
#[test]
fn test_ecrecover_hack() {
use frost::{
algorithm::Schnorr,
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&1].group_key();
let group_key = keys[&Participant::new(1).unwrap()].group_key();
let group_key_encoded = group_key.to_encoded_point(true);
let group_key_compressed = group_key_encoded.as_ref();
let group_key_x = Scalar::from_uint_reduced(U256::from_be_slice(&group_key_compressed[1 .. 33]));
let group_key_x = Scalar::reduce(U256::from_be_slice(&group_key_compressed[1 .. 33]));
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
@@ -70,7 +76,7 @@ fn test_ecrecover_hack() {
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = Schnorr::<Secp256k1, EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),

View File

@@ -1,6 +1,6 @@
[package]
name = "monero-serai"
version = "0.1.2-alpha"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
@@ -12,52 +12,96 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
hex-literal = "0.3"
lazy_static = "1"
thiserror = "1"
std-shims = { path = "../../common/std-shims", version = "0.1", default-features = false }
rand_core = "0.6"
rand_chacha = { version = "0.3", optional = true }
rand = "0.8"
rand_distr = "0.4"
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", optional = true }
zeroize = { version = "1.5", features = ["zeroize_derive"] }
subtle = "2.4"
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
sha3 = "0.10"
blake2 = { version = "0.10", optional = true }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
curve25519-dalek = { version = "3", features = ["std"] }
crc = { version = "3", default-features = false }
sha3 = { version = "0.10", default-features = false }
group = { version = "0.12" }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.1" }
multiexp = { path = "../../crypto/multiexp", version = "0.2", features = ["batch"] }
curve25519-dalek = { version = "^3.2", default-features = false }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.2", features = ["recommended"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.5", features = ["ed25519"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.2", features = ["serialize"], optional = true }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.3", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.3", default-features = false, features = ["batch"] }
monero-generators = { path = "generators", version = "0.1" }
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.3", features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["ed25519"], optional = true }
hex = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
monero-generators = { path = "generators", version = "0.3", default-features = false }
base58-monero = "1"
monero-epee-bin-serde = "1.0"
futures = { version = "0.3", default-features = false, features = ["alloc"], optional = true }
digest_auth = "0.3"
reqwest = { version = "0.11", features = ["json"] }
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "1", git = "https://github.com/monero-rs/base58-monero", rev = "5045e8d2b817b3b6c1190661f504e879bc769c29", default-features = false, features = ["check"] }
# Used for the provided RPC
digest_auth = { version = "0.3", optional = true }
reqwest = { version = "0.11", features = ["json"], optional = true }
# Used for the binaries
tokio = { version = "1", features = ["full"], optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.1" }
monero-generators = { path = "generators", version = "0.1" }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.3", default-features = false }
monero-generators = { path = "generators", version = "0.3", default-features = false }
[dev-dependencies]
tokio = { version = "1", features = ["full"] }
monero-rpc = "0.3"
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.5", features = ["ed25519", "tests"] }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.7", features = ["tests"] }
[features]
multisig = ["rand_chacha", "blake2", "transcript", "frost", "dleq"]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand_chacha/std",
"rand/std",
"rand_distr/std",
"sha3/std",
"curve25519-dalek/std",
"multiexp/std",
"monero-generators/std",
"futures/std",
"hex/std",
"serde/std",
"serde_json/std",
]
http_rpc = ["digest_auth", "reqwest"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio"]
experimental = []
default = ["std", "http_rpc"]

View File

@@ -4,16 +4,46 @@ A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
monero-serai contains safety features, such as first-class acknowledgement of
the burning bug, yet also a high level API around creating transactions.
monero-serai also offers a FROST-based multisig, which is orders of magnitude
more performant than Monero's.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet will not deprive functionality from other users, and may potentially leave
Serai's umbrella at some point.
yet will not deprive functionality from other users.
Various legacy transaction formats are not currently implemented, yet
monero-serai is still increasing its support for various transaction types.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There aren't active development efforts around
them, however.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable if a targeted attack occurs to detect if the
receiving wallet is monero-serai or wallet2. It also should be generally safe
for usage with remote nodes.
It won't hide that it's monero-serai from remote nodes, however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope for
monero-serai.

View File

@@ -28,10 +28,10 @@ fn serialize(generators_string: &mut String, points: &[EdwardsPoint]) {
fn generators(prefix: &'static str, path: &str) {
let generators = bulletproofs_generators(prefix.as_bytes());
#[allow(non_snake_case)]
let mut G_str = "".to_string();
let mut G_str = String::new();
serialize(&mut G_str, &generators.G);
#[allow(non_snake_case)]
let mut H_str = "".to_string();
let mut H_str = String::new();
serialize(&mut H_str, &generators.H);
let path = Path::new(&env::var("OUT_DIR").unwrap()).join(path);
@@ -41,15 +41,16 @@ fn generators(prefix: &'static str, path: &str) {
.write_all(
format!(
"
lazy_static! {{
pub static ref GENERATORS: Generators = Generators {{
pub static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: [
{G_str}
],
H: [
{H_str}
],
}};
}})
}}
",
)

View File

@@ -1,6 +1,6 @@
[package]
name = "monero-generators"
version = "0.1.1"
version = "0.3.0"
description = "Monero's hash_to_point and generators"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero/generators"
@@ -12,13 +12,17 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
lazy_static = "1"
std-shims = { path = "../../../common/std-shims", version = "0.1", default-features = false }
subtle = "2.4"
subtle = { version = "^2.4", default-features = false }
sha3 = "0.10"
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "3", features = ["std"] }
curve25519-dalek = { version = "3", default-features = false }
group = "0.12"
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.1.4" }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.3" }
[features]
std = ["std-shims/std"]
default = ["std"]

View File

@@ -3,3 +3,5 @@
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no_std when the `alloc` feature is enabled.

View File

@@ -3,17 +3,18 @@ use subtle::ConditionallySelectable;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use group::ff::{Field, PrimeField};
use dalek_ff_group::field::FieldElement;
use dalek_ff_group::FieldElement;
use crate::hash;
/// Monero's hash to point function, as named `ge_fromfe_frombytes_vartime`.
#[allow(clippy::many_single_char_names)]
pub fn hash_to_point(bytes: [u8; 32]) -> EdwardsPoint {
#[allow(non_snake_case)]
#[allow(non_snake_case, clippy::unreadable_literal)]
let A = FieldElement::from(486662u64);
let v = FieldElement::from_square(hash(&bytes)).double();
let w = v + FieldElement::one();
let w = v + FieldElement::ONE;
let x = w.square() + (-A.square() * v);
// This isn't the complete X, yet its initial value

View File

@@ -1,17 +1,17 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
use lazy_static::lazy_static;
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::sync::OnceLock;
use sha3::{Digest, Keccak256};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_POINT,
edwards::{EdwardsPoint as DalekPoint, CompressedEdwardsY},
};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint, CompressedEdwardsY};
use group::Group;
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
@@ -24,13 +24,29 @@ fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
lazy_static! {
/// Monero alternate generator `H`, used for amounts in Pedersen commitments.
pub static ref H: DalekPoint =
CompressedEdwardsY(hash(&ED25519_BASEPOINT_POINT.compress().to_bytes()))
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
CompressedEdwardsY(hash(&EdwardsPoint::generator().to_bytes()))
.decompress()
.unwrap()
.mul_by_cofactor();
.mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 0 .. 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
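A hedged sanity check of the table above: each entry doubles the previous one, so entry i holds H multiplied by 2**i.
// Illustrative assertions, not from the source
assert_eq!(H_pow_2()[0], H());
assert_eq!(H_pow_2()[1], H() + H());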
const MAX_M: usize = 16;
@@ -51,7 +67,7 @@ pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
for i in 0 .. MAX_MN {
let i = 2 * i;
let mut even = H.compress().to_bytes().to_vec();
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();

View File

@@ -1,6 +1,8 @@
use std::io::{self, Write};
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
#[allow(clippy::trivially_copy_pass_by_ref)] // &u64 is needed for API consistency
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {

View File

@@ -0,0 +1,155 @@
use std::sync::Arc;
use serde::Deserialize;
use serde_json::json;
use monero_serai::{
transaction::Transaction,
block::Block,
rpc::{Rpc, HttpRpc},
};
use tokio::task::JoinHandle;
async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = rpc
.get_block_hash(block_i)
.await
.unwrap_or_else(|_| panic!("couldn't get block {block_i}'s hash"));
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = rpc
.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) })))
.await
.expect("couldn't get block {block} via block.hash()");
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice())
.unwrap_or_else(|_| panic!("couldn't deserialize block {block_i}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
.expect("couldn't call get_transactions");
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs.into_iter()) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
}
}
println!("Deserialized, hashed, and reserialized {block_i} with {} TXs", txs_len);
}
#[tokio::main]
async fn main() {
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| {
HttpRpc::new(url.clone())
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone());
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone())));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}

View File

@@ -1,6 +1,19 @@
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use crate::{serialize::*, transaction::Transaction};
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
@@ -26,8 +39,8 @@ impl BlockHeader {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<BlockHeader> {
Ok(BlockHeader {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
@@ -45,6 +58,13 @@ pub struct Block {
}
impl Block {
pub fn number(&self) -> usize {
match self.miner_tx.prefix.inputs.get(0) {
Some(Input::Gen(number)) => (*number).try_into().unwrap(),
_ => panic!("invalid block, miner TX didn't have an Input::Gen"),
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
@@ -55,14 +75,39 @@ impl Block {
Ok(())
}
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
let mut out = Vec::with_capacity(8 + blob.len());
write_varint(&u64::try_from(blob.len()).unwrap(), &mut out).unwrap();
out.append(&mut blob);
out
}
pub fn hash(&self) -> [u8; 32] {
let hash = hash(&self.serialize_hashable());
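// Monero's historical tree hash was buggy for certain transaction counts;
// block 202612 is the block affected, so the correctly-calculated hash is
// mapped to the hash the network actually accepted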
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Block> {
Ok(Block {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
header: BlockHeader::read(r)?,
miner_tx: Transaction::read(r)?,
txs: (0 .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,

View File

@@ -1,42 +1,31 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
//! A modern Monero transaction library intended for usage in wallets. It prides
//! itself on accuracy, correctness, and removing common pitfalls developers may
//! face.
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
//! monero-serai contains safety features, such as first-class acknowledgement of
//! the burning bug, yet also a high level API around creating transactions.
//! monero-serai also offers a FROST-based multisig, which is orders of magnitude
//! more performant than Monero's.
use std_shims::{sync::OnceLock, io};
//! monero-serai was written for Serai, a decentralized exchange aiming to support
//! Monero. Despite this, monero-serai is intended to be a widely usable library,
//! accurate to Monero. monero-serai guarantees the functionality needed for Serai,
//! yet will not deprive functionality from other users, and may potentially leave
//! Serai's umbrella at some point.
//! Various legacy transaction formats are not currently implemented, yet
//! monero-serai is still increasing its support for various transaction types.
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
edwards::{EdwardsPoint, EdwardsBasepointTable},
};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub use monero_generators::H;
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
@@ -51,42 +40,95 @@ pub mod wallet;
#[cfg(test)]
mod tests;
/// Monero protocol version. v15 is omitted as v15 was simply v14 and v16 being active at the same
/// time, with regards to the transactions supported. Accordingly, v16 should be used during v15.
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
/// Monero protocol version.
///
/// v15 is omitted as v15 was simply v14 and v16 being active at the same time, with regards to the
/// transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
Unsupported(usize),
v14,
v16,
Custom { ring_len: usize, bp_plus: bool },
Custom { ring_len: usize, bp_plus: bool, optimal_rct_type: RctType },
}
impl Protocol {
/// Amount of ring members under this protocol version.
pub fn ring_len(&self) -> usize {
pub const fn ring_len(&self) -> usize {
match self {
Protocol::Unsupported(_) => panic!("Unsupported protocol version"),
Protocol::v14 => 11,
Protocol::v16 => 16,
Protocol::Custom { ring_len, .. } => *ring_len,
Self::v14 => 11,
Self::v16 => 16,
Self::Custom { ring_len, .. } => *ring_len,
}
}
/// Whether or not the specified version uses Bulletproofs or Bulletproofs+.
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub fn bp_plus(&self) -> bool {
pub const fn bp_plus(&self) -> bool {
match self {
Protocol::Unsupported(_) => panic!("Unsupported protocol version"),
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { bp_plus, .. } => *bp_plus,
Self::v14 => false,
Self::v16 => true,
Self::Custom { bp_plus, .. } => *bp_plus,
}
}
}
lazy_static! {
static ref H_TABLE: EdwardsBasepointTable = EdwardsBasepointTable::create(&H);
// TODO: Make this an Option when we support pre-RCT protocols
pub const fn optimal_rct_type(&self) -> RctType {
match self {
Self::v14 => RctType::Clsag,
Self::v16 => RctType::BulletproofsPlus,
Self::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::v14 => w.write_all(&[0, 14]),
Self::v16 => w.write_all(&[0, 16]),
Self::Custom { ring_len, bp_plus, optimal_rct_type } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Self::v14,
16 => Self::v16,
_ => Err(io::Error::new(io::ErrorKind::Other, "unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Self::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid RctType serialization"))?,
},
_ => {
Err(io::Error::new(io::ErrorKind::Other, "unrecognized custom protocol serialization"))?
}
},
_ => Err(io::Error::new(io::ErrorKind::Other, "unrecognized protocol serialization"))?,
})
}
}
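A hedged round-trip sketch of the custom serialization above (crate-internal, given write and read are pub(crate); the values are illustrative):
let custom =
  Protocol::Custom { ring_len: 11, bp_plus: false, optimal_rct_type: RctType::Clsag };
let mut buf = vec![];
custom.write(&mut buf).unwrap();
assert_eq!(Protocol::read(&mut buf.as_slice()).unwrap(), custom);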
/// Transparent structure representing a Pedersen commitment's contents.
@@ -98,18 +140,18 @@ pub struct Commitment {
}
impl Commitment {
/// The zero commitment, defined as a mask of 1 (as to not be the identity) and a 0 amount.
pub fn zero() -> Commitment {
Commitment { mask: Scalar::one(), amount: 0 }
/// A commitment to zero, defined with a mask of 1 (as to not be the identity).
pub fn zero() -> Self {
Self { mask: Scalar::one(), amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Commitment {
Commitment { mask, amount }
pub fn new(mask: Scalar, amount: u64) -> Self {
Self { mask, amount }
}
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * &ED25519_BASEPOINT_TABLE) + (&Scalar::from(self.amount) * &*H_TABLE)
(&self.mask * &ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
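The calculation is the textbook Pedersen form C = x·G + a·H. As a hedged illustration:
use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;
// zero() uses a mask of 1 and an amount of 0, so C = 1*G + 0*H = G
assert_eq!(Commitment::zero().calculate(), ED25519_BASEPOINT_POINT);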

View File

@@ -0,0 +1,55 @@
use std_shims::vec::Vec;
use crate::hash;
pub(crate) fn merkle_root(root: [u8; 32], leafs: &[[u8; 32]]) -> [u8; 32] {
match leafs.len() {
0 => root,
1 => hash(&[root, leafs[0]].concat()),
_ => {
let mut hashes = Vec::with_capacity(1 + leafs.len());
hashes.push(root);
hashes.extend(leafs);
// Monero preprocesses this so the length is a power of 2
let mut high_pow_2 = 4; // 4 is the lowest value this can be
while high_pow_2 < hashes.len() {
high_pow_2 *= 2;
}
let low_pow_2 = high_pow_2 / 2;
// Merge right-most hashes until we're at the low_pow_2
{
let overage = hashes.len() - low_pow_2;
let mut rightmost = hashes.drain((low_pow_2 - overage) ..);
// This is guaranteed since we drained from both beneath and above low_pow_2,
// taking twice as many elements as the overage
debug_assert_eq!(rightmost.len() % 2, 0);
let mut paired_hashes = Vec::with_capacity(overage);
while let Some(left) = rightmost.next() {
let right = rightmost.next().unwrap();
paired_hashes.push(hash(&[left.as_ref(), &right].concat()));
}
drop(rightmost);
hashes.extend(paired_hashes);
assert_eq!(hashes.len(), low_pow_2);
}
// Do a traditional pairing off
let mut new_hashes = Vec::with_capacity(hashes.len() / 2);
while hashes.len() > 1 {
let mut i = 0;
while i < hashes.len() {
new_hashes.push(hash(&[hashes[i], hashes[i + 1]].concat()));
i += 2;
}
hashes = new_hashes;
new_hashes = Vec::with_capacity(hashes.len() / 2);
}
hashes[0]
}
}
}
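A worked illustration of the padding arithmetic above, using hypothetical counts: with a root plus 5 leaves, hashes.len() is 6, high_pow_2 grows to 8, low_pow_2 is 4, and the overage of 2 means the rightmost 4 hashes pair into 2, leaving exactly low_pow_2 hashes.
#[test]
fn merkle_padding_counts() {
  // Illustrative re-derivation of the index math, without any hashing
  let hashes = 1 + 5; // root + five leaves
  let mut high_pow_2 = 4;
  while high_pow_2 < hashes {
    high_pow_2 *= 2;
  }
  assert_eq!(high_pow_2, 8);
  let low_pow_2 = high_pow_2 / 2;
  let overage = hashes - low_pow_2;
  // Draining from (low_pow_2 - overage) takes 2 * overage elements, which
  // pair into overage hashes, restoring a power-of-two length
  assert_eq!((low_pow_2 - overage) + overage, low_pow_2);
}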

View File

@@ -0,0 +1,102 @@
use core::fmt::Debug;
use std_shims::io::{self, Read, Write};
use curve25519_dalek::edwards::EdwardsPoint;
#[cfg(feature = "experimental")]
use curve25519_dalek::{traits::Identity, scalar::Scalar};
#[cfg(feature = "experimental")]
use monero_generators::H_pow_2;
#[cfg(feature = "experimental")]
use crate::hash_to_scalar;
use crate::serialize::*;
/// 64 Borromean ring signatures.
///
/// This type keeps the data as raw bytes as Monero has some transactions with unreduced scalars in
/// this field. While we could use `from_bytes_mod_order`, we'd then not be able to encode this
/// back into its original form.
///
/// Those scalars also have a custom reduction algorithm...
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanSignatures {
pub s0: [[u8; 32]; 64],
pub s1: [[u8; 32]; 64],
pub ee: [u8; 32],
}
impl BorromeanSignatures {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { s0: read_array(read_bytes, r)?, s1: read_array(read_bytes, r)?, ee: read_bytes(r)? })
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for s0 in &self.s0 {
w.write_all(s0)?;
}
for s1 in &self.s1 {
w.write_all(s1)?;
}
w.write_all(&self.ee)
}
#[cfg(feature = "experimental")]
fn verify(&self, keys_a: &[EdwardsPoint], keys_b: &[EdwardsPoint]) -> bool {
let mut transcript = [0; 2048];
for i in 0 .. 64 {
// TODO: These aren't the correct reduction
// TODO: Can either of these be tightened?
#[allow(non_snake_case)]
let LL = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&Scalar::from_bytes_mod_order(self.ee),
&keys_a[i],
&Scalar::from_bytes_mod_order(self.s0[i]),
);
#[allow(non_snake_case)]
let LV = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&hash_to_scalar(LL.compress().as_bytes()),
&keys_b[i],
&Scalar::from_bytes_mod_order(self.s1[i]),
);
transcript[(i * 32) .. ((i + 1) * 32)].copy_from_slice(LV.compress().as_bytes());
}
// TODO: This isn't the correct reduction
// TODO: Can this be tightened to from_canonical_bytes?
hash_to_scalar(&transcript) == Scalar::from_bytes_mod_order(self.ee)
}
}
/// A range proof premised on Borromean ring signatures.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanRange {
pub sigs: BorromeanSignatures,
pub bit_commitments: [EdwardsPoint; 64],
}
impl BorromeanRange {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { sigs: BorromeanSignatures::read(r)?, bit_commitments: read_array(read_point, r)? })
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.sigs.write(w)?;
write_raw_vec(write_point, &self.bit_commitments, w)
}
#[cfg(feature = "experimental")]
#[must_use]
pub fn verify(&self, commitment: &EdwardsPoint) -> bool {
if &self.bit_commitments.iter().sum::<EdwardsPoint>() != commitment {
return false;
}
#[allow(non_snake_case)]
let H_pow_2 = H_pow_2();
let mut commitments_sub_one = [EdwardsPoint::identity(); 64];
for i in 0 .. 64 {
commitments_sub_one[i] = self.bit_commitments[i] - H_pow_2[i];
}
self.sigs.verify(&self.bit_commitments, &commitments_sub_one)
}
}
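For context on why subtracting H_pow_2 works: each bit commitment is C_i = r_i*G + b_i*(2^i)*H, so C_i - (2^i)*H has a known discrete log exactly when the bit is set, and the ring signature proves knowledge of the discrete log of C_i or of C_i - (2^i)*H, forcing b_i into {0, 1}. A sketch of building such commitments (illustrative, assuming monero_generators::H_pow_2 as imported above):

#[allow(non_snake_case)]
fn bit_commitments(amount: u64, masks: &[Scalar; 64]) -> Vec<EdwardsPoint> {
  let H_pow_2 = H_pow_2();
  (0 .. 64)
    .map(|i| {
      // C_i = r_i * G, plus 2^i * H if bit i of the amount is set
      let mut C_i = &masks[i] * &curve25519_dalek::constants::ED25519_BASEPOINT_TABLE;
      if ((amount >> i) & 1) == 1 {
        C_i += H_pow_2[i];
      }
      C_i
    })
    .collect()
}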

View File

@@ -1,7 +1,5 @@
// Required to be for this entire file, which isn't an issue, as it wouldn't bind to the static
#![allow(non_upper_case_globals)]
use std_shims::{vec::Vec, sync::OnceLock};
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
@@ -15,13 +13,17 @@ use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
// Bring things into ff/group
lazy_static! {
pub(crate) static ref INV_EIGHT: Scalar = Scalar::from(8u8).invert().unwrap();
pub(crate) static ref H: EdwardsPoint = EdwardsPoint(*DALEK_H);
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
@@ -34,7 +36,7 @@ pub(crate) const LOG_N: usize = 6; // 2 << 6 == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * *INV_EIGHT
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
@@ -48,7 +50,7 @@ pub(crate) fn vector_exponent(
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().cloned().flatten().collect::<Vec<_>>().as_ref()]
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
@@ -76,12 +78,10 @@ pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, Scalar
for j in 0 .. M {
for i in (0 .. N).rev() {
let mut bit = Choice::from(0);
if j < sv.len() {
bit = Choice::from((sv[j][i / 8] >> (i % 8)) & 1);
}
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::zero(), &Scalar::one(), bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::one(), &Scalar::zero(), bit);
let bit =
if j < sv.len() { Choice::from((sv[j][i / 8] >> (i % 8)) & 1) } else { Choice::from(0) };
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
@@ -91,7 +91,7 @@ pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, Scalar
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * *INV_EIGHT).collect::<Vec<_>>();
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
@@ -102,7 +102,7 @@ pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * *INV_EIGHT)
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
@@ -116,20 +116,21 @@ pub(crate) fn LR_statements(
let mut res = a
.0
.iter()
.cloned()
.zip(G_i.iter().cloned())
.chain(b.0.iter().cloned().zip(H_i.iter().cloned()))
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
lazy_static! {
pub(crate) static ref TWO_N: ScalarVector = ScalarVector::powers(Scalar::from(2u8), N);
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
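This cell-plus-getter shape is the lazy_static replacement used throughout the commit; std_shims' OnceLock works without std, so the cached value stays no-std compatible. A generic sketch (names illustrative):

use std_shims::sync::OnceLock;

// Computed on first access, cached thereafter; replaces
// `lazy_static! { static ref EIGHT_INV: Scalar = ...; }`
static EIGHT_INV_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn EIGHT_INV() -> Scalar {
  *EIGHT_INV_CELL.get_or_init(|| Scalar::from(8u8).invert().unwrap())
}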
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::zero(); 1 << w.len()];
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {

View File

@@ -1,6 +1,9 @@
#![allow(non_snake_case)]
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
@@ -37,6 +40,7 @@ impl Bulletproofs {
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
let fields = if plus { 6 } else { 9 };
// TODO: Shouldn't this use u32/u64?
#[allow(non_snake_case)]
let mut LR_len = usize::try_from(usize::BITS - (outputs - 1).leading_zeros()).unwrap();
let padded_outputs = 1 << LR_len;
@@ -58,14 +62,14 @@ impl Bulletproofs {
rng: &mut R,
outputs: &[Commitment],
plus: bool,
) -> Result<Bulletproofs, TransactionError> {
) -> Result<Self, TransactionError> {
if outputs.len() > MAX_OUTPUTS {
return Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Bulletproofs::Original(OriginalStruct::prove(rng, outputs))
Self::Original(OriginalStruct::prove(rng, outputs))
} else {
Bulletproofs::Plus(PlusStruct::prove(rng, outputs))
Self::Plus(PlusStruct::prove(rng, outputs))
})
}
@@ -73,8 +77,8 @@ impl Bulletproofs {
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Bulletproofs::Original(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => bp.verify(rng, commitments),
Self::Original(bp) => bp.verify(rng, commitments),
Self::Plus(bp) => bp.verify(rng, commitments),
}
}
@@ -90,8 +94,8 @@ impl Bulletproofs {
commitments: &[EdwardsPoint],
) -> bool {
match self {
Bulletproofs::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => bp.batch_verify(rng, verifier, id, commitments),
Self::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Self::Plus(bp) => bp.batch_verify(rng, verifier, id, commitments),
}
}
@@ -101,7 +105,7 @@ impl Bulletproofs {
specific_write_vec: F,
) -> io::Result<()> {
match self {
Bulletproofs::Original(bp) => {
Self::Original(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.S, w)?;
write_point(&bp.T1, w)?;
@@ -115,7 +119,7 @@ impl Bulletproofs {
write_scalar(&bp.t, w)
}
Bulletproofs::Plus(bp) => {
Self::Plus(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.A1, w)?;
write_point(&bp.B, w)?;
@@ -143,8 +147,8 @@ impl Bulletproofs {
}
/// Read Bulletproofs.
pub fn read<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Original(OriginalStruct {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self::Original(OriginalStruct {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
@@ -160,8 +164,8 @@ impl Bulletproofs {
}
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Plus(PlusStruct {
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self::Plus(PlusStruct {
A: read_point(r)?,
A1: read_point(r)?,
B: read_point(r)?,

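A round-trip sketch of the proving API fixed above (crate-internal, with an illustrative RNG and amounts):

use rand_core::OsRng;

let outputs =
  vec![Commitment::new(random_scalar(&mut OsRng), 5), Commitment::new(random_scalar(&mut OsRng), 10)];
// One aggregate proof covers both outputs; plus = true selects Bulletproofs+
let bp = Bulletproofs::prove(&mut OsRng, &outputs, true).unwrap();
let commitments = outputs.iter().map(Commitment::calculate).collect::<Vec<_>>();
assert!(bp.verify(&mut OsRng, &commitments));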
View File

@@ -1,4 +1,5 @@
use lazy_static::lazy_static;
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
@@ -14,9 +15,9 @@ use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
lazy_static! {
static ref ONE_N: ScalarVector = ScalarVector(vec![Scalar::one(); N]);
static ref IP12: Scalar = inner_product(&ONE_N, &TWO_N);
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| inner_product(&ScalarVector(vec![Scalar::ONE; N]), TWO_N()))
}
#[derive(Clone, PartialEq, Eq, Debug)]
@@ -35,10 +36,8 @@ pub struct OriginalStruct {
}
impl OriginalStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> OriginalStruct {
#[allow(clippy::many_single_char_names)]
pub(crate) fn prove<R: RngCore + CryptoRng>(rng: &mut R, commitments: &[Commitment]) -> Self {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
@@ -48,8 +47,9 @@ impl OriginalStruct {
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let (mut alpha, A) = alpha_rho(&mut *rng, &GENERATORS, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, &GENERATORS, &sL, &sR);
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
@@ -62,7 +62,7 @@ impl OriginalStruct {
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N[i]);
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
@@ -77,8 +77,8 @@ impl OriginalStruct {
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, *H), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, *H), (tau2, EdwardsPoint::generator())]);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
@@ -112,10 +112,10 @@ impl OriginalStruct {
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = GENERATORS.G[.. a.len()].to_vec();
let mut H_proof = GENERATORS.H[.. a.len()].to_vec();
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = *H * x_ip;
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
@@ -132,8 +132,8 @@ impl OriginalStruct {
let L_i = prove_multiexp(&LR_statements(&aL, G_R, &bR, H_L, cL, U));
let R_i = prove_multiexp(&LR_statements(&aR, G_L, &bL, H_R, cR, U));
L.push(L_i);
R.push(R_i);
L.push(*L_i);
R.push(*R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
@@ -147,15 +147,15 @@ impl OriginalStruct {
}
}
let res = OriginalStruct {
let res = Self {
A: *A,
S: *S,
T1: *T1,
T2: *T2,
taux: *taux,
mu: *mu,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
L,
R,
a: *a[0],
b: *b[0],
t: *t,
@@ -164,6 +164,7 @@ impl OriginalStruct {
res
}
#[allow(clippy::many_single_char_names)]
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
@@ -188,7 +189,7 @@ impl OriginalStruct {
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().cloned());
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
@@ -221,7 +222,7 @@ impl OriginalStruct {
let A = normalize(&self.A);
let S = normalize(&self.S);
let commitments = commitments.iter().map(|c| c.mul_by_cofactor()).collect::<Vec<_>>();
let commitments = commitments.iter().map(EdwardsPoint::mul_by_cofactor).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(4 + commitments.len());
@@ -230,10 +231,10 @@ impl OriginalStruct {
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * *IP12;
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, *H));
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
@@ -247,10 +248,10 @@ impl OriginalStruct {
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, *H));
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::one(), A));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
@@ -260,13 +261,14 @@ impl OriginalStruct {
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, GENERATORS.G[i]));
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, GENERATORS.H[i]));
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}

View File

@@ -1,4 +1,5 @@
use lazy_static::lazy_static;
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
@@ -17,24 +18,26 @@ use crate::{
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
lazy_static! {
static ref TRANSCRIPT: [u8; 32] =
EdwardsPoint(raw_hash_to_point(hash(b"bulletproof_plus_transcript"))).compress().to_bytes();
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
*TRANSCRIPT_CELL.get_or_init(|| {
EdwardsPoint(raw_hash_to_point(hash(b"bulletproof_plus_transcript"))).compress().to_bytes()
})
}
// TRANSCRIPT isn't a Scalar, so we need this alternative for the first hash
fn hash_plus<C: IntoIterator<Item = DalekPoint>>(commitments: C) -> (Scalar, Vec<EdwardsPoint>) {
let (cache, commitments) = hash_commitments(commitments);
(hash_to_scalar(&[&*TRANSCRIPT as &[u8], &cache.to_bytes()].concat()), commitments)
(hash_to_scalar(&[TRANSCRIPT().as_ref(), &cache.to_bytes()].concat()), commitments)
}
// d[j*N+i] = z**(2*(j+1)) * 2**i
fn d(z: Scalar, M: usize, MN: usize) -> (ScalarVector, ScalarVector) {
let zpow = ScalarVector::even_powers(z, 2 * M);
let mut d = vec![Scalar::zero(); MN];
let mut d = vec![Scalar::ZERO; MN];
for j in 0 .. M {
for i in 0 .. N {
d[(j * N) + i] = zpow[j] * TWO_N[i];
d[(j * N) + i] = zpow[j] * TWO_N()[i];
}
}
(zpow, ScalarVector(d))
@@ -53,16 +56,16 @@ pub struct PlusStruct {
}
impl PlusStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> PlusStruct {
#[allow(clippy::many_single_char_names)]
pub(crate) fn prove<R: RngCore + CryptoRng>(rng: &mut R, commitments: &[Commitment]) -> Self {
let generators = GENERATORS();
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_plus(commitments_points.clone());
let (mut alpha1, A) = alpha_rho(&mut *rng, &GENERATORS, &aL, &aR);
let (mut alpha1, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
@@ -87,8 +90,8 @@ impl PlusStruct {
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = GENERATORS.G[.. a.len()].to_vec();
let mut H_proof = GENERATORS.H[.. a.len()].to_vec();
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
@@ -105,15 +108,15 @@ impl PlusStruct {
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let mut L_i = LR_statements(&(&aL * yinvpow[aL.len()]), G_R, &bR, H_L, cL, *H);
let mut L_i = LR_statements(&(&aL * yinvpow[aL.len()]), G_R, &bR, H_L, cL, H());
L_i.push((dL, G));
let L_i = prove_multiexp(&L_i);
L.push(L_i);
L.push(*L_i);
let mut R_i = LR_statements(&(&aR * ypow[aR.len()]), G_L, &bL, H_R, cR, *H);
let mut R_i = LR_statements(&(&aR * ypow[aR.len()]), G_L, &bL, H_R, cR, H());
R_i.push((dR, G));
let R_i = prove_multiexp(&R_i);
R.push(R_i);
R.push(*R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
@@ -139,9 +142,9 @@ impl PlusStruct {
(r, G_proof[0]),
(s, H_proof[0]),
(d, G),
((r * y * b[0]) + (s * y * a[0]), *H),
((r * y * b[0]) + (s * y * a[0]), H()),
]);
let B = prove_multiexp(&[(r * y * s, *H), (eta, G)]);
let B = prove_multiexp(&[(r * y * s, H()), (eta, G)]);
let e = hash_cache(&mut cache, &[A1.compress().to_bytes(), B.compress().to_bytes()]);
let r1 = (a[0] * e) + r;
@@ -153,20 +156,12 @@ impl PlusStruct {
eta.zeroize();
alpha1.zeroize();
let res = PlusStruct {
A: *A,
A1: *A1,
B: *B,
r1: *r1,
s1: *s1,
d1: *d1,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
};
let res = Self { A: *A, A1: *A1, B: *B, r1: *r1, s1: *s1, d1: *d1, L, R };
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[allow(clippy::many_single_char_names)]
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
@@ -191,7 +186,7 @@ impl PlusStruct {
}
// Rebuild all challenges
let (mut cache, commitments) = hash_plus(commitments.iter().cloned());
let (mut cache, commitments) = hash_plus(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes()]);
let yinv = y.invert().unwrap();
let z = hash_to_scalar(&y.to_bytes());
@@ -215,8 +210,6 @@ impl PlusStruct {
let A1 = normalize(&self.A1);
let B = normalize(&self.B);
let mut commitments = commitments.iter().map(|c| c.mul_by_cofactor()).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(logMN + 5 + (2 * (MN + logMN)));
@@ -232,14 +225,14 @@ impl PlusStruct {
let esq = e * e;
let minus_esq = -esq;
let commitment_weight = minus_esq * yMNy;
for (i, commitment) in commitments.drain(..).enumerate() {
for (i, commitment) in commitments.iter().map(EdwardsPoint::mul_by_cofactor).enumerate() {
proof.push((commitment_weight * zpow[i], commitment));
}
// Invert B, instead of the Scalar, as the latter is only 2x as expensive yet enables reduction
// to a single addition under vartime for the first BP verified in the batch, which is expected
// to be much more significant
proof.push((Scalar::one(), -B));
proof.push((Scalar::ONE, -B));
proof.push((-e, A1));
proof.push((minus_esq, A));
proof.push((Scalar(self.d1), G));
@@ -248,7 +241,7 @@ impl PlusStruct {
let y_sum = weighted_powers(y, MN).sum();
proof.push((
Scalar(self.r1 * y.0 * self.s1) + (esq * ((yMNy * z * d_sum) + ((zsq - z) * y_sum))),
*H,
H(),
));
let w_cache = challenge_products(&w, &winv);
@@ -259,11 +252,12 @@ impl PlusStruct {
let minus_esq_z = -esq_z;
let mut minus_esq_y = minus_esq * yMN;
let generators = GENERATORS();
for i in 0 .. MN {
proof.push((e_r1_y * w_cache[i] + esq_z, GENERATORS.G[i]));
proof.push((e_r1_y * w_cache[i] + esq_z, generators.G[i]));
proof.push((
(e_s1 * w_cache[(!i) & (MN - 1)]) + minus_esq_z + (minus_esq_y * d[i]),
GENERATORS.H[i],
generators.H[i],
));
e_r1_y *= yinv;

View File

@@ -1,4 +1,5 @@
use core::ops::{Add, Sub, Mul, Index};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
@@ -12,9 +13,9 @@ pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
macro_rules! math_op {
($Op: ident, $op: ident, $f: expr) => {
impl $Op<Scalar> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
ScalarVector(self.0.iter().map(|a| $f((a, &b))).collect())
type Output = Self;
fn $op(self, b: Scalar) -> Self {
Self(self.0.iter().map(|a| $f((a, &b))).collect())
}
}
@@ -26,16 +27,16 @@ macro_rules! math_op {
}
impl $Op<ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: ScalarVector) -> ScalarVector {
type Output = Self;
fn $op(self, b: Self) -> Self {
debug_assert_eq!(self.len(), b.len());
ScalarVector(self.0.iter().zip(b.0.iter()).map($f).collect())
Self(self.0.iter().zip(b.0.iter()).map($f).collect())
}
}
impl $Op<&ScalarVector> for &ScalarVector {
impl $Op<Self> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: &ScalarVector) -> ScalarVector {
fn $op(self, b: Self) -> ScalarVector {
debug_assert_eq!(self.len(), b.len());
ScalarVector(self.0.iter().zip(b.0.iter()).map($f).collect())
}
@@ -47,28 +48,28 @@ math_op!(Sub, sub, |(a, b): (&Scalar, &Scalar)| *a - *b);
math_op!(Mul, mul, |(a, b): (&Scalar, &Scalar)| *a * *b);
impl ScalarVector {
pub(crate) fn new(len: usize) -> ScalarVector {
ScalarVector(vec![Scalar::zero(); len])
pub(crate) fn new(len: usize) -> Self {
Self(vec![Scalar::ZERO; len])
}
pub(crate) fn powers(x: Scalar, len: usize) -> ScalarVector {
pub(crate) fn powers(x: Scalar, len: usize) -> Self {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::one());
res.push(Scalar::ONE);
for i in 1 .. len {
res.push(res[i - 1] * x);
}
ScalarVector(res)
Self(res)
}
pub(crate) fn even_powers(x: Scalar, pow: usize) -> ScalarVector {
pub(crate) fn even_powers(x: Scalar, pow: usize) -> Self {
debug_assert!(pow != 0);
// Verify pow is a power of two
debug_assert_eq!(((pow - 1) & pow), 0);
let xsq = x * x;
let mut res = ScalarVector(Vec::with_capacity(pow / 2));
let mut res = Self(Vec::with_capacity(pow / 2));
res.0.push(xsq);
let mut prev = 2;
@@ -88,9 +89,9 @@ impl ScalarVector {
self.0.len()
}
pub(crate) fn split(self) -> (ScalarVector, ScalarVector) {
pub(crate) fn split(self) -> (Self, Self) {
let (l, r) = self.0.split_at(self.0.len() / 2);
(ScalarVector(l.to_vec()), ScalarVector(r.to_vec()))
(Self(l.to_vec()), Self(r.to_vec()))
}
}
@@ -118,7 +119,7 @@ impl Mul<&[EdwardsPoint]> for &ScalarVector {
type Output = EdwardsPoint;
fn mul(self, b: &[EdwardsPoint]) -> EdwardsPoint {
debug_assert_eq!(self.len(), b.len());
multiexp(&self.0.iter().cloned().zip(b.iter().cloned()).collect::<Vec<_>>())
multiexp(&self.0.iter().copied().zip(b.iter().copied()).collect::<Vec<_>>())
}
}
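A small sketch of the vector algebra these types provide (crate-internal; assumes the module's inner_product helper):

let v = ScalarVector::powers(Scalar::from(2u8), 4); // [1, 2, 4, 8]
let (l, r) = v.split(); // ([1, 2], [4, 8])
// (1 * 4) + (2 * 8) = 20
assert_eq!(inner_product(&l, &r), Scalar::from(20u8));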

View File

@@ -1,14 +1,14 @@
#![allow(non_snake_case)]
use core::ops::Deref;
use std::io::{self, Read, Write};
use lazy_static::lazy_static;
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use subtle::{ConstantTimeEq, Choice, CtOption};
use rand_core::{RngCore, CryptoRng};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
@@ -18,8 +18,8 @@ use curve25519_dalek::{
};
use crate::{
Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys, ringct::hash_to_point,
serialize::*,
INV_EIGHT, Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys,
ringct::hash_to_point, serialize::*,
};
#[cfg(feature = "multisig")]
@@ -29,28 +29,25 @@ pub use multisig::{ClsagDetails, ClsagAddendum, ClsagMultisig};
#[cfg(feature = "multisig")]
pub(crate) use multisig::add_key_image_share;
lazy_static! {
static ref INV_EIGHT: Scalar = Scalar::from(8u8).invert();
}
/// Errors returned when CLSAG signing fails.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum ClsagError {
#[error("internal error ({0})")]
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[error("invalid ring")]
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[error("invalid ring member (member {0}, ring size {1})")]
#[cfg_attr(feature = "std", error("invalid ring member (member {0}, ring size {1})"))]
InvalidRingMember(u8, u8),
#[error("invalid commitment")]
#[cfg_attr(feature = "std", error("invalid commitment"))]
InvalidCommitment,
#[error("invalid key image")]
#[cfg_attr(feature = "std", error("invalid key image"))]
InvalidImage,
#[error("invalid D")]
#[cfg_attr(feature = "std", error("invalid D"))]
InvalidD,
#[error("invalid s")]
#[cfg_attr(feature = "std", error("invalid s"))]
InvalidS,
#[error("invalid c1")]
#[cfg_attr(feature = "std", error("invalid c1"))]
InvalidC1,
}
@@ -64,7 +61,7 @@ pub struct ClsagInput {
}
impl ClsagInput {
pub fn new(commitment: Commitment, decoys: Decoys) -> Result<ClsagInput, ClsagError> {
pub fn new(commitment: Commitment, decoys: Decoys) -> Result<Self, ClsagError> {
let n = decoys.len();
if n > u8::MAX.into() {
Err(ClsagError::InternalError("max ring size in this library is u8 max"))?;
@@ -79,7 +76,7 @@ impl ClsagInput {
Err(ClsagError::InvalidCommitment)?;
}
Ok(ClsagInput { commitment, decoys })
Ok(Self { commitment, decoys })
}
}
@@ -103,7 +100,7 @@ fn core(
let n = ring.len();
let images_precomp = VartimeEdwardsPrecomputation::new([I, D]);
let D = D * *INV_EIGHT;
let D = D * INV_EIGHT();
// Generate the transcript
// Instead of generating multiple, a single transcript is created and then edited as needed
@@ -184,7 +181,7 @@ fn core(
let L = (&s[i] * &ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(P[i]);
// Shouldn't be an issue as all of the variables in this vartime statement are public
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul(&[c_p, c_c]);
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul([c_p, c_c]);
to_hash.truncate(((2 * n) + 3) * 32);
to_hash.extend(L.compress().to_bytes());
@@ -207,6 +204,7 @@ pub struct Clsag {
impl Clsag {
// Sign core is the extension of core as needed for signing, yet is shared between single signer
// and multisig, hence why it's still core
#[allow(clippy::many_single_char_names)]
pub(crate) fn sign_core<R: RngCore + CryptoRng>(
rng: &mut R,
I: &EdwardsPoint,
@@ -215,7 +213,7 @@ impl Clsag {
msg: &[u8; 32],
A: EdwardsPoint,
AH: EdwardsPoint,
) -> (Clsag, EdwardsPoint, Scalar, Scalar) {
) -> (Self, EdwardsPoint, Scalar, Scalar) {
let r: usize = input.decoys.i.into();
let pseudo_out = Commitment::new(mask, input.commitment.amount).calculate();
@@ -230,7 +228,7 @@ impl Clsag {
let ((D, p, c), c1) =
core(&input.decoys.ring, I, &pseudo_out, msg, &D, &s, Mode::Sign(r, A, AH));
(Clsag { D, s, c1 }, pseudo_out, p, c * z)
(Self { D, s, c1 }, pseudo_out, p, c * z)
}
/// Generate CLSAG signatures for the given inputs.
@@ -241,19 +239,20 @@ impl Clsag {
mut inputs: Vec<(Zeroizing<Scalar>, EdwardsPoint, ClsagInput)>,
sum_outputs: Scalar,
msg: [u8; 32],
) -> Vec<(Clsag, EdwardsPoint)> {
) -> Vec<(Self, EdwardsPoint)> {
let mut res = Vec::with_capacity(inputs.len());
let mut sum_pseudo_outs = Scalar::zero();
for i in 0 .. inputs.len() {
let mut mask = random_scalar(rng);
if i == (inputs.len() - 1) {
mask = sum_outputs - sum_pseudo_outs;
let mask = if i == (inputs.len() - 1) {
sum_outputs - sum_pseudo_outs
} else {
let mask = random_scalar(rng);
sum_pseudo_outs += mask;
}
mask
};
let mut nonce = Zeroizing::new(random_scalar(rng));
let (mut clsag, pseudo_out, p, c) = Clsag::sign_core(
let (mut clsag, pseudo_out, p, c) = Self::sign_core(
rng,
&inputs[i].1,
&inputs[i].2,
@@ -320,7 +319,7 @@ impl Clsag {
write_point(&self.D, w)
}
pub fn read<R: Read>(decoys: usize, r: &mut R) -> io::Result<Clsag> {
Ok(Clsag { s: read_raw_vec(read_scalar, decoys, r)?, c1: read_scalar(r)?, D: read_point(r)? })
pub fn read<R: Read>(decoys: usize, r: &mut R) -> io::Result<Self> {
Ok(Self { s: read_raw_vec(read_scalar, decoys, r)?, c1: read_scalar(r)?, D: read_point(r)? })
}
}
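The last-input branch above is what balances the sum: forcing the masks to sum to sum_outputs makes the pseudo-out commitments sum to the output commitments. A sketch with illustrative values:

use curve25519_dalek::scalar::Scalar;

// Two inputs worth 40 and 60; the outputs' masks sum to `sum_outputs`
let sum_outputs = Scalar::from(100u8);
let first_mask = Scalar::from(33u8); // stand-in for random_scalar(rng)
let last_mask = sum_outputs - first_mask; // forced, per the last-input branch
let pseudo_sum =
  Commitment::new(first_mask, 40).calculate() + Commitment::new(last_mask, 60).calculate();
assert_eq!(pseudo_sum, Commitment::new(sum_outputs, 100).calculate());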

View File

@@ -1,8 +1,9 @@
use core::{ops::Deref, fmt::Debug};
use std::{
use std_shims::{
sync::Arc,
io::{self, Read, Write},
sync::{Arc, RwLock},
};
use std::sync::RwLock;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
@@ -23,7 +24,7 @@ use dleq::DLEqProof;
use frost::{
dkg::lagrange,
curve::Ed25519,
FrostError, ThresholdKeys, ThresholdView,
Participant, FrostError, ThresholdKeys, ThresholdView,
algorithm::{WriteAddendum, Algorithm},
};
@@ -50,7 +51,7 @@ impl ClsagInput {
// if in use
transcript.append_message(b"member", [u8::try_from(i).expect("ring size exceeded 255")]);
transcript.append_message(b"key", pair[0].compress().to_bytes());
transcript.append_message(b"commitment", pair[1].compress().to_bytes())
transcript.append_message(b"commitment", pair[1].compress().to_bytes());
}
// Doesn't include the commitment's parts as the above ring + index includes the commitment
@@ -67,8 +68,8 @@ pub struct ClsagDetails {
}
impl ClsagDetails {
pub fn new(input: ClsagInput, mask: Scalar) -> ClsagDetails {
ClsagDetails { input, mask }
pub fn new(input: ClsagInput, mask: Scalar) -> Self {
Self { input, mask }
}
}
@@ -118,8 +119,8 @@ impl ClsagMultisig {
transcript: RecommendedTranscript,
output_key: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
) -> ClsagMultisig {
ClsagMultisig {
) -> Self {
Self {
transcript,
H: hash_to_point(output_key),
@@ -145,8 +146,8 @@ pub(crate) fn add_key_image_share(
image: &mut EdwardsPoint,
generator: EdwardsPoint,
offset: Scalar,
included: &[u16],
participant: u16,
included: &[Participant],
participant: Participant,
share: EdwardsPoint,
) {
if image.is_identity() {
@@ -202,7 +203,7 @@ impl Algorithm<Ed25519> for ClsagMultisig {
fn process_addendum(
&mut self,
view: &ThresholdView<Ed25519>,
l: u16,
l: Participant,
addendum: ClsagAddendum,
) -> Result<(), FrostError> {
if self.image.is_identity() {
@@ -211,7 +212,7 @@ impl Algorithm<Ed25519> for ClsagMultisig {
self.transcript.append_message(b"mask", self.mask().to_bytes());
}
self.transcript.append_message(b"participant", l.to_be_bytes());
self.transcript.append_message(b"participant", l.to_bytes());
addendum
.dleq
@@ -304,7 +305,7 @@ impl Algorithm<Ed25519> for ClsagMultisig {
Ok(vec![
(share, dfg::EdwardsPoint::generator()),
(dfg::Scalar(interim.p), verification_share),
(-dfg::Scalar::one(), nonces[0][0]),
(-dfg::Scalar::ONE, nonces[0][0]),
])
}
}

View File

@@ -0,0 +1,72 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::scalar::Scalar;
#[cfg(feature = "experimental")]
use curve25519_dalek::edwards::EdwardsPoint;
use crate::serialize::*;
#[cfg(feature = "experimental")]
use crate::{hash_to_scalar, ringct::hash_to_point};
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Mlsag {
pub ss: Vec<[Scalar; 2]>,
pub cc: Scalar,
}
impl Mlsag {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for ss in &self.ss {
write_raw_vec(write_scalar, ss, w)?;
}
write_scalar(&self.cc, w)
}
pub fn read<R: Read>(mixins: usize, r: &mut R) -> io::Result<Self> {
Ok(Self {
ss: (0 .. mixins).map(|_| read_array(read_scalar, r)).collect::<Result<_, _>>()?,
cc: read_scalar(r)?,
})
}
#[cfg(feature = "experimental")]
#[must_use]
pub fn verify(
&self,
msg: &[u8; 32],
ring: &[[EdwardsPoint; 2]],
key_image: &EdwardsPoint,
) -> bool {
if ring.is_empty() {
return false;
}
let mut buf = Vec::with_capacity(6 * 32);
let mut ci = self.cc;
for (i, ring_member) in ring.iter().enumerate() {
buf.extend_from_slice(msg);
#[allow(non_snake_case)]
let L =
|r| EdwardsPoint::vartime_double_scalar_mul_basepoint(&ci, &ring_member[r], &self.ss[i][r]);
buf.extend_from_slice(ring_member[0].compress().as_bytes());
buf.extend_from_slice(L(0).compress().as_bytes());
#[allow(non_snake_case)]
let R = (self.ss[i][0] * hash_to_point(ring_member[0])) + (ci * key_image);
buf.extend_from_slice(R.compress().as_bytes());
buf.extend_from_slice(ring_member[1].compress().as_bytes());
buf.extend_from_slice(L(1).compress().as_bytes());
ci = hash_to_scalar(&buf);
buf.clear();
}
ci == self.cc
}
}
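A serialization round-trip sketch for a ring of 11 members (illustrative values, not part of the commit):

let mlsag = Mlsag { ss: vec![[Scalar::zero(); 2]; 11], cc: Scalar::zero() };
let mut buf = Vec::new();
mlsag.write(&mut buf).unwrap();
// read must consume exactly what write produced, given the ring size
assert_eq!(Mlsag::read(11, &mut buf.as_slice()).unwrap(), mlsag);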

View File

@@ -1,22 +1,29 @@
use core::ops::Deref;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroizing;
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub(crate) mod hash_to_point;
pub use hash_to_point::{raw_hash_to_point, hash_to_point};
/// MLSAG struct, along with verifying functionality.
pub mod mlsag;
/// CLSAG struct, along with signing and verifying functionality.
pub mod clsag;
/// BorromeanRange struct, along with verifying functionality.
pub mod borromean;
/// Bulletproofs(+) structs, along with proving and verifying functionality.
pub mod bulletproofs;
use crate::{
Protocol,
serialize::*,
ringct::{clsag::Clsag, bulletproofs::Bulletproofs},
ringct::{mlsag::Mlsag, clsag::Clsag, borromean::BorromeanRange, bulletproofs::Bulletproofs},
};
/// Generate a key image for a given key. Defined as `x * hash_to_point(xG)`.
@@ -24,10 +31,90 @@ pub fn generate_key_image(secret: &Zeroizing<Scalar>) -> EdwardsPoint {
hash_to_point(&ED25519_BASEPOINT_TABLE * secret.deref()) * secret.deref()
}
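Key images being a deterministic function of the key, independent of any ring, is what enables double-spend detection. A quick sketch (illustrative RNG):

use zeroize::Zeroizing;

let secret = Zeroizing::new(random_scalar(&mut rand_core::OsRng));
// The same key always yields the same image
assert_eq!(generate_key_image(&secret), generate_key_image(&secret));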
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EncryptedAmount {
Original { mask: [u8; 32], amount: [u8; 32] },
Compact { amount: [u8; 8] },
}
impl EncryptedAmount {
pub fn read<R: Read>(compact: bool, r: &mut R) -> io::Result<Self> {
Ok(if compact {
Self::Compact { amount: read_bytes(r)? }
} else {
Self::Original { mask: read_bytes(r)?, amount: read_bytes(r)? }
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Self::Original { mask, amount } => {
w.write_all(mask)?;
w.write_all(amount)
}
Self::Compact { amount } => w.write_all(amount),
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum RctType {
/// No RCT proofs.
Null,
/// One MLSAG for a single input and a Borromean range proof (RCTTypeFull).
MlsagAggregate,
/// One MLSAG for each input and a Borromean range proof (RCTTypeSimple).
MlsagIndividual,
/// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
Bulletproofs,
/// One MLSAG for each input and a Bulletproof, yet starting to use EncryptedAmount::Compact
/// (RCTTypeBulletproof2).
BulletproofsCompactAmount,
/// One CLSAG for each input and a Bulletproof (RCTTypeCLSAG).
Clsag,
/// One CLSAG for each input and a Bulletproof+ (RCTTypeBulletproofPlus).
BulletproofsPlus,
}
impl RctType {
pub fn to_byte(self) -> u8 {
match self {
Self::Null => 0,
Self::MlsagAggregate => 1,
Self::MlsagIndividual => 2,
Self::Bulletproofs => 3,
Self::BulletproofsCompactAmount => 4,
Self::Clsag => 5,
Self::BulletproofsPlus => 6,
}
}
pub fn from_byte(byte: u8) -> Option<Self> {
Some(match byte {
0 => Self::Null,
1 => Self::MlsagAggregate,
2 => Self::MlsagIndividual,
3 => Self::Bulletproofs,
4 => Self::BulletproofsCompactAmount,
5 => Self::Clsag,
6 => Self::BulletproofsPlus,
_ => None?,
})
}
pub fn compact_encrypted_amounts(&self) -> bool {
match self {
Self::Null | Self::MlsagAggregate | Self::MlsagIndividual | Self::Bulletproofs => false,
Self::BulletproofsCompactAmount | Self::Clsag | Self::BulletproofsPlus => true,
}
}
}
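The byte mapping round-trips, and unknown bytes are rejected rather than mapped; a sketch:

assert_eq!(RctType::Clsag.to_byte(), 5);
assert_eq!(RctType::from_byte(5), Some(RctType::Clsag));
assert_eq!(RctType::from_byte(7), None);
// Compact encrypted amounts start with RCTTypeBulletproof2
assert!(!RctType::Bulletproofs.compact_encrypted_amounts());
assert!(RctType::BulletproofsCompactAmount.compact_encrypted_amounts());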
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctBase {
pub fee: u64,
pub ecdh_info: Vec<[u8; 8]>,
pub pseudo_outs: Vec<EdwardsPoint>,
pub encrypted_amounts: Vec<EncryptedAmount>,
pub commitments: Vec<EdwardsPoint>,
}
@@ -36,30 +123,63 @@ impl RctBase {
1 + 8 + (outputs * (8 + 32))
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: u8) -> io::Result<()> {
w.write_all(&[rct_type])?;
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
w.write_all(&[rct_type.to_byte()])?;
match rct_type {
0 => Ok(()),
5 | 6 => {
RctType::Null => Ok(()),
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
write_varint(&self.fee, w)?;
for ecdh in &self.ecdh_info {
w.write_all(ecdh)?;
if rct_type == RctType::MlsagIndividual {
write_raw_vec(write_point, &self.pseudo_outs, w)?;
}
for encrypted_amount in &self.encrypted_amounts {
encrypted_amount.write(w)?;
}
write_raw_vec(write_point, &self.commitments, w)
}
_ => panic!("Serializing unknown RctType's Base"),
}
}
pub fn read<R: Read>(outputs: usize, r: &mut R) -> io::Result<(RctBase, u8)> {
let rct_type = read_byte(r)?;
pub fn read<R: Read>(inputs: usize, outputs: usize, r: &mut R) -> io::Result<(Self, RctType)> {
let rct_type = RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid RCT type"))?;
match rct_type {
RctType::Null | RctType::MlsagAggregate | RctType::MlsagIndividual => {}
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
if outputs == 0 {
// Because the Bulletproofs(+) layout must be canonical, there must be 1 Bulletproof if
// Bulletproofs are in use
// If there are Bulletproofs, there must be a matching amount of outputs, implicitly
// banning 0 outputs
// Since HF 12 (CLSAG being 13), a 2-output minimum has also been enforced
Err(io::Error::new(io::ErrorKind::Other, "RCT with Bulletproofs(+) had 0 outputs"))?;
}
}
}
Ok((
if rct_type == 0 {
RctBase { fee: 0, ecdh_info: vec![], commitments: vec![] }
if rct_type == RctType::Null {
Self { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] }
} else {
RctBase {
Self {
fee: read_varint(r)?,
ecdh_info: (0 .. outputs).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
pseudo_outs: if rct_type == RctType::MlsagIndividual {
read_raw_vec(read_point, inputs, r)?
} else {
vec![]
},
encrypted_amounts: (0 .. outputs)
.map(|_| EncryptedAmount::read(rct_type.compact_encrypted_amounts(), r))
.collect::<Result<_, _>>()?,
commitments: read_raw_vec(read_point, outputs, r)?,
}
},
@@ -71,67 +191,110 @@ impl RctBase {
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RctPrunable {
Null,
Clsag { bulletproofs: Vec<Bulletproofs>, clsags: Vec<Clsag>, pseudo_outs: Vec<EdwardsPoint> },
MlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsags: Vec<Mlsag>,
},
MlsagBulletproofs {
bulletproofs: Bulletproofs,
mlsags: Vec<Mlsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
Clsag {
bulletproofs: Bulletproofs,
clsags: Vec<Clsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
}
impl RctPrunable {
/// RCT Type byte for a given RctPrunable struct.
pub fn rct_type(&self) -> u8 {
match self {
RctPrunable::Null => 0,
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs[0], Bulletproofs::Original { .. }) {
5
} else {
6
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
1 + Bulletproofs::fee_weight(protocol.bp_plus(), outputs) +
(inputs * (Clsag::fee_weight(protocol.ring_len()) + 32))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
match self {
RctPrunable::Null => Ok(()),
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs, .. } => {
write_vec(Bulletproofs::write, bulletproofs, w)?;
Self::Null => Ok(()),
Self::MlsagBorromean { borromean, mlsags } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
write_raw_vec(Mlsag::write, mlsags, w)
}
Self::MlsagBulletproofs { bulletproofs, mlsags, pseudo_outs } => {
if rct_type == RctType::Bulletproofs {
w.write_all(&1u32.to_le_bytes())?;
} else {
w.write_all(&[1])?;
}
bulletproofs.write(w)?;
write_raw_vec(Mlsag::write, mlsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
Self::Clsag { bulletproofs, clsags, pseudo_outs } => {
w.write_all(&[1])?;
bulletproofs.write(w)?;
write_raw_vec(Clsag::write, clsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
pub fn serialize(&self, rct_type: RctType) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
self.write(&mut serialized, rct_type).unwrap();
serialized
}
pub fn read<R: Read>(rct_type: u8, decoys: &[usize], r: &mut R) -> io::Result<RctPrunable> {
pub fn read<R: Read>(
rct_type: RctType,
decoys: &[usize],
outputs: usize,
r: &mut R,
) -> io::Result<Self> {
Ok(match rct_type {
0 => RctPrunable::Null,
5 | 6 => RctPrunable::Clsag {
bulletproofs: read_vec(
if rct_type == 5 { Bulletproofs::read } else { Bulletproofs::read_plus },
r,
)?,
RctType::Null => Self::Null,
RctType::MlsagAggregate | RctType::MlsagIndividual => Self::MlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsags: decoys.iter().map(|d| Mlsag::read(*d, r)).collect::<Result<_, _>>()?,
},
RctType::Bulletproofs | RctType::BulletproofsCompactAmount => Self::MlsagBulletproofs {
bulletproofs: {
if (if rct_type == RctType::Bulletproofs {
u64::from(read_u32(r)?)
} else {
read_varint(r)?
}) != 1
{
Err(io::Error::new(io::ErrorKind::Other, "n bulletproofs instead of one"))?;
}
Bulletproofs::read(r)?
},
mlsags: decoys.iter().map(|d| Mlsag::read(*d, r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
},
RctType::Clsag | RctType::BulletproofsPlus => Self::Clsag {
bulletproofs: {
if read_varint(r)? != 1 {
Err(io::Error::new(io::ErrorKind::Other, "n bulletproofs instead of one"))?;
}
(if rct_type == RctType::Clsag { Bulletproofs::read } else { Bulletproofs::read_plus })(
r,
)?
},
clsags: (0 .. decoys.len()).map(|o| Clsag::read(decoys[o], r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
},
_ => Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown RCT type"))?,
})
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
RctPrunable::Null => panic!("Serializing RctPrunable::Null for a signature"),
RctPrunable::Clsag { bulletproofs, .. } => {
bulletproofs.iter().try_for_each(|bp| bp.signature_write(w))
}
Self::Null => panic!("Serializing RctPrunable::Null for a signature"),
Self::MlsagBorromean { borromean, .. } => borromean.iter().try_for_each(|rs| rs.write(w)),
Self::MlsagBulletproofs { bulletproofs, .. } => bulletproofs.signature_write(w),
Self::Clsag { bulletproofs, .. } => bulletproofs.signature_write(w),
}
}
}
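For reference, the Bulletproof count handled above is encoded per type: RCTTypeBulletproof uses a little-endian u32 while later types use a varint, and both must decode to exactly 1. A sketch of the two byte forms (assuming the crate's write_varint):

// RCTTypeBulletproof: count as a little-endian u32
assert_eq!(1u32.to_le_bytes(), [1, 0, 0, 0]);
// Later types: count as a varint, which for 1 is the single byte 0x01
let mut buf = Vec::new();
write_varint(&1u64, &mut buf).unwrap();
assert_eq!(buf, [1]);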
@@ -143,13 +306,68 @@ pub struct RctSignatures {
}
impl RctSignatures {
/// RctType for a given RctSignatures struct.
pub fn rct_type(&self) -> RctType {
match &self.prunable {
RctPrunable::Null => RctType::Null,
RctPrunable::MlsagBorromean { .. } => {
/*
This type of RctPrunable may have no outputs, yet pseudo_outs are per input
This will only be a valid RctSignatures if it's for a TX with inputs
That makes this valid for any valid RctSignatures
While it will be invalid for any invalid RctSignatures, potentially letting an invalid
MlsagAggregate be interpreted as a valid MlsagIndividual (or vice versa), they have
incompatible deserializations
This means it's impossible to receive a MlsagAggregate over the wire and interpret it
as a MlsagIndividual (or vice versa)
That only makes manual manipulation unsafe, which will always be true since these fields
are all pub
TODO: Consider making them private with read-only accessors?
*/
if self.base.pseudo_outs.is_empty() {
RctType::MlsagAggregate
} else {
RctType::MlsagIndividual
}
}
// RctBase ensures there's at least one output, making the following
// inferences guaranteed (and the `expect`s unreachable) on any valid RctSignatures
RctPrunable::MlsagBulletproofs { .. } => {
if matches!(
self
.base
.encrypted_amounts
.get(0)
.expect("MLSAG with Bulletproofs didn't have any outputs"),
EncryptedAmount::Original { .. }
) {
RctType::Bulletproofs
} else {
RctType::BulletproofsCompactAmount
}
}
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs, Bulletproofs::Original { .. }) {
RctType::Clsag
} else {
RctType::BulletproofsPlus
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
RctBase::fee_weight(outputs) + RctPrunable::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.base.write(w, self.prunable.rct_type())?;
self.prunable.write(w)
let rct_type = self.rct_type();
self.base.write(w, rct_type)?;
self.prunable.write(w, rct_type)
}
pub fn serialize(&self) -> Vec<u8> {
@@ -158,8 +376,8 @@ impl RctSignatures {
serialized
}
pub fn read<R: Read>(decoys: Vec<usize>, outputs: usize, r: &mut R) -> io::Result<RctSignatures> {
let base = RctBase::read(outputs, r)?;
Ok(RctSignatures { base: base.0, prunable: RctPrunable::read(base.1, &decoys, r)? })
pub fn read<R: Read>(decoys: Vec<usize>, outputs: usize, r: &mut R) -> io::Result<Self> {
let base = RctBase::read(decoys.len(), outputs, r)?;
Ok(Self { base: base.0, prunable: RctPrunable::read(base.1, &decoys, outputs, r)? })
}
}

View File

@@ -1,517 +0,0 @@
use std::fmt::Debug;
use thiserror::Error;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use digest_auth::AuthContext;
use reqwest::{Client, RequestBuilder};
use crate::{
Protocol,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::Fee,
};
#[derive(Deserialize, Debug)]
pub struct EmptyResponse {}
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
block_height: Option<usize>,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("internal error ({0})")]
InternalError(&'static str),
#[error("connection error")]
ConnectionError,
#[error("invalid node")]
InvalidNode,
#[error("transactions not found")]
TransactionsNotFound(Vec<[u8; 32]>),
#[error("invalid point ({0})")]
InvalidPoint(String),
#[error("pruned transaction")]
PrunedTransaction,
#[error("invalid transaction ({0:?})")]
InvalidTransaction([u8; 32]),
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode)
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
CompressedEdwardsY(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.decompress()
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
#[derive(Clone, Debug)]
pub struct Rpc {
client: Client,
userpass: Option<(String, String)>,
url: String,
}
impl Rpc {
/// Create a new RPC connection.
/// A daemon requiring authentication can be used by including the username and password in the
/// URL.
pub fn new(mut url: String) -> Result<Rpc, RpcError> {
// Parse out the username and password
let userpass = if url.contains('@') {
let url_clone = url.clone();
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::InvalidNode)?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
Some((split_userpass[0].to_string(), split_userpass[1].to_string()))
} else {
None
};
Ok(Rpc { client: Client::new(), userpass, url })
}
/// Perform an RPC call to the specified method with the provided parameters.
/// This is NOT a JSON-RPC call; those use a method of "json_rpc" and are available via
/// `json_rpc_call`.
pub async fn rpc_call<Params: Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
let mut builder = self.client.post(self.url.clone() + "/" + method);
if let Some(params) = params.as_ref() {
builder = builder.json(params);
}
self.call_tail(method, builder).await
}
/// Perform a JSON-RPC call to the specified method with the provided parameters.
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified method with the provided parameters.
pub async fn bin_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Vec<u8>,
) -> Result<Response, RpcError> {
let builder = self.client.post(self.url.clone() + "/" + method).body(params.clone());
self.call_tail(method, builder.header("Content-Type", "application/octet-stream")).await
}
async fn call_tail<Response: DeserializeOwned + Debug>(
&self,
method: &str,
mut builder: RequestBuilder,
) -> Result<Response, RpcError> {
if let Some((user, pass)) = &self.userpass {
let req = self.client.post(&self.url).send().await.map_err(|_| RpcError::InvalidNode)?;
// Only provide authentication if this daemon actually expects it
if let Some(header) = req.headers().get("www-authenticate") {
builder = builder.header(
"Authorization",
digest_auth::parse(header.to_str().map_err(|_| RpcError::InvalidNode)?)
.map_err(|_| RpcError::InvalidNode)?
.respond(&AuthContext::new_post::<_, _, _, &[u8]>(
user,
pass,
"/".to_string() + method,
None,
))
.map_err(|_| RpcError::InvalidNode)?
.to_header_string(),
);
}
}
let res = builder.send().await.map_err(|_| RpcError::ConnectionError)?;
Ok(if !method.ends_with(".bin") {
serde_json::from_str(&res.text().await.map_err(|_| RpcError::ConnectionError)?)
.map_err(|_| RpcError::InternalError("Failed to parse JSON response"))?
} else {
monero_epee_bin_serde::from_bytes(&res.bytes().await.map_err(|_| RpcError::ConnectionError)?)
.map_err(|_| RpcError::InternalError("Failed to parse binary response"))?
})
}
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
version => Protocol::Unsupported(version),
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes.iter().map(hex::encode).collect::<Vec<_>>()
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
txs
.txs
.iter()
.map(|res| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.get(0) {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
pub async fn get_transaction_block_number(&self, tx: &[u8]) -> Result<Option<usize>, RpcError> {
let txs: TransactionsResponse =
self.rpc_call("get_transactions", Some(json!({ "txs_hashes": [hex::encode(tx)] }))).await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
Ok(txs.txs[0].block_height)
}
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
rpc_hex(&header.block_header.hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref()).map_err(|_| RpcError::InvalidNode)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
self.get_block(self.get_block_hash(number).await?).await
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
status: String,
untrusted: bool,
credits: usize,
top_hash: String,
}
let indexes: OIndexes = self
.bin_call(
"get_o_indexes.bin",
monero_epee_bin_serde::to_bytes(&Request { txid: hash }).unwrap(),
)
.await?;
Ok(indexes.o_indexes)
}
/// Get the output distribution, from the specified start height to the specified end height
/// (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if they're
/// unlocked.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
txid: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
let txs = self
.get_transactions(
&outs
.outs
.iter()
.map(|out| rpc_hex(&out.txid)?.try_into().map_err(|_| RpcError::InvalidNode))
.collect::<Result<Vec<_>, _>>()?,
)
.await?;
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.outs
.iter()
.enumerate()
.map(|(i, out)| {
Ok(Some([rpc_point(&out.key)?, rpc_point(&out.mask)?]).filter(|_| {
match txs[i].prefix.timelock {
Timelock::Block(t_height) => t_height <= height,
_ => false,
}
}))
})
.collect()
}
/// Get the currently estimated fee from the node. This may be manipulated to unsafe levels and
/// MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self) -> Result<Fee, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct FeeResponse {
fee: u64,
quantization_mask: u64,
}
let res: FeeResponse = self.json_rpc_call("get_fee_estimate", None).await?;
Ok(Fee { per_weight: res.fee, mask: res.quantization_mask })
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let mut buf = Vec::with_capacity(2048);
tx.write(&mut buf).unwrap();
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(&buf) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
pub async fn generate_blocks(&self, address: &str, block_count: usize) -> Result<(), RpcError> {
self
.rpc_call::<_, EmptyResponse>(
"json_rpc",
Some(json!({
"method": "generateblocks",
"params": {
"wallet_address": address,
"amount_of_blocks": block_count
},
})),
)
.await?;
Ok(())
}
}


@@ -0,0 +1,91 @@
use async_trait::async_trait;
use digest_auth::AuthContext;
use reqwest::Client;
use crate::rpc::{RpcError, RpcConnection, Rpc};
#[derive(Clone, Debug)]
pub struct HttpRpc {
client: Client,
userpass: Option<(String, String)>,
url: String,
}
impl HttpRpc {
/// Create a new HTTP(S) RPC connection.
///
/// A daemon requiring authentication can be used by including the username and password in the
/// URL.
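/// For example (an illustrative, hypothetical URL): `http://user:pass@127.0.0.1:18081`.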
pub fn new(mut url: String) -> Result<Rpc<Self>, RpcError> {
// Parse out the username and password
let userpass = if url.contains('@') {
let url_clone = url;
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::InvalidNode)?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
Some((split_userpass[0].to_string(), split_userpass[1].to_string()))
} else {
None
};
Ok(Rpc(Self { client: Client::new(), userpass, url }))
}
}
#[async_trait]
impl RpcConnection for HttpRpc {
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
let mut builder = self.client.post(self.url.clone() + "/" + route).body(body);
if let Some((user, pass)) = &self.userpass {
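// Send an unauthenticated request first, solely to receive the daemon's digest-auth challenge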
let req = self.client.post(&self.url).send().await.map_err(|_| RpcError::InvalidNode)?;
// Only provide authentication if this daemon actually expects it
if let Some(header) = req.headers().get("www-authenticate") {
builder = builder.header(
"Authorization",
digest_auth::parse(header.to_str().map_err(|_| RpcError::InvalidNode)?)
.map_err(|_| RpcError::InvalidNode)?
.respond(&AuthContext::new_post::<_, _, _, &[u8]>(
user,
pass,
"/".to_string() + route,
None,
))
.map_err(|_| RpcError::InvalidNode)?
.to_header_string(),
);
}
}
Ok(
builder
.send()
.await
.map_err(|_| RpcError::ConnectionError)?
.bytes()
.await
.map_err(|_| RpcError::ConnectionError)?
.slice(..)
.to_vec(),
)
}
}

coins/monero/src/rpc/mod.rs

@@ -0,0 +1,617 @@
use core::fmt::Debug;
#[cfg(not(feature = "std"))]
use alloc::boxed::Box;
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use async_trait::async_trait;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use crate::{
Protocol,
serialize::*,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::Fee,
};
#[cfg(feature = "http_rpc")]
mod http;
#[cfg(feature = "http_rpc")]
pub use http::*;
#[derive(Deserialize, Debug)]
pub struct EmptyResponse;
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum RpcError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("connection error"))]
ConnectionError,
#[cfg_attr(feature = "std", error("invalid node"))]
InvalidNode,
#[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))]
UnsupportedProtocol(usize),
#[cfg_attr(feature = "std", error("transactions not found"))]
TransactionsNotFound(Vec<[u8; 32]>),
#[cfg_attr(feature = "std", error("invalid point ({0})"))]
InvalidPoint(String),
#[cfg_attr(feature = "std", error("pruned transaction"))]
PrunedTransaction,
#[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))]
InvalidTransaction([u8; 32]),
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode)
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
CompressedEdwardsY(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.decompress()
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
// Read an EPEE VarInt, distinct from the VarInts used throughout the rest of the protocol
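// The low 2 bits of the first byte encode the total length (0 => 1 byte, 1 => 2, 2 => 4,
// 3 => 8). The first byte's remaining 6 bits are the value's lowest bits, with each following
// byte contributing the next 8 bits in little-endian order.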
fn read_epee_vi<R: io::Read>(reader: &mut R) -> io::Result<u64> {
let vi_start = read_byte(reader)?;
let len = match vi_start & 0b11 {
0 => 1,
1 => 2,
2 => 4,
3 => 8,
_ => unreachable!(),
};
let mut vi = u64::from(vi_start >> 2);
for i in 1 .. len {
vi |= u64::from(read_byte(reader)?) << (((i - 1) * 8) + 6);
}
Ok(vi)
}
#[async_trait]
pub trait RpcConnection: Send + Sync + Clone + Debug {
/// Perform a POST request to the specified route with the specified body.
///
/// The implementor is left to handle anything such as authentication.
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError>;
}
// TODO: Make these provided methods on RpcConnection?
#[derive(Clone, Debug)]
pub struct Rpc<R: RpcConnection>(R);
impl<R: RpcConnection> Rpc<R> {
/// Perform a RPC call to the specified route with the provided parameters.
///
/// This is NOT a JSON-RPC call. JSON-RPC calls use the "json_rpc" route and are available via
/// `json_rpc_call`.
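/// For example, `get_height` below is implemented as
/// `self.rpc_call::<Option<()>, HeightResponse>("get_height", None)`.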
pub async fn rpc_call<Params: Send + Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
route: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
serde_json::from_str(
std_shims::str::from_utf8(
&self
.0
.post(
route,
if let Some(params) = params {
serde_json::to_string(&params).unwrap().into_bytes()
} else {
vec![]
},
)
.await?,
)
.map_err(|_| RpcError::InvalidNode)?,
)
.map_err(|_| RpcError::InvalidNode)
}
/// Perform a JSON-RPC call to the specified method with the provided parameters.
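/// For example, `get_protocol` below is implemented via
/// `self.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)`.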
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified route with the provided parameters.
pub async fn bin_call(&self, route: &str, params: Vec<u8>) -> Result<Vec<u8>, RpcError> {
self.0.post(route, params).await
}
/// Get the active blockchain protocol version.
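/// This maps the hard fork (major) version reported by the node to a supported `Protocol`:
/// 13/14 => v14, 15/16 => v16, with anything else being `UnsupportedProtocol`.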
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
protocol => Err(RpcError::UnsupportedProtocol(protocol))?,
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let mut hashes_hex = hashes.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = Vec::with_capacity(hashes.len());
while !hashes_hex.is_empty() {
// Monero errors if more than 100 transactions are requested, unless using a non-restricted RPC
const TXS_PER_REQUEST: usize = 100;
let this_count = TXS_PER_REQUEST.min(hashes_hex.len());
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. this_count).collect::<Vec<_>>(),
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
all_txs.extend(txs.txs);
}
all_txs
.iter()
.enumerate()
.map(|(i, res)| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.get(0) {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
// This does run a few keccak256 hashes, which is pointless if the node is trusted
// In exchange, this provides resilience against invalid/malicious nodes
if tx.hash() != hashes[i] {
Err(RpcError::InvalidNode)?;
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
/// Get the hash of a block from the node by the block's number.
/// This function does not verify the returned block hash is actually for the number in question.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
rpc_hex(&header.block_header.hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
/// Get a block from the node by its hash.
/// This function does not verify the returned block actually has the hash in question.
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
let block =
Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref()).map_err(|_| RpcError::InvalidNode)?;
if block.hash() != hash {
Err(RpcError::InvalidNode)?;
}
Ok(block)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
match self.get_block(self.get_block_hash(number).await?).await {
Ok(block) => {
// Make sure this is actually the block for this number
match block.miner_tx.prefix.inputs.get(0) {
Some(Input::Gen(actual)) => {
if usize::try_from(*actual).unwrap() == number {
Ok(block)
} else {
Err(RpcError::InvalidNode)
}
}
Some(Input::ToKey { .. }) | None => Err(RpcError::InvalidNode),
}
}
e => e,
}
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
/*
TODO: Use these when a suitable epee serde lib exists
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
}
*/
// Given the immaturity of Rust epee libraries, this is a homegrown implementation, only
// validated to work against this specific function
// Header for EPEE, an 8-byte magic and a version
const EPEE_HEADER: &[u8] = b"\x01\x11\x01\x01\x01\x01\x02\x01\x01";
let mut request = EPEE_HEADER.to_vec();
// Number of fields (shifted over 2 bits as the 2 LSBs are reserved for metadata)
request.push(1 << 2);
// Length of field name
request.push(4);
// Field name
request.extend(b"txid");
// Type of field
request.push(10);
// Length of string, since this byte array is technically a string
request.push(32 << 2);
// The "string"
request.extend(hash);
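// The request is accordingly: EPEE_HEADER || (1 << 2) || 4 || "txid" || 10 || (32 << 2) || hash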
let indexes_buf = self.bin_call("get_o_indexes.bin", request).await?;
let mut indexes: &[u8] = indexes_buf.as_ref();
(|| {
if read_bytes::<_, { EPEE_HEADER.len() }>(&mut indexes)? != EPEE_HEADER {
Err(io::Error::new(io::ErrorKind::Other, "invalid header"))?;
}
let read_object = |reader: &mut &[u8]| {
let fields = read_byte(reader)? >> 2;
for _ in 0 .. fields {
let name_len = read_byte(reader)?;
let name = read_raw_vec(read_byte, name_len.into(), reader)?;
let type_with_array_flag = read_byte(reader)?;
let kind = type_with_array_flag & (!0x80);
let iters = if type_with_array_flag != kind { read_epee_vi(reader)? } else { 1 };
if (&name == b"o_indexes") && (kind != 5) {
Err(io::Error::new(io::ErrorKind::Other, "o_indexes weren't u64s"))?;
}
let f = match kind {
// i64
1 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// i32
2 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// i16
3 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// i8
4 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// u64
5 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// u32
6 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// u16
7 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// u8
8 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// double
9 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// string, or any collection of bytes
10 => |reader: &mut &[u8]| {
let len = read_epee_vi(reader)?;
read_raw_vec(
read_byte,
len
.try_into()
.map_err(|_| io::Error::new(io::ErrorKind::Other, "u64 length exceeded usize"))?,
reader,
)
},
// bool
11 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// object, errors here as it shouldn't be used on this call
12 => |_: &mut &[u8]| {
Err(io::Error::new(
io::ErrorKind::Other,
"node used object in reply to get_o_indexes",
))
},
// array, so far unused
13 => |_: &mut &[u8]| {
Err(io::Error::new(io::ErrorKind::Other, "node used the unused array type"))
},
_ => {
|_: &mut &[u8]| Err(io::Error::new(io::ErrorKind::Other, "node used an invalid type"))
}
};
let mut res = vec![];
for _ in 0 .. iters {
res.push(f(reader)?);
}
let mut actual_res = Vec::with_capacity(res.len());
if &name == b"o_indexes" {
for o_index in res {
actual_res.push(u64::from_le_bytes(o_index.try_into().map_err(|_| {
io::Error::new(io::ErrorKind::Other, "node didn't provide 8 bytes for a u64")
})?));
}
return Ok(actual_res);
}
}
// Didn't return a response with o_indexes
// TODO: Check if this didn't have o_indexes because it's an error response
Err(io::Error::new(io::ErrorKind::Other, "response didn't contain o_indexes"))
};
read_object(&mut indexes)
})()
.map_err(|_| RpcError::InvalidNode)
}
/// Get the cumulative output distribution, from the `from` height to the `to` height (both
/// inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if their
/// timelock has been satisfied. This is distinct from being free of the 10-block lock applied to
/// all Monero transactions.
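/// Callers wanting outputs spendable now must accordingly still enforce the 10-block lock
/// themselves.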
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
txid: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
let txs = self
.get_transactions(
&outs
.outs
.iter()
.map(|out| rpc_hex(&out.txid)?.try_into().map_err(|_| RpcError::InvalidNode))
.collect::<Result<Vec<_>, _>>()?,
)
.await?;
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.outs
.iter()
.enumerate()
.map(|(i, out)| {
Ok(
Some([rpc_point(&out.key)?, rpc_point(&out.mask)?])
.filter(|_| Timelock::Block(height) >= txs[i].prefix.timelock),
)
})
.collect()
}
/// Get the currently estimated fee from the node. This may be manipulated to unsafe levels and
/// MUST be sanity checked.
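/// The fee is returned as a per-weight rate, quantized by the also-returned mask.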
// TODO: Take a sanity check argument
pub async fn get_fee(&self) -> Result<Fee, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct FeeResponse {
fee: u64,
quantization_mask: u64,
}
let res: FeeResponse = self.json_rpc_call("get_fee_estimate", None).await?;
Ok(Fee { per_weight: res.fee, mask: res.quantization_mask })
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(tx.serialize()) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
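/// Generate the specified amount of blocks to the specified address. This is presumably only
/// usable against daemons which allow mining over RPC (such as a regtest daemon), making it
/// intended for test environments.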
pub async fn generate_blocks(&self, address: &str, block_count: usize) -> Result<(), RpcError> {
self
.rpc_call::<_, EmptyResponse>(
"json_rpc",
Some(json!({
"method": "generateblocks",
"params": {
"wallet_address": address,
"amount_of_blocks": block_count
},
})),
)
.await?;
Ok(())
}
}


@@ -1,4 +1,8 @@
use std::io::{self, Read, Write};
use core::fmt::Debug;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::{
scalar::Scalar,
@@ -67,14 +71,18 @@ pub(crate) fn read_byte<R: Read>(r: &mut R) -> io::Result<u8> {
Ok(read_bytes::<_, 1>(r)?[0])
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
pub(crate) fn read_u16<R: Read>(r: &mut R) -> io::Result<u16> {
read_bytes(r).map(u16::from_le_bytes)
}
pub(crate) fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
read_bytes(r).map(u32::from_le_bytes)
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
}
pub(crate) fn read_varint<R: Read>(r: &mut R) -> io::Result<u64> {
let mut bits = 0;
let mut res = 0;
@@ -117,7 +125,7 @@ pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
pub(crate) fn read_torsion_free_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
read_point(r)
.ok()
.filter(|point| point.is_torsion_free())
.filter(EdwardsPoint::is_torsion_free)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))
}
@@ -133,6 +141,13 @@ pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
Ok(res)
}
pub(crate) fn read_array<R: Read, T: Debug, F: Fn(&mut R) -> io::Result<T>, const N: usize>(
f: F,
r: &mut R,
) -> io::Result<[T; N]> {
read_raw_vec(f, N, r).map(|vec| vec.try_into().unwrap())
}
pub(crate) fn read_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
r: &mut R,


@@ -33,9 +33,9 @@ fn standard_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, STANDARD).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Standard);
assert!(!addr.meta.kind.subaddress());
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), STANDARD);
@@ -46,9 +46,9 @@ fn integrated_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, INTEGRATED).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Integrated(PAYMENT_ID));
assert!(!addr.meta.kind.subaddress());
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), Some(PAYMENT_ID));
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), INTEGRATED);
@@ -59,9 +59,9 @@ fn subaddress() {
let addr = MoneroAddress::from_str(Network::Mainnet, SUBADDRESS).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Subaddress);
assert!(addr.meta.kind.subaddress());
assert!(addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SUB_SPEND);
assert_eq!(addr.view.compress().to_bytes(), SUB_VIEW);
assert_eq!(addr.to_string(), SUBADDRESS);
@@ -100,9 +100,9 @@ fn featured() {
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.subaddress(), subaddress);
assert_eq!(addr.is_subaddress(), subaddress);
assert_eq!(addr.payment_id(), payment_id);
assert_eq!(addr.guaranteed(), guaranteed);
assert_eq!(addr.is_guaranteed(), guaranteed);
}
}
}
@@ -151,10 +151,10 @@ fn featured_vectors() {
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.subaddress(), vector.subaddress);
assert_eq!(addr.is_subaddress(), vector.subaddress);
assert_eq!(vector.integrated, vector.payment_id.is_some());
assert_eq!(addr.payment_id(), vector.payment_id);
assert_eq!(addr.guaranteed(), vector.guaranteed);
assert_eq!(addr.is_guaranteed(), vector.guaranteed);
assert_eq!(
MoneroAddress::new(


@@ -1,5 +1,5 @@
use hex_literal::hex;
use rand::rngs::OsRng;
use rand_core::OsRng;
use curve25519_dalek::{scalar::Scalar, edwards::CompressedEdwardsY};
use multiexp::BatchVerifier;


@@ -1,6 +1,8 @@
use core::ops::Deref;
#[cfg(feature = "multisig")]
use std::sync::{Arc, RwLock};
use std_shims::sync::Arc;
#[cfg(feature = "multisig")]
use std::sync::RwLock;
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
@@ -24,7 +26,10 @@ use crate::{
use crate::ringct::clsag::{ClsagDetails, ClsagMultisig};
#[cfg(feature = "multisig")]
use frost::tests::{key_gen, algorithm_machines, sign};
use frost::{
Participant,
tests::{key_gen, algorithm_machines, sign},
};
const RING_LEN: u64 = 11;
const AMOUNT: u64 = 1337;
@@ -42,13 +47,12 @@ fn clsag() {
for i in 0 .. RING_LEN {
let dest = Zeroizing::new(random_scalar(&mut OsRng));
let mask = random_scalar(&mut OsRng);
let amount;
if i == real {
let amount = if i == real {
secrets = (dest.clone(), mask);
amount = AMOUNT;
AMOUNT
} else {
amount = OsRng.next_u64();
}
OsRng.next_u64()
};
ring
.push([dest.deref() * &ED25519_BASEPOINT_TABLE, Commitment::new(mask, amount).calculate()]);
}
@@ -63,7 +67,7 @@ fn clsag() {
Commitment::new(secrets.1, AMOUNT),
Decoys {
i: u8::try_from(real).unwrap(),
offsets: (1 ..= RING_LEN).into_iter().collect(),
offsets: (1 ..= RING_LEN).collect(),
ring: ring.clone(),
},
)
@@ -87,31 +91,26 @@ fn clsag_multisig() {
for i in 0 .. RING_LEN {
let dest;
let mask;
let amount;
if i != u64::from(RING_INDEX) {
let amount = if i == u64::from(RING_INDEX) {
dest = keys[&Participant::new(1).unwrap()].group_key().0;
mask = randomness;
AMOUNT
} else {
dest = &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE;
mask = random_scalar(&mut OsRng);
amount = OsRng.next_u64();
} else {
dest = keys[&1].group_key().0;
mask = randomness;
amount = AMOUNT;
}
OsRng.next_u64()
};
ring.push([dest, Commitment::new(mask, amount).calculate()]);
}
let mask_sum = random_scalar(&mut OsRng);
let algorithm = ClsagMultisig::new(
RecommendedTranscript::new(b"Monero Serai CLSAG Test"),
keys[&1].group_key().0,
keys[&Participant::new(1).unwrap()].group_key().0,
Arc::new(RwLock::new(Some(ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys {
i: RING_INDEX,
offsets: (1 ..= RING_LEN).into_iter().collect(),
ring: ring.clone(),
},
Decoys { i: RING_INDEX, offsets: (1 ..= RING_LEN).collect(), ring: ring.clone() },
)
.unwrap(),
mask_sum,


@@ -1,3 +1,4 @@
mod clsag;
mod bulletproofs;
mod address;
mod seed;


@@ -0,0 +1,177 @@
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use crate::{
hash,
wallet::seed::{Seed, Language, classic::trim_by_lang},
};
#[test]
fn test_classic_seed() {
struct Vector {
language: Language,
seed: String,
spend: String,
view: String,
}
let vectors = [
Vector {
language: Language::Chinese,
seed: "摇 曲 艺 武 滴 然 效 似 赏 式 祥 歌 买 疑 小 碧 堆 博 键 房 鲜 悲 付 喷 武".into(),
spend: "a5e4fff1706ef9212993a69f246f5c95ad6d84371692d63e9bb0ea112a58340d".into(),
view: "1176c43ce541477ea2f3ef0b49b25112b084e26b8a843e1304ac4677b74cdf02".into(),
},
Vector {
language: Language::English,
seed: "washing thirsty occur lectures tuesday fainted toxic adapt \
abnormal memoir nylon mostly building shrugged online ember northern \
ruby woes dauntless boil family illness inroads northern"
.into(),
spend: "c0af65c0dd837e666b9d0dfed62745f4df35aed7ea619b2798a709f0fe545403".into(),
view: "513ba91c538a5a9069e0094de90e927c0cd147fa10428ce3ac1afd49f63e3b01".into(),
},
Vector {
language: Language::Dutch,
seed: "setwinst riphagen vimmetje extase blief tuitelig fuiven meifeest \
ponywagen zesmaal ripdeal matverf codetaal leut ivoor rotten \
wisgerhof winzucht typograaf atrium rein zilt traktaat verzaagd setwinst"
.into(),
spend: "e2d2873085c447c2bc7664222ac8f7d240df3aeac137f5ff2022eaa629e5b10a".into(),
view: "eac30b69477e3f68093d131c7fd961564458401b07f8c87ff8f6030c1a0c7301".into(),
},
Vector {
language: Language::French,
seed: "poids vaseux tarte bazar poivre effet entier nuance \
sensuel ennui pacte osselet poudre battre alibi mouton \
stade paquet pliage gibier type question position projet pliage"
.into(),
spend: "2dd39ff1a4628a94b5c2ec3e42fb3dfe15c2b2f010154dc3b3de6791e805b904".into(),
view: "6725b32230400a1032f31d622b44c3a227f88258939b14a7c72e00939e7bdf0e".into(),
},
Vector {
language: Language::Spanish,
seed: "minero ocupar mirar evadir octubre cal logro miope \
opaco disco ancla litio clase cuello nasal clase \
fiar avance deseo mente grumo negro cordón croqueta clase"
.into(),
spend: "ae2c9bebdddac067d73ec0180147fc92bdf9ac7337f1bcafbbe57dd13558eb02".into(),
view: "18deafb34d55b7a43cae2c1c1c206a3c80c12cc9d1f84640b484b95b7fec3e05".into(),
},
Vector {
language: Language::German,
seed: "Kaliber Gabelung Tapir Liveband Favorit Specht Enklave Nabel \
Jupiter Foliant Chronik nisten löten Vase Aussage Rekord \
Yeti Gesetz Eleganz Alraune Künstler Almweide Jahr Kastanie Almweide"
.into(),
spend: "79801b7a1b9796856e2397d862a113862e1fdc289a205e79d8d70995b276db06".into(),
view: "99f0ec556643bd9c038a4ed86edcb9c6c16032c4622ed2e000299d527a792701".into(),
},
Vector {
language: Language::Italian,
seed: "cavo pancetta auto fulmine alleanza filmato diavolo prato \
forzare meritare litigare lezione segreto evasione votare buio \
licenza cliente dorso natale crescere vento tutelare vetta evasione"
.into(),
spend: "5e7fd774eb00fa5877e2a8b4dc9c7ffe111008a3891220b56a6e49ac816d650a".into(),
view: "698a1dce6018aef5516e82ca0cb3e3ec7778d17dfb41a137567bfa2e55e63a03".into(),
},
Vector {
language: Language::Portuguese,
seed: "agito eventualidade onus itrio holograma sodomizar objetos dobro \
iugoslavo bcrepuscular odalisca abjeto iuane darwinista eczema acetona \
cibernetico hoquei gleba driver buffer azoto megera nogueira agito"
.into(),
spend: "13b3115f37e35c6aa1db97428b897e584698670c1b27854568d678e729200c0f".into(),
view: "ad1b4fd35270f5f36c4da7166672b347e75c3f4d41346ec2a06d1d0193632801".into(),
},
Vector {
language: Language::Japanese,
seed: "ぜんぶ どうぐ おたがい せんきょ おうじ そんちょう じゅしん いろえんぴつ \
かほう つかれる えらぶ にちじょう くのう にちようび ぬまえび さんきゃく \
おおや ちぬき うすめる いがく せつでん さうな すいえい せつだん おおや"
.into(),
spend: "c56e895cdb13007eda8399222974cdbab493640663804b93cbef3d8c3df80b0b".into(),
view: "6c3634a313ec2ee979d565c33888fd7c3502d696ce0134a8bc1a2698c7f2c508".into(),
},
Vector {
language: Language::Russian,
seed: "шатер икра нация ехать получать инерция доза реальный \
рыжий таможня лопата душа веселый клетка атлас лекция \
обгонять паек наивный лыжный дурак стать ежик задача паек"
.into(),
spend: "7cb5492df5eb2db4c84af20766391cd3e3662ab1a241c70fc881f3d02c381f05".into(),
view: "fcd53e41ec0df995ab43927f7c44bc3359c93523d5009fb3f5ba87431d545a03".into(),
},
Vector {
language: Language::Esperanto,
seed: "ukazo klini peco etikedo fabriko imitado onklino urino \
pudro incidento kumuluso ikono smirgi hirundo uretro krii \
sparkado super speciala pupo alpinisto cvana vokegi zombio fabriko"
.into(),
spend: "82ebf0336d3b152701964ed41df6b6e9a035e57fc98b84039ed0bd4611c58904".into(),
view: "cd4d120e1ea34360af528f6a3e6156063312d9cefc9aa6b5218d366c0ed6a201".into(),
},
Vector {
language: Language::Lojban,
seed: "jetnu vensa julne xrotu xamsi julne cutci dakli \
mlatu xedja muvgau palpi xindo sfubu ciste cinri \
blabi darno dembi janli blabi fenki bukpu burcu blabi"
.into(),
spend: "e4f8c6819ab6cf792cebb858caabac9307fd646901d72123e0367ebc0a79c200".into(),
view: "c806ce62bafaa7b2d597f1a1e2dbe4a2f96bfd804bf6f8420fc7f4a6bd700c00".into(),
},
Vector {
language: Language::EnglishOld,
seed: "glorious especially puff son moment add youth nowhere \
throw glide grip wrong rhythm consume very swear \
bitter heavy eventually begin reason flirt type unable"
.into(),
spend: "647f4765b66b636ff07170ab6280a9a6804dfbaf19db2ad37d23be024a18730b".into(),
view: "045da65316a906a8c30046053119c18020b07a7a3a6ef5c01ab2a8755416bd02".into(),
},
];
for vector in vectors {
let trim_seed = |seed: &str| {
seed
.split_whitespace()
.map(|word| trim_by_lang(word, vector.language))
.collect::<Vec<_>>()
.join(" ")
};
// Test against Monero
{
let seed = Seed::from_string(Zeroizing::new(vector.seed.clone())).unwrap();
assert_eq!(seed, Seed::from_string(Zeroizing::new(trim_seed(&vector.seed))).unwrap());
let spend: [u8; 32] = hex::decode(vector.spend).unwrap().try_into().unwrap();
// For classical seeds, Monero directly uses the entropy as a spend key
assert_eq!(
Scalar::from_canonical_bytes(*seed.entropy()),
Scalar::from_canonical_bytes(spend)
);
let view: [u8; 32] = hex::decode(vector.view).unwrap().try_into().unwrap();
// Monero then derives the view key as H(spend)
assert_eq!(
Scalar::from_bytes_mod_order(hash(&spend)),
Scalar::from_canonical_bytes(view).unwrap()
);
assert_eq!(Seed::from_entropy(vector.language, Zeroizing::new(spend)).unwrap(), seed);
}
// Test against ourself
{
let seed = Seed::new(&mut OsRng, vector.language);
assert_eq!(seed, Seed::from_string(Zeroizing::new(trim_seed(&seed.to_string()))).unwrap());
assert_eq!(seed, Seed::from_entropy(vector.language, seed.entropy()).unwrap());
assert_eq!(seed, Seed::from_string(seed.to_string()).unwrap());
}
}
}


@@ -1,5 +1,8 @@
use core::cmp::Ordering;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
@@ -17,7 +20,7 @@ use crate::{
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Input {
Gen(u64),
ToKey { amount: u64, key_offsets: Vec<u64>, key_image: EdwardsPoint },
ToKey { amount: Option<u64>, key_offsets: Vec<u64>, key_image: EdwardsPoint },
}
impl Input {
@@ -30,28 +33,38 @@ impl Input {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Input::Gen(height) => {
Self::Gen(height) => {
w.write_all(&[255])?;
write_varint(height, w)
}
Input::ToKey { amount, key_offsets, key_image } => {
Self::ToKey { amount, key_offsets, key_image } => {
w.write_all(&[2])?;
write_varint(amount, w)?;
write_varint(&amount.unwrap_or(0), w)?;
write_vec(write_varint, key_offsets, w)?;
write_point(key_image, w)
}
}
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Input> {
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(interpret_as_rct: bool, r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
255 => Input::Gen(read_varint(r)?),
2 => Input::ToKey {
amount: read_varint(r)?,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
},
255 => Self::Gen(read_varint(r)?),
2 => {
let amount = read_varint(r)?;
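// RCT inputs serialize their amount as 0, the actual amount being hidden behind a commitment,
// so a zero amount on an RCT input is parsed as None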
let amount = if (amount == 0) && interpret_as_rct { None } else { Some(amount) };
Self::ToKey {
amount,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
}
}
_ => {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown/unused input type"))?
}
@@ -62,7 +75,7 @@ impl Input {
// Doesn't bother moving to an enum for the unused Script classes
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
pub amount: u64,
pub amount: Option<u64>,
pub key: CompressedEdwardsY,
pub view_tag: Option<u8>,
}
@@ -73,7 +86,7 @@ impl Output {
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.amount, w)?;
write_varint(&self.amount.unwrap_or(0), w)?;
w.write_all(&[2 + u8::from(self.view_tag.is_some())])?;
w.write_all(&self.key.to_bytes())?;
if let Some(view_tag) = self.view_tag {
@@ -82,8 +95,23 @@ impl Output {
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Output> {
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(8 + 1 + 32);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(interpret_as_rct: bool, r: &mut R) -> io::Result<Self> {
let amount = read_varint(r)?;
let amount = if interpret_as_rct {
if amount != 0 {
Err(io::Error::new(io::ErrorKind::Other, "RCT TX output wasn't 0"))?;
}
None
} else {
Some(amount)
};
let view_tag = match read_byte(r)? {
2 => false,
3 => true,
@@ -93,7 +121,7 @@ impl Output {
))?,
};
Ok(Output {
Ok(Self {
amount,
key: CompressedEdwardsY(read_bytes(r)?),
view_tag: if view_tag { Some(read_byte(r)?) } else { None },
@@ -109,22 +137,22 @@ pub enum Timelock {
}
impl Timelock {
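// Monero encodes a timelock as a single u64: 0 for no timelock, values under 500,000,000 as a
// block height, and anything else as a UNIX timestamp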
fn from_raw(raw: u64) -> Timelock {
fn from_raw(raw: u64) -> Self {
if raw == 0 {
Timelock::None
Self::None
} else if raw < 500_000_000 {
Timelock::Block(usize::try_from(raw).unwrap())
Self::Block(usize::try_from(raw).unwrap())
} else {
Timelock::Time(raw)
Self::Time(raw)
}
}
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(
&match self {
Timelock::None => 0,
Timelock::Block(block) => (*block).try_into().unwrap(),
Timelock::Time(time) => *time,
Self::None => 0,
Self::Block(block) => (*block).try_into().unwrap(),
Self::Time(time) => *time,
},
w,
)
@@ -134,9 +162,9 @@ impl Timelock {
impl PartialOrd for Timelock {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self, other) {
(Timelock::None, _) => Some(Ordering::Less),
(Timelock::Block(a), Timelock::Block(b)) => a.partial_cmp(b),
(Timelock::Time(a), Timelock::Time(b)) => a.partial_cmp(b),
(Self::None, _) => Some(Ordering::Less),
(Self::Block(a), Self::Block(b)) => a.partial_cmp(b),
(Self::Time(a), Self::Time(b)) => a.partial_cmp(b),
_ => None,
}
}
@@ -172,24 +200,48 @@ impl TransactionPrefix {
w.write_all(&self.extra)
}
pub fn read<R: Read>(r: &mut R) -> io::Result<TransactionPrefix> {
let mut prefix = TransactionPrefix {
version: read_varint(r)?,
timelock: Timelock::from_raw(read_varint(r)?),
inputs: read_vec(Input::read, r)?,
outputs: read_vec(Output::read, r)?,
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let version = read_varint(r)?;
// TODO: Create an enum out of version
if (version == 0) || (version > 2) {
Err(io::Error::new(io::ErrorKind::Other, "unrecognized transaction version"))?;
}
let timelock = Timelock::from_raw(read_varint(r)?);
let inputs = read_vec(|r| Input::read(version == 2, r), r)?;
if inputs.is_empty() {
Err(io::Error::new(io::ErrorKind::Other, "transaction had no inputs"))?;
}
let is_miner_tx = matches!(inputs[0], Input::Gen { .. });
let mut prefix = Self {
version,
timelock,
inputs,
outputs: read_vec(|r| Output::read((!is_miner_tx) && (version == 2), r), r)?,
extra: vec![],
};
prefix.extra = read_vec(read_byte, r)?;
Ok(prefix)
}
pub fn hash(&self) -> [u8; 32] {
hash(&self.serialize())
}
}
/// Monero transaction. For version 1, rct_signatures still contains an accurate fee value.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Transaction {
pub prefix: TransactionPrefix,
pub signatures: Vec<(Scalar, Scalar)>,
pub signatures: Vec<Vec<(Scalar, Scalar)>>,
pub rct_signatures: RctSignatures,
}
@@ -207,9 +259,11 @@ impl Transaction {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.prefix.write(w)?;
if self.prefix.version == 1 {
for sig in &self.signatures {
write_scalar(&sig.0, w)?;
write_scalar(&sig.1, w)?;
for sigs in &self.signatures {
for sig in sigs {
write_scalar(&sig.0, w)?;
write_scalar(&sig.1, w)?;
}
}
Ok(())
} else if self.prefix.version == 2 {
@@ -219,27 +273,44 @@ impl Transaction {
}
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Transaction> {
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(2048);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let prefix = TransactionPrefix::read(r)?;
let mut signatures = vec![];
let mut rct_signatures = RctSignatures {
base: RctBase { fee: 0, ecdh_info: vec![], commitments: vec![] },
base: RctBase { fee: 0, encrypted_amounts: vec![], pseudo_outs: vec![], commitments: vec![] },
prunable: RctPrunable::Null,
};
if prefix.version == 1 {
for _ in 0 .. prefix.inputs.len() {
signatures.push((read_scalar(r)?, read_scalar(r)?));
}
signatures = prefix
.inputs
.iter()
.filter_map(|input| match input {
Input::ToKey { key_offsets, .. } => Some(
key_offsets
.iter()
.map(|_| Ok((read_scalar(r)?, read_scalar(r)?)))
.collect::<Result<_, io::Error>>(),
),
_ => None,
})
.collect::<Result<_, _>>()?;
rct_signatures.base.fee = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => 0,
Input::ToKey { amount, .. } => *amount,
Input::ToKey { amount, .. } => amount.unwrap(),
})
.sum::<u64>()
.saturating_sub(prefix.outputs.iter().map(|output| output.amount).sum());
.saturating_sub(prefix.outputs.iter().map(|output| output.amount.unwrap()).sum());
} else if prefix.version == 2 {
rct_signatures = RctSignatures::read(
prefix
@@ -257,7 +328,7 @@ impl Transaction {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown version"))?;
}
Ok(Transaction { prefix, signatures, rct_signatures })
Ok(Self { prefix, signatures, rct_signatures })
}
pub fn hash(&self) -> [u8; 32] {
@@ -268,22 +339,21 @@ impl Transaction {
} else {
let mut hashes = Vec::with_capacity(96);
self.prefix.write(&mut buf).unwrap();
hashes.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
self.rct_signatures.base.write(&mut buf, self.rct_signatures.prunable.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
match self.rct_signatures.prunable {
RctPrunable::Null => buf.resize(32, 0),
_ => {
self.rct_signatures.prunable.write(&mut buf).unwrap();
buf = hash(&buf).to_vec();
hashes.extend(&match self.rct_signatures.prunable {
RctPrunable::Null => [0; 32],
RctPrunable::MlsagBorromean { .. } |
RctPrunable::MlsagBulletproofs { .. } |
RctPrunable::Clsag { .. } => {
self.rct_signatures.prunable.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hash(&buf)
}
}
hashes.extend(&buf);
});
hash(&hashes)
}
@@ -294,11 +364,9 @@ impl Transaction {
let mut buf = Vec::with_capacity(2048);
let mut sig_hash = Vec::with_capacity(96);
self.prefix.write(&mut buf).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
sig_hash.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.prunable.rct_type()).unwrap();
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();


@@ -1,7 +1,5 @@
use core::{marker::PhantomData, fmt::Debug};
use std::string::ToString;
use thiserror::Error;
use std_shims::string::{String, ToString};
use zeroize::Zeroize;
@@ -34,18 +32,18 @@ pub struct SubaddressIndex {
}
impl SubaddressIndex {
pub const fn new(account: u32, address: u32) -> Option<SubaddressIndex> {
pub const fn new(account: u32, address: u32) -> Option<Self> {
if (account == 0) && (address == 0) {
return None;
}
Some(SubaddressIndex { account, address })
Some(Self { account, address })
}
pub fn account(&self) -> u32 {
pub const fn account(&self) -> u32 {
self.account
}
pub fn address(&self) -> u32 {
pub const fn address(&self) -> u32 {
self.address
}
}
@@ -60,23 +58,22 @@ pub enum AddressSpec {
}
impl AddressType {
pub fn subaddress(&self) -> bool {
matches!(self, AddressType::Subaddress) ||
matches!(self, AddressType::Featured { subaddress: true, .. })
pub const fn is_subaddress(&self) -> bool {
matches!(self, Self::Subaddress) || matches!(self, Self::Featured { subaddress: true, .. })
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
if let AddressType::Integrated(id) = self {
pub const fn payment_id(&self) -> Option<[u8; 8]> {
if let Self::Integrated(id) = self {
Some(*id)
} else if let AddressType::Featured { payment_id, .. } = self {
} else if let Self::Featured { payment_id, .. } = self {
*payment_id
} else {
None
}
}
pub fn guaranteed(&self) -> bool {
matches!(self, AddressType::Featured { guaranteed: true, .. })
pub const fn is_guaranteed(&self) -> bool {
matches!(self, Self::Featured { guaranteed: true, .. })
}
}
@@ -114,19 +111,20 @@ impl<B: AddressBytes> Zeroize for AddressMeta<B> {
}
/// Error when decoding an address.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum AddressError {
#[error("invalid address byte")]
#[cfg_attr(feature = "std", error("invalid address byte"))]
InvalidByte,
#[error("invalid address encoding")]
#[cfg_attr(feature = "std", error("invalid address encoding"))]
InvalidEncoding,
#[error("invalid length")]
#[cfg_attr(feature = "std", error("invalid length"))]
InvalidLength,
#[error("invalid key")]
#[cfg_attr(feature = "std", error("invalid key"))]
InvalidKey,
#[error("unknown features")]
#[cfg_attr(feature = "std", error("unknown features"))]
UnknownFeatures,
#[error("different network than expected")]
#[cfg_attr(feature = "std", error("different network than expected"))]
DifferentNetwork,
}
@@ -143,8 +141,8 @@ impl<B: AddressBytes> AddressMeta<B> {
}
/// Create an address's metadata.
pub fn new(network: Network, kind: AddressType) -> Self {
AddressMeta { _bytes: PhantomData, network, kind }
pub const fn new(network: Network, kind: AddressType) -> Self {
Self { _bytes: PhantomData, network, kind }
}
// Returns an incomplete instantiation in the case of Integrated/Featured addresses
@@ -161,7 +159,7 @@ impl<B: AddressBytes> AddressMeta<B> {
}
_ => None,
} {
meta = Some(AddressMeta::new(network, kind));
meta = Some(Self::new(network, kind));
break;
}
}
@@ -169,16 +167,16 @@ impl<B: AddressBytes> AddressMeta<B> {
meta.ok_or(AddressError::InvalidByte)
}
pub fn subaddress(&self) -> bool {
self.kind.subaddress()
pub const fn is_subaddress(&self) -> bool {
self.kind.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
pub const fn payment_id(&self) -> Option<[u8; 8]> {
self.kind.payment_id()
}
pub fn guaranteed(&self) -> bool {
self.kind.guaranteed()
pub const fn is_guaranteed(&self) -> bool {
self.kind.is_guaranteed()
}
}
@@ -217,8 +215,8 @@ impl<B: AddressBytes> ToString for Address<B> {
}
impl<B: AddressBytes> Address<B> {
pub fn new(meta: AddressMeta<B>, spend: EdwardsPoint, view: EdwardsPoint) -> Self {
Address { meta, spend, view }
pub const fn new(meta: AddressMeta<B>, spend: EdwardsPoint, view: EdwardsPoint) -> Self {
Self { meta, spend, view }
}
pub fn from_str_raw(s: &str) -> Result<Self, AddressError> {
@@ -268,7 +266,7 @@ impl<B: AddressBytes> Address<B> {
id.copy_from_slice(&raw[(read - 8) .. read]);
}
Ok(Address { meta, spend, view })
Ok(Self { meta, spend, view })
}
pub fn from_str(network: Network, s: &str) -> Result<Self, AddressError> {
@@ -281,30 +279,30 @@ impl<B: AddressBytes> Address<B> {
})
}
pub fn network(&self) -> Network {
pub const fn network(&self) -> Network {
self.meta.network
}
pub fn subaddress(&self) -> bool {
self.meta.subaddress()
pub const fn is_subaddress(&self) -> bool {
self.meta.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
pub const fn payment_id(&self) -> Option<[u8; 8]> {
self.meta.payment_id()
}
pub fn guaranteed(&self) -> bool {
self.meta.guaranteed()
pub const fn is_guaranteed(&self) -> bool {
self.meta.is_guaranteed()
}
}
/// Instantiation of the Address type with Monero's network bytes.
pub type MoneroAddress = Address<MoneroAddressBytes>;
// Allow re-interpreting of an arbitrary address as a monero address so it can be used with the
// Allow re-interpreting of an arbitrary address as a Monero address so it can be used with the
// rest of this library. Doesn't use From as it was conflicting with From<T> for T.
impl MoneroAddress {
pub fn from<B: AddressBytes>(address: Address<B>) -> MoneroAddress {
MoneroAddress::new(
pub const fn from<B: AddressBytes>(address: Address<B>) -> Self {
Self::new(
AddressMeta::new(address.meta.network, address.meta.kind),
address.spend,
address.view,


@@ -1,17 +1,22 @@
use std::{sync::Mutex, collections::HashSet};
use std_shims::{sync::OnceLock, vec::Vec, collections::HashSet};
use lazy_static::lazy_static;
#[cfg(not(feature = "std"))]
use std_shims::sync::Mutex;
#[cfg(feature = "std")]
use futures::lock::Mutex;
use zeroize::{Zeroize, ZeroizeOnDrop};
use rand_core::{RngCore, CryptoRng};
use rand_distr::{Distribution, Gamma};
use zeroize::{Zeroize, ZeroizeOnDrop};
#[cfg(not(feature = "std"))]
use rand_distr::num_traits::Float;
use curve25519_dalek::edwards::EdwardsPoint;
use crate::{
wallet::SpendableOutput,
rpc::{RpcError, Rpc},
rpc::{RpcError, RpcConnection, Rpc},
};
const LOCK_WINDOW: usize = 10;
@@ -19,17 +24,22 @@ const MATURITY: u64 = 60;
const RECENT_WINDOW: usize = 15;
const BLOCK_TIME: usize = 120;
const BLOCKS_PER_YEAR: usize = 365 * 24 * 60 * 60 / BLOCK_TIME;
#[allow(clippy::as_conversions)]
const TIP_APPLICATION: f64 = (LOCK_WINDOW * BLOCK_TIME) as f64;
lazy_static! {
static ref GAMMA: Gamma<f64> = Gamma::new(19.28, 1.0 / 1.61).unwrap();
static ref DISTRIBUTION: Mutex<Vec<u64>> = Mutex::new(Vec::with_capacity(3000000));
// TODO: Expose an API to reset this in case a reorg occurs/the RPC fails/returns garbage
// TODO: Update this when scanning a block, as possible
static DISTRIBUTION_CELL: OnceLock<Mutex<Vec<u64>>> = OnceLock::new();
#[allow(non_snake_case)]
fn DISTRIBUTION() -> &'static Mutex<Vec<u64>> {
DISTRIBUTION_CELL.get_or_init(|| Mutex::new(Vec::with_capacity(3_000_000)))
}
#[allow(clippy::too_many_arguments)]
async fn select_n<R: RngCore + CryptoRng>(
#[allow(clippy::too_many_arguments, clippy::as_conversions)]
async fn select_n<'a, R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
distribution: &[u64],
height: usize,
high: u64,
per_second: f64,
@@ -37,6 +47,12 @@ async fn select_n<R: RngCore + CryptoRng>(
used: &mut HashSet<u64>,
count: usize,
) -> Result<Vec<(u64, [EdwardsPoint; 2])>, RpcError> {
if height >= rpc.get_height().await? {
// TODO: Don't use InternalError for the caller's failure
Err(RpcError::InternalError("decoys being requested from too young blocks"))?;
}
#[cfg(test)]
let mut iters = 0;
let mut confirmed = Vec::with_capacity(count);
// Retries on failure. Retries are obvious as decoys, yet should be minimal
@@ -44,14 +60,17 @@ async fn select_n<R: RngCore + CryptoRng>(
let remaining = count - confirmed.len();
let mut candidates = Vec::with_capacity(remaining);
while candidates.len() != remaining {
iters += 1;
// This is cheap and on fresh chains, thousands of rounds may be needed
if iters == 10000 {
Err(RpcError::InternalError("not enough decoy candidates"))?;
#[cfg(test)]
{
iters += 1;
// This is cheap and on fresh chains, a lot of rounds may be needed
if iters == 100 {
Err(RpcError::InternalError("hit decoy selection round limit"))?;
}
}
// Use a gamma distribution
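// (x is sampled from Gamma(shape 19.28, rate 1.61), an output age of e^x seconds is derived,
// and that age is mapped to a global output index via the cumulative output distribution)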
let mut age = GAMMA.sample(rng).exp();
let mut age = Gamma::<f64>::new(19.28, 1.0 / 1.61).unwrap().sample(rng).exp();
if age > TIP_APPLICATION {
age -= TIP_APPLICATION;
} else {
@@ -61,7 +80,6 @@ async fn select_n<R: RngCore + CryptoRng>(
let o = (age * per_second) as u64;
if o < high {
let distribution = DISTRIBUTION.lock().unwrap();
let i = distribution.partition_point(|s| *s < (high - 1 - o));
let prev = i.saturating_sub(1);
let n = distribution[i] - distribution[prev];
@@ -129,13 +147,19 @@ impl Decoys {
}
/// Select decoys using the same distribution as Monero.
pub async fn select<R: RngCore + CryptoRng>(
#[allow(clippy::as_conversions)]
pub async fn select<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
) -> Result<Vec<Decoys>, RpcError> {
) -> Result<Vec<Self>, RpcError> {
#[cfg(not(feature = "std"))]
let mut distribution = DISTRIBUTION().lock();
#[cfg(feature = "std")]
let mut distribution = DISTRIBUTION().lock().await;
let decoy_count = ring_len - 1;
// Convert the inputs in question to the raw output data
@@ -146,29 +170,19 @@ impl Decoys {
outputs.push((real[real.len() - 1], [input.key(), input.commitment().calculate()]));
}
let distribution_len = {
let distribution = DISTRIBUTION.lock().unwrap();
distribution.len()
};
if distribution_len <= height {
let extension = rpc.get_output_distribution(distribution_len, height).await?;
DISTRIBUTION.lock().unwrap().extend(extension);
if distribution.len() <= height {
let extension = rpc.get_output_distribution(distribution.len(), height).await?;
distribution.extend(extension);
}
// If asked to use an older height than previously asked, truncate to ensure accuracy
// Should never happen, yet risks desyncing if it did
distribution.truncate(height + 1); // height is inclusive, and 0 is a valid height
let high;
let per_second;
{
let mut distribution = DISTRIBUTION.lock().unwrap();
// If asked to use an older height than previously asked, truncate to ensure accuracy
// Should never happen, yet risks desyncing if it did
distribution.truncate(height + 1); // height is inclusive, and 0 is a valid height
high = distribution[distribution.len() - 1];
per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let outputs = high - distribution[distribution.len().saturating_sub(blocks + 1)];
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let high = distribution[distribution.len() - 1];
let per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let outputs = high - distribution[distribution.len().saturating_sub(blocks + 1)];
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let mut used = HashSet::<u64>::new();
@@ -176,7 +190,7 @@ impl Decoys {
used.insert(o.0);
}
// TODO: Simply create a TX with less than the target amount
// TODO: Create a TX with less than the target amount, as allowed by the protocol
if (high - MATURITY) < u64::try_from(inputs.len() * ring_len).unwrap() {
Err(RpcError::InternalError("not enough decoy candidates"))?;
}
@@ -184,9 +198,18 @@ impl Decoys {
// Select all decoys for this transaction, assuming we generate a sane transaction
// We should almost never naturally generate an insane transaction, hence why this doesn't
// bother with an overage
let mut decoys =
select_n(rng, rpc, height, high, per_second, &real, &mut used, inputs.len() * decoy_count)
.await?;
let mut decoys = select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&real,
&mut used,
inputs.len() * decoy_count,
)
.await?;
real.zeroize();
let mut res = Vec::with_capacity(inputs.len());
@@ -209,6 +232,7 @@ impl Decoys {
let target_median = high * 3 / 5;
while ring[ring_len / 2].0 < target_median {
// If it's not, update the bottom half with new values to ensure the median only moves up
#[allow(clippy::needless_collect)] // Needed for ownership reasons
for removed in ring.drain(0 .. (ring_len / 2)).collect::<Vec<_>>() {
// If we removed the real spend, add it back
if removed.0 == o.0 {
@@ -224,8 +248,18 @@ impl Decoys {
// Select new outputs until we have a full sized ring again
ring.extend(
select_n(rng, rpc, height, high, per_second, &[], &mut used, ring_len - ring.len())
.await?,
select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&[],
&mut used,
ring_len - ring.len(),
)
.await?,
);
ring.sort_by(|a, b| a.0.cmp(&b.0));
}
@@ -234,7 +268,7 @@ impl Decoys {
// members
}
res.push(Decoys {
res.push(Self {
// Binary searches for the real spend since we don't know where it sorted to
i: u8::try_from(ring.partition_point(|x| x.0 < o.0)).unwrap(),
offsets: offset(&ring.iter().map(|output| output.0).collect::<Vec<_>>()),


@@ -1,5 +1,8 @@
use core::ops::BitXor;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
@@ -12,44 +15,53 @@ use crate::serialize::{
pub const MAX_TX_EXTRA_NONCE_SIZE: usize = 255;
pub const PAYMENT_ID_MARKER: u8 = 0;
pub const ENCRYPTED_PAYMENT_ID_MARKER: u8 = 1;
// Used as it's the highest value not interpretable as a continued VarInt
pub const ARBITRARY_DATA_MARKER: u8 = 127;
// 1 byte is used for the marker
pub const MAX_ARBITRARY_DATA_SIZE: usize = MAX_TX_EXTRA_NONCE_SIZE - 1;
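// Arbitrary data is accordingly embedded as a Nonce field whose first byte is
// ARBITRARY_DATA_MARKER, followed by up to MAX_ARBITRARY_DATA_SIZE bytes of payload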
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum PaymentId {
pub enum PaymentId {
Unencrypted([u8; 32]),
Encrypted([u8; 8]),
}
impl BitXor<[u8; 8]> for PaymentId {
type Output = PaymentId;
type Output = Self;
fn bitxor(self, bytes: [u8; 8]) -> PaymentId {
fn bitxor(self, bytes: [u8; 8]) -> Self {
match self {
PaymentId::Unencrypted(_) => self,
PaymentId::Encrypted(id) => {
PaymentId::Encrypted((u64::from_le_bytes(id) ^ u64::from_le_bytes(bytes)).to_le_bytes())
// Don't perform the xor since this isn't intended to be encrypted with xor
Self::Unencrypted(_) => self,
Self::Encrypted(id) => {
Self::Encrypted((u64::from_le_bytes(id) ^ u64::from_le_bytes(bytes)).to_le_bytes())
}
}
}
}
impl PaymentId {
pub(crate) fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
PaymentId::Unencrypted(id) => {
w.write_all(&[0])?;
Self::Unencrypted(id) => {
w.write_all(&[PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
PaymentId::Encrypted(id) => {
w.write_all(&[1])?;
Self::Encrypted(id) => {
w.write_all(&[ENCRYPTED_PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
}
Ok(())
}
fn read<R: Read>(r: &mut R) -> io::Result<PaymentId> {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
0 => PaymentId::Unencrypted(read_bytes(r)?),
1 => PaymentId::Encrypted(read_bytes(r)?),
0 => Self::Unencrypted(read_bytes(r)?),
1 => Self::Encrypted(read_bytes(r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown payment ID type"))?,
})
}
@@ -57,7 +69,7 @@ impl PaymentId {
// Doesn't bother with padding nor MinerGate
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum ExtraField {
pub enum ExtraField {
PublicKey(EdwardsPoint),
Nonce(Vec<u8>),
MergeMining(usize, [u8; 32]),
@@ -65,22 +77,22 @@ pub(crate) enum ExtraField {
}
impl ExtraField {
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
ExtraField::PublicKey(key) => {
Self::PublicKey(key) => {
w.write_all(&[1])?;
w.write_all(&key.compress().to_bytes())?;
}
ExtraField::Nonce(data) => {
Self::Nonce(data) => {
w.write_all(&[2])?;
write_vec(write_byte, data, w)?;
}
ExtraField::MergeMining(height, merkle) => {
Self::MergeMining(height, merkle) => {
w.write_all(&[3])?;
write_varint(&u64::try_from(*height).unwrap(), w)?;
w.write_all(merkle)?;
}
ExtraField::PublicKeys(keys) => {
Self::PublicKeys(keys) => {
w.write_all(&[4])?;
write_vec(write_point, keys, w)?;
}
@@ -88,43 +100,47 @@ impl ExtraField {
Ok(())
}
fn read<R: Read>(r: &mut R) -> io::Result<ExtraField> {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(match read_byte(r)? {
1 => ExtraField::PublicKey(read_point(r)?),
2 => ExtraField::Nonce({
1 => Self::PublicKey(read_point(r)?),
2 => Self::Nonce({
let nonce = read_vec(read_byte, r)?;
if nonce.len() > MAX_TX_EXTRA_NONCE_SIZE {
Err(io::Error::new(io::ErrorKind::Other, "too long nonce"))?;
}
nonce
}),
3 => ExtraField::MergeMining(
3 => Self::MergeMining(
usize::try_from(read_varint(r)?)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "varint for height exceeds usize"))?,
read_bytes(r)?,
),
4 => ExtraField::PublicKeys(read_vec(read_point, r)?),
4 => Self::PublicKeys(read_vec(read_point, r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown extra field"))?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct Extra(Vec<ExtraField>);
pub struct Extra(Vec<ExtraField>);
impl Extra {
pub(crate) fn keys(&self) -> Vec<EdwardsPoint> {
let mut keys = Vec::with_capacity(2);
pub fn keys(&self) -> Option<(EdwardsPoint, Option<Vec<EdwardsPoint>>)> {
let mut key = None;
let mut additional = None;
for field in &self.0 {
match field.clone() {
ExtraField::PublicKey(key) => keys.push(key),
ExtraField::PublicKeys(additional) => keys.extend(additional),
_ => (),
ExtraField::PublicKey(this_key) => key = key.or(Some(this_key)),
ExtraField::PublicKeys(these_additional) => {
additional = additional.or(Some(these_additional));
}
ExtraField::Nonce(_) | ExtraField::MergeMining(..) => (),
}
}
keys
// Don't return any keys if this was non-standard and didn't include the primary key
key.map(|key| (key, additional))
}
pub(crate) fn payment_id(&self) -> Option<PaymentId> {
pub fn payment_id(&self) -> Option<PaymentId> {
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
return PaymentId::read::<&[u8]>(&mut data.as_ref()).ok();
@@ -133,29 +149,23 @@ impl Extra {
None
}
pub(crate) fn data(&self) -> Vec<Vec<u8>> {
let mut first = true;
pub fn data(&self) -> Vec<Vec<u8>> {
let mut res = vec![];
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
// Skip the first Nonce, which should be the payment ID
if first {
first = false;
continue;
if data[0] == ARBITRARY_DATA_MARKER {
res.push(data[1 ..].to_vec());
}
res.push(data.clone());
}
}
res
}
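
Arbitrary data travels in `Nonce` fields, distinguished from payment IDs by a leading `ARBITRARY_DATA_MARKER` byte which `data()` strips back off. A minimal sketch of that framing (the constants mirror the ones above; the helper names are ours):

const ARBITRARY_DATA_MARKER: u8 = 127;
const MAX_ARBITRARY_DATA_SIZE: usize = 255 - 1;

fn frame(data: &[u8]) -> Option<Vec<u8>> {
  if data.len() > MAX_ARBITRARY_DATA_SIZE {
    return None;
  }
  let mut nonce = vec![ARBITRARY_DATA_MARKER];
  nonce.extend(data);
  Some(nonce)
}

fn unframe(nonce: &[u8]) -> Option<&[u8]> {
  // Nonces without the marker are payment IDs or foreign data, not arbitrary data
  if nonce.first() != Some(&ARBITRARY_DATA_MARKER) {
    return None;
  }
  Some(&nonce[1 ..])
}

fn main() {
  let framed = frame(b"hello").unwrap();
  assert_eq!(unframe(&framed), Some(b"hello".as_slice()));
}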
pub(crate) fn new(mut keys: Vec<EdwardsPoint>) -> Extra {
let mut res = Extra(Vec::with_capacity(3));
if !keys.is_empty() {
res.push(ExtraField::PublicKey(keys[0]));
}
if keys.len() > 1 {
res.push(ExtraField::PublicKeys(keys.drain(1 ..).collect()));
pub(crate) fn new(key: EdwardsPoint, additional: Vec<EdwardsPoint>) -> Self {
let mut res = Self(Vec::with_capacity(3));
res.push(ExtraField::PublicKey(key));
if !additional.is_empty() {
res.push(ExtraField::PublicKeys(additional));
}
res
}
@@ -165,26 +175,37 @@ impl Extra {
}
#[rustfmt::skip]
pub(crate) fn fee_weight(outputs: usize, data: &[Vec<u8>]) -> usize {
pub(crate) fn fee_weight(
outputs: usize,
additional: bool,
payment_id: bool,
data: &[Vec<u8>]
) -> usize {
// PublicKey, key
(1 + 32) +
// PublicKeys, length, additional keys
(1 + 1 + (outputs.saturating_sub(1) * 32)) +
(if additional { 1 + 1 + (outputs * 32) } else { 0 }) +
// PaymentId (Nonce), length, encrypted, ID
(1 + 1 + 1 + 8) +
// Nonce, length, data (if existent)
data.iter().map(|v| 1 + varint_len(v.len()) + v.len()).sum::<usize>()
(if payment_id { 1 + 1 + 1 + 8 } else { 0 }) +
// Nonce, length, ARBITRARY_DATA_MARKER, data
data.iter().map(|v| 1 + varint_len(1 + v.len()) + 1 + v.len()).sum::<usize>()
}
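
Plugging numbers into `fee_weight`: two outputs with additional keys and a payment ID, yet no arbitrary data, come to (1 + 32) + (1 + 1 + 2 * 32) + (1 + 1 + 1 + 8) = 110 bytes of extra. A direct transcription of the formula, with `varint_len` stubbed for the single-byte lengths used here:

// Stub: lengths under 0x80 encode as a single-byte VarInt
fn varint_len(n: usize) -> usize {
  assert!(n < 0x80);
  1
}

fn extra_weight(outputs: usize, additional: bool, payment_id: bool, data: &[Vec<u8>]) -> usize {
  // PublicKey, key
  (1 + 32) +
    // PublicKeys, length, additional keys
    (if additional { 1 + 1 + (outputs * 32) } else { 0 }) +
    // PaymentId (Nonce), length, encrypted marker, ID
    (if payment_id { 1 + 1 + 1 + 8 } else { 0 }) +
    // Nonce, length, ARBITRARY_DATA_MARKER, data
    data.iter().map(|v| 1 + varint_len(1 + v.len()) + 1 + v.len()).sum::<usize>()
}

fn main() {
  assert_eq!(extra_weight(2, true, true, &[]), 110);
}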
pub(crate) fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for field in &self.0 {
field.write(w)?;
}
Ok(())
}
pub(crate) fn read<R: Read>(r: &mut R) -> io::Result<Extra> {
let mut res = Extra(vec![]);
pub fn serialize(&self) -> Vec<u8> {
let mut buf = vec![];
self.write(&mut buf).unwrap();
buf
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
let mut res = Self(vec![]);
let mut field;
while {
field = ExtraField::read(r);

View File

@@ -1,5 +1,5 @@
use core::ops::Deref;
use std::collections::{HashSet, HashMap};
use std_shims::collections::{HashSet, HashMap};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
@@ -9,11 +9,16 @@ use curve25519_dalek::{
edwards::{EdwardsPoint, CompressedEdwardsY},
};
use crate::{hash, hash_to_scalar, serialize::write_varint, transaction::Input};
use crate::{
hash, hash_to_scalar, serialize::write_varint, ringct::EncryptedAmount, transaction::Input,
};
mod extra;
pub mod extra;
pub(crate) use extra::{PaymentId, ExtraField, Extra};
/// Seed creation and parsing functionality.
pub mod seed;
/// Address encoding and decoding functionality.
pub mod address;
use address::{Network, AddressType, SubaddressIndex, AddressSpec, AddressMeta, MoneroAddress};
@@ -25,11 +30,15 @@ pub(crate) mod decoys;
pub(crate) use decoys::Decoys;
mod send;
pub use send::{Fee, TransactionError, SignableTransaction, SignableTransactionBuilder};
pub use send::{Fee, TransactionError, Change, SignableTransaction, Eventuality};
#[cfg(feature = "std")]
pub use send::SignableTransactionBuilder;
#[cfg(feature = "multisig")]
pub(crate) use send::InternalPayment;
#[cfg(feature = "multisig")]
pub use send::TransactionMachine;
fn key_image_sort(x: &EdwardsPoint, y: &EdwardsPoint) -> std::cmp::Ordering {
fn key_image_sort(x: &EdwardsPoint, y: &EdwardsPoint) -> core::cmp::Ordering {
x.compress().to_bytes().cmp(&y.compress().to_bytes()).reverse()
}
@@ -54,12 +63,11 @@ pub(crate) fn uniqueness(inputs: &[Input]) -> [u8; 32] {
#[allow(non_snake_case)]
pub(crate) fn shared_key(
uniqueness: Option<[u8; 32]>,
s: &Scalar,
P: &EdwardsPoint,
ecdh: EdwardsPoint,
o: usize,
) -> (u8, Scalar, [u8; 8]) {
// 8Ra
let mut output_derivation = (s * P).mul_by_cofactor().compress().to_bytes().to_vec();
let mut output_derivation = ecdh.mul_by_cofactor().compress().to_bytes().to_vec();
let mut payment_id_xor = [0; 8];
payment_id_xor
@@ -72,7 +80,7 @@ pub(crate) fn shared_key(
// uniqueness ||
let shared_key = if let Some(uniqueness) = uniqueness {
[uniqueness.as_ref(), &output_derivation].concat().to_vec()
[uniqueness.as_ref(), &output_derivation].concat()
} else {
output_derivation
};
@@ -80,20 +88,49 @@ pub(crate) fn shared_key(
(view_tag, hash_to_scalar(&shared_key), payment_id_xor)
}
pub(crate) fn commitment_mask(shared_key: Scalar) -> Scalar {
let mut mask = b"commitment_mask".to_vec();
mask.extend(shared_key.to_bytes());
hash_to_scalar(&mask)
}
pub(crate) fn amount_encryption(amount: u64, key: Scalar) -> [u8; 8] {
let mut amount_mask = b"amount".to_vec();
amount_mask.extend(key.to_bytes());
(amount ^ u64::from_le_bytes(hash(&amount_mask)[.. 8].try_into().unwrap())).to_le_bytes()
}
fn amount_decryption(amount: [u8; 8], key: Scalar) -> u64 {
u64::from_le_bytes(amount_encryption(u64::from_le_bytes(amount), key))
}
// TODO: Move this under EncryptedAmount?
fn amount_decryption(amount: &EncryptedAmount, key: Scalar) -> (Scalar, u64) {
match amount {
EncryptedAmount::Original { mask, amount } => {
#[cfg(feature = "experimental")]
{
let mask_shared_sec = hash(key.as_bytes());
let mask =
Scalar::from_bytes_mod_order(*mask) - Scalar::from_bytes_mod_order(mask_shared_sec);
let amount_shared_sec = hash(&mask_shared_sec);
let amount_scalar =
Scalar::from_bytes_mod_order(*amount) - Scalar::from_bytes_mod_order(amount_shared_sec);
// d2b from rctTypes.cpp
let amount = u64::from_le_bytes(amount_scalar.to_bytes()[0 .. 8].try_into().unwrap());
(mask, amount)
}
#[cfg(not(feature = "experimental"))]
{
let _ = mask;
let _ = amount;
todo!("decrypting a legacy monero transaction's amount")
}
}
EncryptedAmount::Compact { amount } => (
commitment_mask(key),
u64::from_le_bytes(amount_encryption(u64::from_le_bytes(*amount), key)),
),
}
}
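
`Compact` amounts are encrypted by XORing with the first 8 bytes of a hash over b"amount" and the shared key, so decryption just reapplies `amount_encryption`. A round-trip sketch; SHA-256 stands in for the crate's Keccak-256 `hash`, which changes the pad but not the symmetry:

use sha2::{Digest, Sha256};

// Stand-in keystream; monero-serai derives this with Keccak-256("amount" || key)
fn amount_pad(key: &[u8; 32]) -> [u8; 8] {
  let mut mask = b"amount".to_vec();
  mask.extend(key);
  Sha256::digest(&mask)[.. 8].try_into().unwrap()
}

fn amount_encryption(amount: u64, key: &[u8; 32]) -> [u8; 8] {
  (amount ^ u64::from_le_bytes(amount_pad(key))).to_le_bytes()
}

fn amount_decryption(amount: [u8; 8], key: &[u8; 32]) -> u64 {
  // XORing with the same pad undoes the encryption
  u64::from_le_bytes(amount_encryption(u64::from_le_bytes(amount), key))
}

fn main() {
  let key = [7; 32];
  let encrypted = amount_encryption(1_000_000, &key);
  assert_eq!(amount_decryption(encrypted, &key), 1_000_000);
}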
/// The private view key and public spend key, enabling scanning transactions.
@@ -104,11 +141,11 @@ pub struct ViewPair {
}
impl ViewPair {
pub fn new(spend: EdwardsPoint, view: Zeroizing<Scalar>) -> ViewPair {
ViewPair { spend, view }
pub const fn new(spend: EdwardsPoint, view: Zeroizing<Scalar>) -> Self {
Self { spend, view }
}
pub fn spend(&self) -> EdwardsPoint {
pub const fn spend(&self) -> EdwardsPoint {
self.spend
}
@@ -204,16 +241,19 @@ impl ZeroizeOnDrop for Scanner {}
impl Scanner {
/// Create a Scanner from a ViewPair.
///
/// burning_bug is a HashSet of used keys, intended to prevent key reuse which would burn funds.
///
/// When an output is successfully scanned, the output key MUST be saved to disk.
///
/// When a new scanner is created, ALL saved output keys must be passed in to be secure.
///
/// If None is passed, a modified shared key derivation is used which is immune to the burning
/// bug (specifically the Guaranteed feature from Featured Addresses).
// TODO: Should this take in a DB access handle to ensure output keys are saved?
pub fn from_view(pair: ViewPair, burning_bug: Option<HashSet<CompressedEdwardsY>>) -> Scanner {
pub fn from_view(pair: ViewPair, burning_bug: Option<HashSet<CompressedEdwardsY>>) -> Self {
let mut subaddresses = HashMap::new();
subaddresses.insert(pair.spend.compress(), None);
Scanner { pair, subaddresses, burning_bug }
Self { pair, subaddresses, burning_bug }
}
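
A usage sketch of the burning-bug guard documented above: persist every scanned output key, and feed the full set back in whenever a Scanner is rebuilt. The storage function is hypothetical; the types and module path are assumed to be monero-serai's exports:

use std::collections::HashSet;
use curve25519_dalek::edwards::CompressedEdwardsY;
use monero_serai::wallet::{ViewPair, Scanner};

// Hypothetical persistence layer; any durable store works
fn load_seen_output_keys() -> HashSet<CompressedEdwardsY> {
  HashSet::new() // e.g. read the saved keys from disk
}

fn scanner_for(pair: ViewPair) -> Scanner {
  // Some(...) enables the burning-bug check against every previously seen key;
  // None instead requires addresses using the Guaranteed feature
  Scanner::from_view(pair, Some(load_seen_output_keys()))
}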
/// Register a subaddress.

View File

@@ -1,4 +1,8 @@
use std::io::{self, Read, Write};
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, ZeroizeOnDrop};
@@ -9,10 +13,9 @@ use crate::{
serialize::{read_byte, read_u32, read_u64, read_bytes, read_scalar, read_point, read_raw_vec},
transaction::{Input, Timelock, Transaction},
block::Block,
rpc::{Rpc, RpcError},
rpc::{RpcError, RpcConnection, Rpc},
wallet::{
PaymentId, Extra, address::SubaddressIndex, Scanner, uniqueness, shared_key, amount_decryption,
commitment_mask,
},
};
@@ -35,8 +38,8 @@ impl AbsoluteId {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<AbsoluteId> {
Ok(AbsoluteId { tx: read_bytes(r)?, o: read_byte(r)? })
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { tx: read_bytes(r)?, o: read_byte(r)? })
}
}
@@ -63,8 +66,8 @@ impl OutputData {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<OutputData> {
Ok(OutputData {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
key: read_point(r)?,
key_offset: read_scalar(r)?,
commitment: Commitment::new(read_scalar(r)?, read_u64(r)?),
@@ -111,7 +114,8 @@ impl Metadata {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Metadata> {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
#[allow(clippy::if_then_some_else_none)] // The Result usage makes this invalid
let subaddress = if read_byte(r)? == 1 {
Some(
SubaddressIndex::new(read_u32(r)?, read_u32(r)?)
@@ -121,7 +125,7 @@ impl Metadata {
None
};
Ok(Metadata {
Ok(Self {
subaddress,
payment_id: read_bytes(r)?,
arbitrary_data: {
@@ -173,8 +177,8 @@ impl ReceivedOutput {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self {
absolute: AbsoluteId::read(r)?,
data: OutputData::read(r)?,
metadata: Metadata::read(r)?,
@@ -194,14 +198,20 @@ pub struct SpendableOutput {
impl SpendableOutput {
/// Update the spendable output's global index. This is intended to be called if a
/// re-organization occurred.
pub async fn refresh_global_index(&mut self, rpc: &Rpc) -> Result<(), RpcError> {
pub async fn refresh_global_index<RPC: RpcConnection>(
&mut self,
rpc: &Rpc<RPC>,
) -> Result<(), RpcError> {
self.global_index =
rpc.get_o_indexes(self.output.absolute.tx).await?[usize::from(self.output.absolute.o)];
Ok(())
}
pub async fn from(rpc: &Rpc, output: ReceivedOutput) -> Result<SpendableOutput, RpcError> {
let mut output = SpendableOutput { output, global_index: 0 };
pub async fn from<RPC: RpcConnection>(
rpc: &Rpc<RPC>,
output: ReceivedOutput,
) -> Result<Self, RpcError> {
let mut output = Self { output, global_index: 0 };
output.refresh_global_index(rpc).await?;
Ok(output)
}
@@ -218,6 +228,10 @@ impl SpendableOutput {
self.output.commitment()
}
pub fn arbitrary_data(&self) -> &[Vec<u8>] {
self.output.arbitrary_data()
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.output.write(w)?;
w.write_all(&self.global_index.to_le_bytes())
@@ -229,8 +243,8 @@ impl SpendableOutput {
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<SpendableOutput> {
Ok(SpendableOutput { output: ReceivedOutput::read(r)?, global_index: read_u64(r)? })
pub fn read<R: Read>(r: &mut R) -> io::Result<Self> {
Ok(Self { output: ReceivedOutput::read(r)?, global_index: read_u64(r)? })
}
}
@@ -250,6 +264,7 @@ impl<O: Clone + Zeroize> Timelocked<O> {
}
/// Return the outputs if they're not timelocked, or an empty vector if they are.
#[must_use]
pub fn not_locked(&self) -> Vec<O> {
if self.0 == Timelock::None {
return self.1.clone();
@@ -258,11 +273,17 @@ impl<O: Clone + Zeroize> Timelocked<O> {
}
/// Returns None if the Timelocks aren't comparable. Returns Some(vec![]) if none are unlocked.
#[must_use]
pub fn unlocked(&self, timelock: Timelock) -> Option<Vec<O>> {
// If the Timelocks are comparable, return the outputs if they're now unlocked
self.0.partial_cmp(&timelock).filter(|_| self.0 <= timelock).map(|_| self.1.clone())
if self.0 <= timelock {
Some(self.1.clone())
} else {
None
}
}
#[must_use]
pub fn ignore_timelock(&self) -> Vec<O> {
self.1.clone()
}
@@ -271,14 +292,19 @@ impl<O: Clone + Zeroize> Timelocked<O> {
impl Scanner {
/// Scan a transaction to discover the received outputs.
pub fn scan_transaction(&mut self, tx: &Transaction) -> Timelocked<ReceivedOutput> {
let extra = Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref());
let keys;
let extra = if let Ok(extra) = extra {
keys = extra.keys();
extra
} else {
// Only scan RCT TXs since we can only spend RCT outputs
if tx.prefix.version != 2 {
return Timelocked(tx.prefix.timelock, vec![]);
}
let Ok(extra) = Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref()) else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let Some((tx_key, additional)) = extra.keys() else {
return Timelocked(tx.prefix.timelock, vec![]);
};
let payment_id = extra.payment_id();
let mut res = vec![];
@@ -296,11 +322,22 @@ impl Scanner {
}
let output_key = output_key.unwrap();
for key in &keys {
for key in [Some(Some(&tx_key)), additional.as_ref().map(|additional| additional.get(o))] {
let Some(Some(key)) = key else {
if key == Some(None) {
// This is non-standard. There were additional keys, yet not one for this output
// https://github.com/monero-project/monero/
// blob/04a1e2875d6e35e27bb21497988a6c822d319c28/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L1062
// TODO: Should this return? Where does Monero set the trap handler for this exception?
continue;
} else {
break;
}
};
let (view_tag, shared_key, payment_id_xor) = shared_key(
if self.burning_bug.is_none() { Some(uniqueness(&tx.prefix.inputs)) } else { None },
&self.pair.view,
key,
self.pair.view.deref() * key,
o,
);
@@ -330,7 +367,7 @@ impl Scanner {
// We will not have a torsioned key in our HashMap of keys, so we wouldn't identify it as
// ours
// If we did though, it'd enable bypassing the included burning bug protection
debug_assert!(output_key.is_torsion_free());
assert!(output_key.is_torsion_free());
let mut key_offset = shared_key;
if let Some(subaddress) = subaddress {
@@ -340,19 +377,19 @@ impl Scanner {
let mut commitment = Commitment::zero();
// Miner transaction
if output.amount != 0 {
commitment.amount = output.amount;
if let Some(amount) = output.amount {
commitment.amount = amount;
// Regular transaction
} else {
let amount = match tx.rct_signatures.base.ecdh_info.get(o) {
Some(amount) => amount_decryption(*amount, shared_key),
let (mask, amount) = match tx.rct_signatures.base.encrypted_amounts.get(o) {
Some(amount) => amount_decryption(amount, shared_key),
// This should never happen, yet it may be possible with miner transactions?
// Using get just decreases the possibility of a panic and lets us move on in that case
None => break,
};
// Rebuild the commitment to verify it
commitment = Commitment::new(commitment_mask(shared_key), amount);
commitment = Commitment::new(mask, amount);
// If this is a malicious commitment, move to the next output
// Any other R value will calculate to a different spend key and is therefore ignorable
if Some(&commitment.calculate()) != tx.rct_signatures.base.commitments.get(o) {
@@ -387,9 +424,9 @@ impl Scanner {
/// transactions is a dead giveaway for which transactions you successfully scanned. This
/// function obtains the output indexes for the miner transaction, incrementing from there
/// instead.
pub async fn scan(
pub async fn scan<RPC: RpcConnection>(
&mut self,
rpc: &Rpc,
rpc: &Rpc<RPC>,
block: &Block,
) -> Result<Vec<Timelocked<SpendableOutput>>, RpcError> {
let mut index = rpc.get_o_indexes(block.miner_tx.hash()).await?[0];
@@ -415,7 +452,7 @@ impl Scanner {
};
let mut res = vec![];
for tx in txs.drain(..) {
for tx in txs {
if let Some(timelock) = map(self.scan_transaction(&tx), index) {
res.push(timelock);
}
@@ -423,10 +460,10 @@ impl Scanner {
tx.prefix
.outputs
.iter()
// Filter to miner TX outputs/0-amount outputs since we're tacking the 0-amount index
// This will fail to scan blocks containing pre-RingCT miner TXs
// Filter to v2 miner TX outputs/RCT outputs since we're tracking the RCT output index
.filter(|output| {
matches!(tx.prefix.inputs.get(0), Some(Input::Gen(..))) || (output.amount == 0)
((tx.prefix.version == 2) && matches!(tx.prefix.inputs.get(0), Some(Input::Gen(..)))) ||
output.amount.is_none()
})
.count(),
)

View File

@@ -0,0 +1,271 @@
use core::ops::Deref;
use std_shims::{
sync::OnceLock,
vec::Vec,
string::{String, ToString},
collections::HashMap,
};
use zeroize::{Zeroize, Zeroizing};
use rand_core::{RngCore, CryptoRng};
use crc::{Crc, CRC_32_ISO_HDLC};
use curve25519_dalek::scalar::Scalar;
use crate::{
random_scalar,
wallet::seed::{SeedError, Language},
};
pub(crate) const CLASSIC_SEED_LENGTH: usize = 24;
pub(crate) const CLASSIC_SEED_LENGTH_WITH_CHECKSUM: usize = 25;
fn trim(word: &str, len: usize) -> Zeroizing<String> {
Zeroizing::new(word.chars().take(len).collect())
}
struct WordList {
word_list: Vec<&'static str>,
word_map: HashMap<&'static str, usize>,
trimmed_word_map: HashMap<String, usize>,
unique_prefix_length: usize,
}
impl WordList {
fn new(word_list: Vec<&'static str>, prefix_length: usize) -> Self {
let mut lang = Self {
word_list,
word_map: HashMap::new(),
trimmed_word_map: HashMap::new(),
unique_prefix_length: prefix_length,
};
for (i, word) in lang.word_list.iter().enumerate() {
lang.word_map.insert(word, i);
lang.trimmed_word_map.insert(trim(word, lang.unique_prefix_length).deref().clone(), i);
}
lang
}
}
static LANGUAGES_CELL: OnceLock<HashMap<Language, WordList>> = OnceLock::new();
#[allow(non_snake_case)]
fn LANGUAGES() -> &'static HashMap<Language, WordList> {
LANGUAGES_CELL.get_or_init(|| {
HashMap::from([
(Language::Chinese, WordList::new(include!("./classic/zh.rs"), 1)),
(Language::English, WordList::new(include!("./classic/en.rs"), 3)),
(Language::Dutch, WordList::new(include!("./classic/nl.rs"), 4)),
(Language::French, WordList::new(include!("./classic/fr.rs"), 4)),
(Language::Spanish, WordList::new(include!("./classic/es.rs"), 4)),
(Language::German, WordList::new(include!("./classic/de.rs"), 4)),
(Language::Italian, WordList::new(include!("./classic/it.rs"), 4)),
(Language::Portuguese, WordList::new(include!("./classic/pt.rs"), 4)),
(Language::Japanese, WordList::new(include!("./classic/ja.rs"), 3)),
(Language::Russian, WordList::new(include!("./classic/ru.rs"), 4)),
(Language::Esperanto, WordList::new(include!("./classic/eo.rs"), 4)),
(Language::Lojban, WordList::new(include!("./classic/jbo.rs"), 4)),
(Language::EnglishOld, WordList::new(include!("./classic/ang.rs"), 4)),
])
})
}
#[cfg(test)]
pub(crate) fn trim_by_lang(word: &str, lang: Language) -> String {
if lang == Language::EnglishOld {
word.to_string()
} else {
word.chars().take(LANGUAGES()[&lang].unique_prefix_length).collect()
}
}
fn checksum_index(words: &[Zeroizing<String>], lang: &WordList) -> usize {
let mut trimmed_words = Zeroizing::new(String::new());
for w in words {
*trimmed_words += &trim(w, lang.unique_prefix_length);
}
let crc = Crc::<u32>::new(&CRC_32_ISO_HDLC);
let mut digest = crc.digest();
digest.update(trimmed_words.as_bytes());
usize::try_from(digest.finalize()).unwrap() % words.len()
}
// Convert a private key to a seed
fn key_to_seed(lang: Language, key: Zeroizing<Scalar>) -> ClassicSeed {
let bytes = Zeroizing::new(key.to_bytes());
// get the language words
let words = &LANGUAGES()[&lang].word_list;
let list_len = u64::try_from(words.len()).unwrap();
// To store the found words & add the checksum word later.
let mut seed = Vec::with_capacity(25);
// convert to words
// 4 bytes -> 3 words. 8 digits base 16 -> 3 digits base 1626
let mut segment = [0; 4];
let mut indices = [0; 4];
for i in 0 .. 8 {
// convert the first 4 bytes to a u32 & get the word indices
let start = i * 4;
// convert 4 bytes to a u32
segment.copy_from_slice(&bytes[start .. (start + 4)]);
// Actually convert to a u64 so we can add without overflowing
indices[0] = u64::from(u32::from_le_bytes(segment));
indices[1] = indices[0];
indices[0] /= list_len;
indices[2] = indices[0] + indices[1];
indices[0] /= list_len;
indices[3] = indices[0] + indices[2];
// append words to seed
for i in indices.iter().skip(1) {
let word = usize::try_from(i % list_len).unwrap();
seed.push(Zeroizing::new(words[word].to_string()));
}
}
segment.zeroize();
indices.zeroize();
// create a checksum word for all languages except old english
if lang != Language::EnglishOld {
let checksum = seed[checksum_index(&seed, &LANGUAGES()[&lang])].clone();
seed.push(checksum);
}
let mut res = Zeroizing::new(String::new());
for (i, word) in seed.iter().enumerate() {
if i != 0 {
*res += " ";
}
*res += word;
}
ClassicSeed(res)
}
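
The conversion above is base-1626 with a differencing trick: for a 4-byte value v and list length n = 1626, the words are w1 = v mod n, w2 = (v/n + v) mod n, and w3 = (v/n² + v/n + v) mod n, which `seed_to_bytes` later inverts. A self-contained round trip over raw indices (no word list needed; the helper names are ours):

const N: u64 = 1626; // classic word-list length

// Encode a u32 as three base-1626 word indices, as key_to_seed does
fn encode(val: u32) -> [u64; 3] {
  let val = u64::from(val);
  let w1 = val % N;
  let w2 = ((val / N) + val) % N;
  let w3 = ((val / (N * N)) + (val / N) + val) % N;
  [w1, w2, w3]
}

// Invert the encoding, as seed_to_bytes does
fn decode([w1, w2, w3]: [u64; 3]) -> u64 {
  w1 + (N * ((N - w1 + w2) % N)) + (N * N * ((N - w2 + w3) % N))
}

fn main() {
  for val in [0u32, 1, 1625, 1626, 0xdead_beef, u32::MAX] {
    assert_eq!(decode(encode(val)), u64::from(val));
  }
}

This works because 1626³ exceeds 2³², so every u32 fits in three digits.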
// Convert a seed to bytes
pub(crate) fn seed_to_bytes(words: &str) -> Result<(Language, Zeroizing<[u8; 32]>), SeedError> {
// get seed words
let words = words.split_whitespace().map(|w| Zeroizing::new(w.to_string())).collect::<Vec<_>>();
if (words.len() != CLASSIC_SEED_LENGTH) && (words.len() != CLASSIC_SEED_LENGTH_WITH_CHECKSUM) {
panic!("invalid seed passed to seed_to_bytes");
}
// find the language
let (matched_indices, lang_name, lang) = (|| {
let has_checksum = words.len() == CLASSIC_SEED_LENGTH_WITH_CHECKSUM;
let mut matched_indices = Zeroizing::new(vec![]);
// Iterate through all the languages
'language: for (lang_name, lang) in LANGUAGES().iter() {
matched_indices.zeroize();
matched_indices.clear();
// Iterate through all the words and see if they're all present
for word in &words {
let trimmed = trim(word, lang.unique_prefix_length);
let word = if has_checksum { &trimmed } else { word };
if let Some(index) = if has_checksum {
lang.trimmed_word_map.get(word.deref())
} else {
lang.word_map.get(&word.as_str())
} {
matched_indices.push(*index);
} else {
continue 'language;
}
}
if has_checksum {
if lang_name == &Language::EnglishOld {
Err(SeedError::EnglishOldWithChecksum)?;
}
// exclude the last word when calculating a checksum.
let last_word = words.last().unwrap().clone();
let checksum = words[checksum_index(&words[.. words.len() - 1], lang)].clone();
// check the trimmed checksum and trimmed last word line up
if trim(&checksum, lang.unique_prefix_length) != trim(&last_word, lang.unique_prefix_length)
{
Err(SeedError::InvalidChecksum)?;
}
}
return Ok((matched_indices, lang_name, lang));
}
Err(SeedError::UnknownLanguage)?
})()?;
// convert to bytes
let mut res = Zeroizing::new([0; 32]);
let mut indices = Zeroizing::new([0; 4]);
for i in 0 .. 8 {
// read 3 indices at a time
let i3 = i * 3;
indices[1] = matched_indices[i3];
indices[2] = matched_indices[i3 + 1];
indices[3] = matched_indices[i3 + 2];
let inner = |i| {
let mut base = (lang.word_list.len() - indices[i] + indices[i + 1]) % lang.word_list.len();
// Shift the index over
for _ in 0 .. i {
base *= lang.word_list.len();
}
base
};
// set the last index
indices[0] = indices[1] + inner(1) + inner(2);
if (indices[0] % lang.word_list.len()) != indices[1] {
Err(SeedError::InvalidSeed)?;
}
let pos = i * 4;
let mut bytes = u32::try_from(indices[0]).unwrap().to_le_bytes();
res[pos .. (pos + 4)].copy_from_slice(&bytes);
bytes.zeroize();
}
Ok((*lang_name, res))
}
#[derive(Clone, PartialEq, Eq, Zeroize)]
pub struct ClassicSeed(Zeroizing<String>);
impl ClassicSeed {
pub(crate) fn new<R: RngCore + CryptoRng>(rng: &mut R, lang: Language) -> Self {
key_to_seed(lang, Zeroizing::new(random_scalar(rng)))
}
pub fn from_string(words: Zeroizing<String>) -> Result<Self, SeedError> {
let (lang, entropy) = seed_to_bytes(&words)?;
// Make sure this is a valid scalar
let mut scalar = Scalar::from_canonical_bytes(*entropy);
if scalar.is_none() {
Err(SeedError::InvalidSeed)?;
}
scalar.zeroize();
// Call from_entropy so a trimmed seed becomes a full seed
Ok(Self::from_entropy(lang, entropy).unwrap())
}
pub fn from_entropy(lang: Language, entropy: Zeroizing<[u8; 32]>) -> Option<Self> {
Scalar::from_canonical_bytes(*entropy).map(|scalar| key_to_seed(lang, Zeroizing::new(scalar)))
}
pub(crate) fn to_string(&self) -> Zeroizing<String> {
self.0.clone()
}
pub(crate) fn entropy(&self) -> Zeroizing<[u8; 32]> {
seed_to_bytes(&self.0).unwrap().1
}
}

13 file diffs suppressed because they are too large

View File

@@ -0,0 +1,92 @@
use core::fmt;
use std_shims::string::String;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use rand_core::{RngCore, CryptoRng};
pub(crate) mod classic;
use classic::{CLASSIC_SEED_LENGTH, CLASSIC_SEED_LENGTH_WITH_CHECKSUM, ClassicSeed};
/// Error when decoding a seed.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum SeedError {
#[cfg_attr(feature = "std", error("invalid number of words in seed"))]
InvalidSeedLength,
#[cfg_attr(feature = "std", error("unknown language"))]
UnknownLanguage,
#[cfg_attr(feature = "std", error("invalid checksum"))]
InvalidChecksum,
#[cfg_attr(feature = "std", error("english old seeds don't support checksums"))]
EnglishOldWithChecksum,
#[cfg_attr(feature = "std", error("invalid seed"))]
InvalidSeed,
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Hash)]
pub enum Language {
Chinese,
English,
Dutch,
French,
Spanish,
German,
Italian,
Portuguese,
Japanese,
Russian,
Esperanto,
Lojban,
EnglishOld,
}
/// A Monero seed.
// TODO: Add polyseed to enum
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub enum Seed {
Classic(ClassicSeed),
}
impl fmt::Debug for Seed {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
match self {
Self::Classic(_) => f.debug_struct("Seed::Classic").finish_non_exhaustive(),
}
}
}
impl Seed {
/// Create a new seed.
pub fn new<R: RngCore + CryptoRng>(rng: &mut R, lang: Language) -> Self {
Self::Classic(ClassicSeed::new(rng, lang))
}
/// Parse a seed from a String.
pub fn from_string(words: Zeroizing<String>) -> Result<Self, SeedError> {
match words.split_whitespace().count() {
CLASSIC_SEED_LENGTH | CLASSIC_SEED_LENGTH_WITH_CHECKSUM => {
ClassicSeed::from_string(words).map(Self::Classic)
}
_ => Err(SeedError::InvalidSeedLength)?,
}
}
/// Create a Seed from entropy.
pub fn from_entropy(lang: Language, entropy: Zeroizing<[u8; 32]>) -> Option<Self> {
ClassicSeed::from_entropy(lang, entropy).map(Self::Classic)
}
/// Convert a seed to a String.
pub fn to_string(&self) -> Zeroizing<String> {
match self {
Self::Classic(seed) => seed.to_string(),
}
}
/// Return the entropy for this seed.
pub fn entropy(&self) -> Zeroizing<[u8; 32]> {
match self {
Self::Classic(seed) => seed.entropy(),
}
}
}
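
A usage sketch of the API above, assuming monero-serai's `wallet::seed` exports and an OS RNG:

use rand_core::OsRng;
use monero_serai::wallet::seed::{Seed, Language};

fn main() {
  let seed = Seed::new(&mut OsRng, Language::English);
  // to_string/from_string round-trip to the same entropy
  let restored = Seed::from_string(seed.to_string()).unwrap();
  assert_eq!(seed.entropy(), restored.entropy());
}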

View File

@@ -1,12 +1,12 @@
use std::sync::{Arc, RwLock};
use zeroize::{Zeroize, ZeroizeOnDrop};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use crate::{
Protocol,
wallet::{
address::MoneroAddress, Fee, SpendableOutput, SignableTransaction, TransactionError,
extra::MAX_TX_EXTRA_NONCE_SIZE,
address::MoneroAddress, Fee, SpendableOutput, Change, SignableTransaction, TransactionError,
extra::MAX_ARBITRARY_DATA_SIZE,
},
};
@@ -15,17 +15,30 @@ struct SignableTransactionBuilderInternal {
protocol: Protocol,
fee: Fee,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
payments: Vec<(MoneroAddress, u64)>,
change_address: Option<MoneroAddress>,
change_address: Option<Change>,
data: Vec<Vec<u8>>,
}
impl SignableTransactionBuilderInternal {
// Takes in the change address so users don't miss that they have to manually set one
// If they don't, all leftover funds will become part of the fee
fn new(protocol: Protocol, fee: Fee, change_address: Option<MoneroAddress>) -> Self {
Self { protocol, fee, inputs: vec![], payments: vec![], change_address, data: vec![] }
fn new(protocol: Protocol, fee: Fee, change_address: Option<Change>) -> Self {
Self {
protocol,
fee,
r_seed: None,
inputs: vec![],
payments: vec![],
change_address,
data: vec![],
}
}
fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) {
self.r_seed = Some(r_seed);
}
fn add_input(&mut self, input: SpendableOutput) {
@@ -68,16 +81,17 @@ impl Eq for SignableTransactionBuilder {}
impl Zeroize for SignableTransactionBuilder {
fn zeroize(&mut self) {
self.0.write().unwrap().zeroize()
self.0.write().unwrap().zeroize();
}
}
#[allow(clippy::return_self_not_must_use)]
impl SignableTransactionBuilder {
fn shallow_copy(&self) -> Self {
Self(self.0.clone())
}
pub fn new(protocol: Protocol, fee: Fee, change_address: Option<MoneroAddress>) -> Self {
pub fn new(protocol: Protocol, fee: Fee, change_address: Option<Change>) -> Self {
Self(Arc::new(RwLock::new(SignableTransactionBuilderInternal::new(
protocol,
fee,
@@ -85,6 +99,11 @@ impl SignableTransactionBuilder {
))))
}
pub fn set_r_seed(&mut self, r_seed: Zeroizing<[u8; 32]>) -> Self {
self.0.write().unwrap().set_r_seed(r_seed);
self.shallow_copy()
}
pub fn add_input(&mut self, input: SpendableOutput) -> Self {
self.0.write().unwrap().add_input(input);
self.shallow_copy()
@@ -104,7 +123,7 @@ impl SignableTransactionBuilder {
}
pub fn add_data(&mut self, data: Vec<u8>) -> Result<Self, TransactionError> {
if data.len() > MAX_TX_EXTRA_NONCE_SIZE {
if data.len() > MAX_ARBITRARY_DATA_SIZE {
Err(TransactionError::TooMuchData)?;
}
self.0.write().unwrap().add_data(data);
@@ -115,9 +134,10 @@ impl SignableTransactionBuilder {
let read = self.0.read().unwrap();
SignableTransaction::new(
read.protocol,
read.r_seed.clone(),
read.inputs.clone(),
read.payments.clone(),
read.change_address,
read.change_address.clone(),
read.data.clone(),
read.fee,
)

View File

@@ -1,19 +1,33 @@
use core::ops::Deref;
use core::{ops::Deref, fmt};
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use rand::seq::SliceRandom;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
use group::Group;
use curve25519_dalek::{
constants::{ED25519_BASEPOINT_POINT, ED25519_BASEPOINT_TABLE},
scalar::Scalar,
edwards::EdwardsPoint,
};
use dalek_ff_group as dfg;
#[cfg(feature = "multisig")]
use frost::FrostError;
use crate::{
Protocol, Commitment, random_scalar,
Protocol, Commitment, hash, random_scalar,
serialize::{
read_byte, read_bytes, read_u64, read_scalar, read_point, read_vec, write_byte, write_scalar,
write_point, write_raw_vec, write_vec,
},
ringct::{
generate_key_image,
clsag::{ClsagError, ClsagInput, Clsag},
@@ -21,20 +35,25 @@ use crate::{
RctBase, RctPrunable, RctSignatures,
},
transaction::{Input, Output, Timelock, TransactionPrefix, Transaction},
rpc::{Rpc, RpcError},
rpc::{RpcError, RpcConnection, Rpc},
wallet::{
address::MoneroAddress, SpendableOutput, Decoys, PaymentId, ExtraField, Extra, key_image_sort,
uniqueness, shared_key, commitment_mask, amount_encryption, extra::MAX_TX_EXTRA_NONCE_SIZE,
address::{Network, AddressSpec, MoneroAddress},
ViewPair, SpendableOutput, Decoys, PaymentId, ExtraField, Extra, key_image_sort, uniqueness,
shared_key, commitment_mask, amount_encryption,
extra::{ARBITRARY_DATA_MARKER, MAX_ARBITRARY_DATA_SIZE},
},
};
#[cfg(feature = "std")]
mod builder;
#[cfg(feature = "std")]
pub use builder::SignableTransactionBuilder;
#[cfg(feature = "multisig")]
mod multisig;
#[cfg(feature = "multisig")]
pub use multisig::TransactionMachine;
use crate::ringct::EncryptedAmount;
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
@@ -47,25 +66,22 @@ struct SendOutput {
}
impl SendOutput {
fn new<R: RngCore + CryptoRng>(
rng: &mut R,
#[allow(non_snake_case)]
fn internal(
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
) -> (SendOutput, Option<[u8; 8]>) {
ecdh: EdwardsPoint,
R: EdwardsPoint,
) -> (Self, Option<[u8; 8]>) {
let o = output.0;
let output = output.1;
let r = random_scalar(rng);
let (view_tag, shared_key, payment_id_xor) =
shared_key(Some(unique).filter(|_| output.0.meta.kind.guaranteed()), &r, &output.0.view, o);
shared_key(Some(unique).filter(|_| output.0.is_guaranteed()), ecdh, o);
(
SendOutput {
R: if !output.0.meta.kind.subaddress() {
&r * &ED25519_BASEPOINT_TABLE
} else {
r * output.0.spend
},
Self {
R,
view_tag,
dest: ((&shared_key * &ED25519_BASEPOINT_TABLE) + output.0.spend),
commitment: Commitment::new(commitment_mask(shared_key), output.1),
@@ -77,40 +93,69 @@ impl SendOutput {
.map(|id| (u64::from_le_bytes(id) ^ u64::from_le_bytes(payment_id_xor)).to_le_bytes()),
)
}
fn new(
r: &Zeroizing<Scalar>,
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
) -> (Self, Option<[u8; 8]>) {
let address = output.1 .0;
Self::internal(
unique,
output,
r.deref() * address.view,
if address.is_subaddress() {
r.deref() * address.spend
} else {
r.deref() * &ED25519_BASEPOINT_TABLE
},
)
}
fn change(
ecdh: EdwardsPoint,
unique: [u8; 32],
output: (usize, (MoneroAddress, u64)),
) -> (Self, Option<[u8; 8]>) {
Self::internal(unique, output, ecdh, ED25519_BASEPOINT_POINT)
}
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum TransactionError {
#[error("multiple addresses with payment IDs")]
#[cfg_attr(feature = "std", error("multiple addresses with payment IDs"))]
MultiplePaymentIds,
#[error("no inputs")]
#[cfg_attr(feature = "std", error("no inputs"))]
NoInputs,
#[error("no outputs")]
#[cfg_attr(feature = "std", error("no outputs"))]
NoOutputs,
#[error("only one output and no change address")]
#[cfg_attr(feature = "std", error("only one output and no change address"))]
NoChange,
#[error("too many outputs")]
#[cfg_attr(feature = "std", error("too many outputs"))]
TooManyOutputs,
#[error("too much data")]
#[cfg_attr(feature = "std", error("too much data"))]
TooMuchData,
#[error("not enough funds (in {0}, out {1})")]
#[cfg_attr(feature = "std", error("too many inputs/too much arbitrary data"))]
TooLargeTransaction,
#[cfg_attr(feature = "std", error("not enough funds (in {0}, out {1})"))]
NotEnoughFunds(u64, u64),
#[error("wrong spend private key")]
#[cfg_attr(feature = "std", error("wrong spend private key"))]
WrongPrivateKey,
#[error("rpc error ({0})")]
#[cfg_attr(feature = "std", error("rpc error ({0})"))]
RpcError(RpcError),
#[error("clsag error ({0})")]
#[cfg_attr(feature = "std", error("clsag error ({0})"))]
ClsagError(ClsagError),
#[error("invalid transaction ({0})")]
#[cfg_attr(feature = "std", error("invalid transaction ({0})"))]
InvalidTransaction(RpcError),
#[cfg(feature = "multisig")]
#[error("frost error {0}")]
#[cfg_attr(feature = "std", error("frost error {0}"))]
FrostError(FrostError),
}
async fn prepare_inputs<R: RngCore + CryptoRng>(
async fn prepare_inputs<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
ring_len: usize,
inputs: &[SpendableOutput],
spend: &Zeroizing<Scalar>,
@@ -123,7 +168,7 @@ async fn prepare_inputs<R: RngCore + CryptoRng>(
rng,
rpc,
ring_len,
rpc.get_height().await.map_err(TransactionError::RpcError)? - 10,
rpc.get_height().await.map_err(TransactionError::RpcError)? - 1,
inputs,
)
.await
@@ -140,7 +185,7 @@ async fn prepare_inputs<R: RngCore + CryptoRng>(
));
tx.prefix.inputs.push(Input::ToKey {
amount: 0,
amount: None,
key_offsets: decoys[i].offsets.clone(),
key_image: signable[i].1,
});
@@ -171,47 +216,111 @@ impl Fee {
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum InternalPayment {
Payment((MoneroAddress, u64)),
Change(Change, u64),
}
/// The eventual output of a SignableTransaction.
///
/// If the SignableTransaction has a Change with a view key, this will also have the view key.
/// Accordingly, it must be treated securely.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Eventuality {
protocol: Protocol,
r_seed: Zeroizing<[u8; 32]>,
inputs: Vec<EdwardsPoint>,
payments: Vec<InternalPayment>,
extra: Vec<u8>,
}
/// A signable transaction, either in a single-signer or multisig context.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct SignableTransaction {
protocol: Protocol,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
payments: Vec<(MoneroAddress, u64)>,
payments: Vec<InternalPayment>,
data: Vec<Vec<u8>>,
fee: u64,
}
/// Specification for a change output.
#[derive(Clone, PartialEq, Eq, Zeroize)]
pub struct Change {
address: MoneroAddress,
view: Option<Zeroizing<Scalar>>,
}
impl fmt::Debug for Change {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Change").field("address", &self.address).finish_non_exhaustive()
}
}
impl Change {
/// Create a change output specification from a ViewPair, as needed to maintain privacy.
pub fn new(view: &ViewPair, guaranteed: bool) -> Self {
Self {
address: view.address(
Network::Mainnet,
if !guaranteed {
AddressSpec::Standard
} else {
AddressSpec::Featured { subaddress: None, payment_id: None, guaranteed: true }
},
),
view: Some(view.view.clone()),
}
}
/// Create a fingerprintable change output specification which will harm privacy.
///
/// Only use this if you know what you're doing.
pub const fn fingerprintable(address: MoneroAddress) -> Self {
Self { address, view: None }
}
}
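
A sketch of the two specifications above: `Change::new` keeps the view key, letting the sender run the recipient-side derivation (the aR rewrite used later in `prepare_payments`), while `fingerprintable` only has an address. Module paths are assumed to be the crate's exports:

use monero_serai::wallet::{ViewPair, Change, address::MoneroAddress};

// Private change: derived from our own ViewPair
fn private_change(view: &ViewPair) -> Change {
  Change::new(view, false)
}

// Change to a bare address: functional, yet distinguishable as change on-chain
fn external_change(addr: MoneroAddress) -> Change {
  Change::fingerprintable(addr)
}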
impl SignableTransaction {
/// Create a signable transaction. If the change address is specified, leftover funds will be
/// sent to it. If the change address isn't specified, up to 16 outputs may be specified, using
/// any leftover funds as a bonus to the fee. The optional data field will be embedded in TX
/// extra.
/// Create a signable transaction.
///
/// `r_seed` refers to a seed used to derive the transaction's ephemeral keys (colloquially
/// called Rs). If None is provided, one will be automatically generated.
///
/// Up to 16 outputs may be present, including the change output. If the change address is
/// specified, leftover funds will be sent to it.
///
/// Each chunk of data must not exceed MAX_ARBITRARY_DATA_SIZE and will be embedded in TX extra.
pub fn new(
protocol: Protocol,
r_seed: Option<Zeroizing<[u8; 32]>>,
inputs: Vec<SpendableOutput>,
mut payments: Vec<(MoneroAddress, u64)>,
change_address: Option<MoneroAddress>,
payments: Vec<(MoneroAddress, u64)>,
change_address: Option<Change>,
data: Vec<Vec<u8>>,
fee_rate: Fee,
) -> Result<SignableTransaction, TransactionError> {
) -> Result<Self, TransactionError> {
// Make sure there's only one payment ID
{
let mut has_payment_id = {
let mut payment_ids = 0;
let mut count = |addr: MoneroAddress| {
if addr.payment_id().is_some() {
payment_ids += 1
payment_ids += 1;
}
};
for payment in &payments {
count(payment.0);
}
if let Some(change) = change_address {
count(change);
if let Some(change) = change_address.as_ref() {
count(change.address);
}
if payment_ids > 1 {
Err(TransactionError::MultiplePaymentIds)?;
}
}
payment_ids == 1
};
if inputs.is_empty() {
Err(TransactionError::NoInputs)?;
@@ -221,56 +330,265 @@ impl SignableTransaction {
}
for part in &data {
if part.len() > MAX_TX_EXTRA_NONCE_SIZE {
if part.len() > MAX_ARBITRARY_DATA_SIZE {
Err(TransactionError::TooMuchData)?;
}
}
// TODO TX MAX SIZE
// If we don't have two outputs, as required by Monero, add a second
let mut change = payments.len() == 1;
if change && change_address.is_none() {
// If we don't have two outputs, as required by Monero, error
if (payments.len() == 1) && change_address.is_none() {
Err(TransactionError::NoChange)?;
}
let outputs = payments.len() + usize::from(change);
let outputs = payments.len() + usize::from(change_address.is_some());
// Add a dummy payment ID if there's only 2 payments
has_payment_id |= outputs == 2;
// Calculate the extra length
let extra = Extra::fee_weight(outputs, data.as_ref());
// Assume additional keys are needed in order to cause a worst-case estimation
let extra = Extra::fee_weight(outputs, true, has_payment_id, data.as_ref());
// https://github.com/monero-project/monero/pull/8733
const MAX_EXTRA_SIZE: usize = 1060;
if extra > MAX_EXTRA_SIZE {
Err(TransactionError::TooMuchData)?;
}
// This is an extremely heavy fee weight estimation which can only be trusted for two things
// 1) Ensuring we have enough for whatever fee we end up using
// 2) Ensuring we aren't over the max size
let estimated_tx_size = Transaction::fee_weight(protocol, inputs.len(), outputs, extra);
// The actual limit is half the block size, and for the minimum block size of 300k, that'd be
// 150k
// wallet2 will only create transactions up to 100k bytes however
const MAX_TX_SIZE: usize = 100_000;
// This uses the weight (estimated_tx_size) despite the BP clawback
// The clawback *increases* the weight, so this will over-estimate, yet it's still safe
if estimated_tx_size >= MAX_TX_SIZE {
Err(TransactionError::TooLargeTransaction)?;
}
// Calculate the fee.
let mut fee =
fee_rate.calculate(Transaction::fee_weight(protocol, inputs.len(), outputs, extra));
let fee = fee_rate.calculate(estimated_tx_size);
// Make sure we have enough funds
let in_amount = inputs.iter().map(|input| input.commitment().amount).sum::<u64>();
let mut out_amount = payments.iter().map(|payment| payment.1).sum::<u64>() + fee;
let out_amount = payments.iter().map(|payment| payment.1).sum::<u64>() + fee;
if in_amount < out_amount {
Err(TransactionError::NotEnoughFunds(in_amount, out_amount))?;
}
// If we have yet to add a change output, do so if it's economically viable
if (!change) && change_address.is_some() && (in_amount != out_amount) {
// Check even with the new fee, there's remaining funds
let change_fee =
fee_rate.calculate(Transaction::fee_weight(protocol, inputs.len(), outputs + 1, extra)) -
fee;
if (out_amount + change_fee) < in_amount {
change = true;
out_amount += change_fee;
fee += change_fee;
}
}
if change {
payments.push((change_address.unwrap(), in_amount - out_amount));
}
if payments.len() > MAX_OUTPUTS {
if outputs > MAX_OUTPUTS {
Err(TransactionError::TooManyOutputs)?;
}
Ok(SignableTransaction { protocol, inputs, payments, data, fee })
let mut payments = payments.into_iter().map(InternalPayment::Payment).collect::<Vec<_>>();
if let Some(change) = change_address {
payments.push(InternalPayment::Change(change, in_amount - out_amount));
}
Ok(Self { protocol, r_seed, inputs, payments, data, fee })
}
pub fn fee(&self) -> u64 {
self.fee
}
#[allow(clippy::type_complexity)]
fn prepare_payments(
seed: &Zeroizing<[u8; 32]>,
inputs: &[EdwardsPoint],
payments: &mut Vec<InternalPayment>,
uniqueness: [u8; 32],
) -> (EdwardsPoint, Vec<Zeroizing<Scalar>>, Vec<SendOutput>, Option<[u8; 8]>) {
let mut rng = {
// Hash the inputs into the seed so we don't re-use Rs
// Doesn't re-use uniqueness as that's based on key images, which require interactivity
// to generate. The output keys do not
// This remains private so long as the seed is private
let mut r_uniqueness = vec![];
for input in inputs {
r_uniqueness.extend(input.compress().to_bytes());
}
ChaCha20Rng::from_seed(hash(
&[b"monero-serai_outputs".as_ref(), seed.as_ref(), &r_uniqueness].concat(),
))
};
// Shuffle the payments
payments.shuffle(&mut rng);
// Used for all non-subaddress outputs, or if there's only one subaddress output and a change
let tx_key = Zeroizing::new(random_scalar(&mut rng));
let mut tx_public_key = tx_key.deref() * &ED25519_BASEPOINT_TABLE;
// If any of these outputs are to a subaddress, we need keys distinct to them
// The only time this *does not* force having additional keys is when the only other output
// is a change output we have the view key for, enabling rewriting rA to aR
let mut has_change_view = false;
let subaddresses = payments
.iter()
.filter(|payment| match *payment {
InternalPayment::Payment(payment) => payment.0.is_subaddress(),
InternalPayment::Change(change, _) => {
if change.view.is_some() {
has_change_view = true;
// It should not be possible to construct a change specification to a subaddress with a
// view key
debug_assert!(!change.address.is_subaddress());
}
change.address.is_subaddress()
}
})
.count() !=
0;
// We need additional keys if we have any subaddresses
// UNLESS there's only two payments and we have the view-key for the change output
let additional = if (payments.len() == 2) && has_change_view { false } else { subaddresses };
let modified_change_ecdh = subaddresses && (!additional);
// If we're using the aR rewrite, update tx_public_key from rG to rB
if modified_change_ecdh {
for payment in &*payments {
match payment {
InternalPayment::Payment(payment) => {
// This should be the only payment and it should be a subaddress
debug_assert!(payment.0.is_subaddress());
tx_public_key = tx_key.deref() * payment.0.spend;
}
InternalPayment::Change(_, _) => {}
}
}
debug_assert!(tx_public_key != (tx_key.deref() * &ED25519_BASEPOINT_TABLE));
}
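
The rewrite above leans on scalar multiplication commuting: the change recipient computes a·R with their view key, which equals the usual sender-side r·A derivation whenever A = aG and R = rG, so whoever holds the change's view key can run the recipient-side derivation against any published R. A sketch of the underlying identity (curve25519-dalek with its rand_core feature enabled):

use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
use rand_core::OsRng;

fn main() {
  let a = Scalar::random(&mut OsRng); // recipient view key
  let r = Scalar::random(&mut OsRng); // sender transaction key
  let big_a = &a * &ED25519_BASEPOINT_TABLE; // A = aG
  let big_r = &r * &ED25519_BASEPOINT_TABLE; // R = rG
  // Sender-side (rA) and recipient-side (aR) ECDH agree
  assert_eq!(r * big_a, a * big_r);
}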
// Actually create the outputs
let mut additional_keys = vec![];
let mut outputs = Vec::with_capacity(payments.len());
let mut id = None;
for (o, mut payment) in payments.drain(..).enumerate() {
// Downcast the change output to a payment output if it doesn't require special handling
// regarding its view key
payment = if !modified_change_ecdh {
if let InternalPayment::Change(change, amount) = &payment {
InternalPayment::Payment((change.address, *amount))
} else {
payment
}
} else {
payment
};
let (output, payment_id) = match payment {
InternalPayment::Payment(payment) => {
// If this is a subaddress, generate a dedicated r. Else, reuse the TX key
let dedicated = Zeroizing::new(random_scalar(&mut rng));
let use_dedicated = additional && payment.0.is_subaddress();
let r = if use_dedicated { &dedicated } else { &tx_key };
let (mut output, payment_id) = SendOutput::new(r, uniqueness, (o, payment));
if modified_change_ecdh {
debug_assert_eq!(tx_public_key, output.R);
}
if use_dedicated {
additional_keys.push(dedicated);
} else {
// If this used tx_key, randomize its R
// This is so when extra is created, there's a distinct R for it to use
output.R = dfg::EdwardsPoint::random(&mut rng).0;
}
(output, payment_id)
}
InternalPayment::Change(change, amount) => {
// Instead of rA, use Ra, where R is r * subaddress_spend_key
// change.view must be Some as if it's None, this payment would've been downcast
let ecdh = tx_public_key * change.view.unwrap().deref();
SendOutput::change(ecdh, uniqueness, (o, (change.address, amount)))
}
};
outputs.push(output);
id = id.or(payment_id);
}
// Include a random payment ID if we don't actually have one
// It prevents transactions from leaking if they're sending to integrated addresses or not
// Only do this if we only have two outputs though, as Monero won't add a dummy if there's
// more than two outputs
if outputs.len() <= 2 {
let mut rand = [0; 8];
rng.fill_bytes(&mut rand);
id = id.or(Some(rand));
}
(tx_public_key, additional_keys, outputs, id)
}
#[allow(non_snake_case)]
fn extra(
tx_key: EdwardsPoint,
additional: bool,
Rs: Vec<EdwardsPoint>,
id: Option<[u8; 8]>,
data: &mut Vec<Vec<u8>>,
) -> Vec<u8> {
#[allow(non_snake_case)]
let Rs_len = Rs.len();
let mut extra = Extra::new(tx_key, if additional { Rs } else { vec![] });
if let Some(id) = id {
let mut id_vec = Vec::with_capacity(1 + 8);
PaymentId::Encrypted(id).write(&mut id_vec).unwrap();
extra.push(ExtraField::Nonce(id_vec));
}
// Include data if present
let extra_len = Extra::fee_weight(Rs_len, additional, id.is_some(), data.as_ref());
for part in data.drain(..) {
let mut arb = vec![ARBITRARY_DATA_MARKER];
arb.extend(part);
extra.push(ExtraField::Nonce(arb));
}
let mut serialized = Vec::with_capacity(extra_len);
extra.write(&mut serialized).unwrap();
debug_assert_eq!(extra_len, serialized.len());
serialized
}
/// Returns the eventuality of this transaction.
///
/// The eventuality is defined as the TX extra/outputs this transaction will create, if signed
/// with the specified seed. This eventuality can be compared to on-chain transactions to see
/// if the transaction has already been signed and published.
pub fn eventuality(&self) -> Option<Eventuality> {
let inputs = self.inputs.iter().map(SpendableOutput::key).collect::<Vec<_>>();
let (tx_key, additional, outputs, id) = Self::prepare_payments(
self.r_seed.as_ref()?,
&inputs,
&mut self.payments.clone(),
// Lie about the uniqueness, used when determining output keys/commitments yet not the
// ephemeral keys, which is what we want here
// While we do still grab the outputs variable, it's so we can get its Rs
[0; 32],
);
#[allow(non_snake_case)]
let Rs = outputs.iter().map(|output| output.R).collect();
drop(outputs);
let additional = !additional.is_empty();
let extra = Self::extra(tx_key, additional, Rs, id, &mut self.data.clone());
Some(Eventuality {
protocol: self.protocol,
r_seed: self.r_seed.clone()?,
inputs,
payments: self.payments.clone(),
extra,
})
}
fn prepare_transaction<R: RngCore + CryptoRng>(
@@ -278,27 +596,21 @@ impl SignableTransaction {
rng: &mut R,
uniqueness: [u8; 32],
) -> (Transaction, Scalar) {
// Shuffle the payments
self.payments.shuffle(rng);
// If no seed for the ephemeral keys was provided, make one
let r_seed = self.r_seed.clone().unwrap_or_else(|| {
let mut res = Zeroizing::new([0; 32]);
rng.fill_bytes(res.as_mut());
res
});
// Actually create the outputs
let mut outputs = Vec::with_capacity(self.payments.len());
let mut id = None;
for payment in self.payments.drain(..).enumerate() {
let (output, payment_id) = SendOutput::new(rng, uniqueness, payment);
outputs.push(output);
id = id.or(payment_id);
}
// Include a random payment ID if we don't actually have one
// It prevents transactions from leaking if they're sending to integrated addresses or not
let id = if let Some(id) = id {
id
} else {
let mut id = [0; 8];
rng.fill_bytes(&mut id);
id
};
let (tx_key, additional, outputs, id) = Self::prepare_payments(
&r_seed,
&self.inputs.iter().map(SpendableOutput::key).collect::<Vec<_>>(),
&mut self.payments,
uniqueness,
);
// This function only cares if additional keys were necessary, not what they were
let additional = !additional.is_empty();
let commitments = outputs.iter().map(|output| output.commitment.clone()).collect::<Vec<_>>();
let sum = commitments.iter().map(|commitment| commitment.mask).sum();
@@ -307,32 +619,25 @@ impl SignableTransaction {
let bp = Bulletproofs::prove(rng, &commitments, self.protocol.bp_plus()).unwrap();
// Create the TX extra
let extra = {
let mut extra = Extra::new(outputs.iter().map(|output| output.R).collect());
let mut id_vec = Vec::with_capacity(1 + 8);
PaymentId::Encrypted(id).write(&mut id_vec).unwrap();
extra.push(ExtraField::Nonce(id_vec));
// Include data if present
for part in self.data.drain(..) {
extra.push(ExtraField::Nonce(part));
}
let mut serialized = Vec::with_capacity(Extra::fee_weight(outputs.len(), self.data.as_ref()));
extra.write(&mut serialized).unwrap();
serialized
};
let extra = Self::extra(
tx_key,
additional,
outputs.iter().map(|output| output.R).collect(),
id,
&mut self.data,
);
let mut fee = self.inputs.iter().map(|input| input.commitment().amount).sum::<u64>();
let mut tx_outputs = Vec::with_capacity(outputs.len());
let mut ecdh_info = Vec::with_capacity(outputs.len());
let mut encrypted_amounts = Vec::with_capacity(outputs.len());
for output in &outputs {
fee -= output.commitment.amount;
tx_outputs.push(Output {
amount: 0,
amount: None,
key: output.dest.compress(),
view_tag: Some(output.view_tag).filter(|_| matches!(self.protocol, Protocol::v16)),
});
ecdh_info.push(output.amount);
encrypted_amounts.push(EncryptedAmount::Compact { amount: output.amount });
}
(
@@ -347,15 +652,12 @@ impl SignableTransaction {
signatures: vec![],
rct_signatures: RctSignatures {
base: RctBase {
fee: self.fee,
ecdh_info,
commitments: commitments.iter().map(|commitment| commitment.calculate()).collect(),
},
prunable: RctPrunable::Clsag {
bulletproofs: vec![bp],
clsags: vec![],
fee,
encrypted_amounts,
pseudo_outs: vec![],
commitments: commitments.iter().map(Commitment::calculate).collect(),
},
prunable: RctPrunable::Clsag { bulletproofs: bp, clsags: vec![], pseudo_outs: vec![] },
},
},
sum,
@@ -363,10 +665,10 @@ impl SignableTransaction {
}
/// Sign this transaction.
pub async fn sign<R: RngCore + CryptoRng>(
pub async fn sign<R: Send + RngCore + CryptoRng, RPC: RpcConnection>(
mut self,
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
spend: &Zeroizing<Scalar>,
) -> Result<Transaction, TransactionError> {
let mut images = Vec::with_capacity(self.inputs.len());
@@ -386,7 +688,7 @@ impl SignableTransaction {
uniqueness(
&images
.iter()
.map(|image| Input::ToKey { amount: 0, key_offsets: vec![], key_image: *image })
.map(|image| Input::ToKey { amount: None, key_offsets: vec![], key_image: *image })
.collect::<Vec<_>>(),
),
);
@@ -401,7 +703,152 @@ impl SignableTransaction {
clsags.append(&mut clsag_pairs.iter().map(|clsag| clsag.0.clone()).collect::<Vec<_>>());
pseudo_outs.append(&mut clsag_pairs.iter().map(|clsag| clsag.1).collect::<Vec<_>>());
}
RctPrunable::MlsagBorromean { .. } | RctPrunable::MlsagBulletproofs { .. } => {
unreachable!("attempted to sign a TX which wasn't CLSAG")
}
}
Ok(tx)
}
}
impl Eventuality {
/// Enables building a HashMap of Extra -> Eventuality for efficiently checking if an on-chain
/// transaction may match this eventuality.
///
/// This extra is cryptographically bound to:
/// 1) A specific set of inputs (via their output key)
/// 2) A specific seed for the ephemeral keys
///
/// This extra may be used with a transaction with a distinct set of inputs, yet no honest
/// transaction which doesn't satisfy this Eventuality will contain it.
pub fn extra(&self) -> &[u8] {
&self.extra
}
#[must_use]
pub fn matches(&self, tx: &Transaction) -> bool {
if self.payments.len() != tx.prefix.outputs.len() {
return false;
}
// Verify extra.
// Even if all the outputs were correct, a malicious extra could still cause a recipient to
// fail to receive their funds.
// This is the cheapest check available to perform as it does not require TX-specific ECC ops.
if self.extra != tx.prefix.extra {
return false;
}
// Also ensure no timelock was set.
if tx.prefix.timelock != Timelock::None {
return false;
}
// Generate the outputs. This is TX-specific due to uniqueness.
let (_, _, outputs, _) = SignableTransaction::prepare_payments(
&self.r_seed,
&self.inputs,
&mut self.payments.clone(),
uniqueness(&tx.prefix.inputs),
);
let rct_type = tx.rct_signatures.rct_type();
if rct_type != self.protocol.optimal_rct_type() {
return false;
}
// TODO: Remove this when the following for loop is updated
assert!(
rct_type.compact_encrypted_amounts(),
"created an Eventuality for a very old RctType we don't support proving for"
);
for (o, (expected, actual)) in outputs.iter().zip(tx.prefix.outputs.iter()).enumerate() {
// Verify the output, commitment, and encrypted amount.
if (&Output {
amount: None,
key: expected.dest.compress(),
view_tag: Some(expected.view_tag).filter(|_| matches!(self.protocol, Protocol::v16)),
} != actual) ||
(Some(&expected.commitment.calculate()) != tx.rct_signatures.base.commitments.get(o)) ||
(Some(&EncryptedAmount::Compact { amount: expected.amount }) !=
tx.rct_signatures.base.encrypted_amounts.get(o))
{
return false;
}
}
true
}
pub fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
self.protocol.write(w)?;
write_raw_vec(write_byte, self.r_seed.as_ref(), w)?;
write_vec(write_point, &self.inputs, w)?;
fn write_payment<W: io::Write>(payment: &InternalPayment, w: &mut W) -> io::Result<()> {
match payment {
InternalPayment::Payment(payment) => {
w.write_all(&[0])?;
write_vec(write_byte, payment.0.to_string().as_bytes(), w)?;
w.write_all(&payment.1.to_le_bytes())
}
InternalPayment::Change(change, amount) => {
w.write_all(&[1])?;
write_vec(write_byte, change.address.to_string().as_bytes(), w)?;
if let Some(view) = change.view.as_ref() {
w.write_all(&[1])?;
write_scalar(view, w)?;
} else {
w.write_all(&[0])?;
}
w.write_all(&amount.to_le_bytes())
}
}
}
write_vec(write_payment, &self.payments, w)?;
write_vec(write_byte, &self.extra, w)
}
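// For reference, the wire layout written above (inferred from this code;
// vecs are length-prefixed, amounts are little-endian u64):
//   InternalPayment::Payment: 0 || vec(address UTF-8 bytes) || amount
//   InternalPayment::Change:  1 || vec(address UTF-8 bytes) ||
//                             0 or 1 for the view key's presence
//                             [|| view key scalar] || amount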
pub fn serialize(&self) -> Vec<u8> {
let mut buf = Vec::with_capacity(128);
self.write(&mut buf).unwrap();
buf
}
pub fn read<R: io::Read>(r: &mut R) -> io::Result<Self> {
fn read_address<R: io::Read>(r: &mut R) -> io::Result<MoneroAddress> {
String::from_utf8(read_vec(read_byte, r)?)
.ok()
.and_then(|str| MoneroAddress::from_str_raw(&str).ok())
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid address"))
}
fn read_payment<R: io::Read>(r: &mut R) -> io::Result<InternalPayment> {
Ok(match read_byte(r)? {
0 => InternalPayment::Payment((read_address(r)?, read_u64(r)?)),
1 => InternalPayment::Change(
Change {
address: read_address(r)?,
view: match read_byte(r)? {
0 => None,
1 => Some(Zeroizing::new(read_scalar(r)?)),
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid change payment"))?,
},
},
read_u64(r)?,
),
_ => Err(io::Error::new(io::ErrorKind::Other, "invalid payment"))?,
})
}
Ok(Self {
protocol: Protocol::read(r)?,
r_seed: Zeroizing::new(read_bytes::<_, 32>(r)?),
inputs: read_vec(read_point, r)?,
payments: read_vec(read_payment, r)?,
extra: read_vec(read_byte, r)?,
})
}
}


@@ -1,8 +1,12 @@
use std::{
use std_shims::{
sync::Arc,
vec::Vec,
io::{self, Read},
sync::{Arc, RwLock},
collections::HashMap,
};
use std::sync::RwLock;
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
@@ -14,7 +18,7 @@ use dalek_ff_group as dfg;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Ed25519,
FrostError, ThresholdKeys,
Participant, FrostError, ThresholdKeys,
sign::{
Writable, Preprocess, CachedPreprocess, SignatureShare, PreprocessMachine, SignMachine,
SignatureMachine, AlgorithmMachine, AlgorithmSignMachine, AlgorithmSignatureMachine,
@@ -28,14 +32,17 @@ use crate::{
RctPrunable,
},
transaction::{Input, Transaction},
rpc::Rpc,
wallet::{TransactionError, SignableTransaction, Decoys, key_image_sort, uniqueness},
rpc::{RpcConnection, Rpc},
wallet::{
TransactionError, InternalPayment, SignableTransaction, Decoys, key_image_sort, uniqueness,
},
};
/// FROST signing machine to produce a signed transaction.
pub struct TransactionMachine {
signable: SignableTransaction,
i: u16,
i: Participant,
transcript: RecommendedTranscript,
decoys: Vec<Decoys>,
@@ -48,7 +55,8 @@ pub struct TransactionMachine {
pub struct TransactionSignMachine {
signable: SignableTransaction,
i: u16,
i: Participant,
transcript: RecommendedTranscript,
decoys: Vec<Decoys>,
@@ -68,9 +76,9 @@ pub struct TransactionSignatureMachine {
impl SignableTransaction {
/// Create a FROST signing machine out of this signable transaction.
/// The height is the Monero blockchain height to synchronize around.
pub async fn multisig(
pub async fn multisig<RPC: RpcConnection>(
self,
rpc: &Rpc,
rpc: &Rpc<RPC>,
keys: ThresholdKeys<Ed25519>,
mut transcript: RecommendedTranscript,
height: usize,
@@ -89,15 +97,22 @@ impl SignableTransaction {
// multiple times, already breaking privacy there
transcript.domain_separate(b"monero_transaction");
// Include the height we're using for our data
// The data itself will be included, making this unnecessary, yet a lot of this is technically
// unnecessary. Anything which further increases security at almost no cost should be followed
transcript.append_message(b"height", u64::try_from(height).unwrap().to_le_bytes());
// Also include the spend_key as below only the key offset is included, so this transcripts the
// sum product
// Useful as transcripting the sum product effectively transcripts the key image, further
// guaranteeing the one time properties noted below
transcript.append_message(b"spend_key", keys.group_key().0.compress().to_bytes());
if let Some(r_seed) = &self.r_seed {
transcript.append_message(b"r_seed", r_seed);
}
for input in &self.inputs {
// These outputs can only be spent once. Therefore, it forces all RNGs derived from this
// transcript (such as the one used to create one time keys) to be unique
@@ -107,9 +122,21 @@ impl SignableTransaction {
// to determine RNG seeds and therefore the true spends
transcript.append_message(b"input_shared_key", input.key_offset().to_bytes());
}
for payment in &self.payments {
transcript.append_message(b"payment_address", payment.0.to_string().as_bytes());
transcript.append_message(b"payment_amount", payment.1.to_le_bytes());
match payment {
InternalPayment::Payment(payment) => {
transcript.append_message(b"payment_address", payment.0.to_string().as_bytes());
transcript.append_message(b"payment_amount", payment.1.to_le_bytes());
}
InternalPayment::Change(change, amount) => {
transcript.append_message(b"change_address", change.address.to_string().as_bytes());
if let Some(view) = change.view.as_ref() {
transcript.append_message(b"change_view_key", Zeroizing::new(view.to_bytes()));
}
transcript.append_message(b"change_amount", amount.to_le_bytes());
}
}
}
let mut key_images = vec![];
@@ -123,9 +150,9 @@ impl SignableTransaction {
let clsag = ClsagMultisig::new(transcript.clone(), input.key(), inputs[i].clone());
key_images.push((
clsag.H,
keys.current_offset().unwrap_or_else(dfg::Scalar::zero).0 + self.inputs[i].key_offset(),
keys.current_offset().unwrap_or(dfg::Scalar::ZERO).0 + self.inputs[i].key_offset(),
));
clsags.push(AlgorithmMachine::new(clsag, offset).map_err(TransactionError::FrostError)?);
clsags.push(AlgorithmMachine::new(clsag, offset));
}
// Select decoys
@@ -147,6 +174,7 @@ impl SignableTransaction {
Ok(TransactionMachine {
signable: self,
i: keys.params().i(),
transcript,
@@ -193,6 +221,7 @@ impl PreprocessMachine for TransactionMachine {
(
TransactionSignMachine {
signable: self.signable,
i: self.i,
transcript: self.transcript,
@@ -236,19 +265,18 @@ impl SignMachine<Transaction> for TransactionSignMachine {
fn sign(
mut self,
mut commitments: HashMap<u16, Self::Preprocess>,
mut commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
if !msg.is_empty() {
Err(FrostError::InternalError(
"message was passed to the TransactionMachine when it generates its own",
))?;
}
assert!(
msg.is_empty(),
"message was passed to the TransactionMachine when it generates its own"
);
// Find out who's included
// This may not be a valid set of signers yet the algorithm machine will error if it's not
commitments.remove(&self.i); // Remove, if it was included for some reason
let mut included = commitments.keys().into_iter().cloned().collect::<Vec<_>>();
let mut included = commitments.keys().copied().collect::<Vec<_>>();
included.push(self.i);
included.sort_unstable();
@@ -263,7 +291,7 @@ impl SignMachine<Transaction> for TransactionSignMachine {
// While each CLSAG will do this as they need to for security, they have their own
// transcripts cloned from this TX's initial premise's transcript. For our TX
// transcript to have the CLSAG data for entropy, it'll have to be added ourselves here
self.transcript.append_message(b"participant", (*l).to_be_bytes());
self.transcript.append_message(b"participant", (*l).to_bytes());
let preprocess = if *l == self.i {
self.our_preprocess[c].clone()
@@ -299,7 +327,7 @@ impl SignMachine<Transaction> for TransactionSignMachine {
// Remove our preprocess which shouldn't be here. It was just the easiest way to implement the
// above
for map in commitments.iter_mut() {
for map in &mut commitments {
map.remove(&self.i);
}
@@ -309,11 +337,12 @@ impl SignMachine<Transaction> for TransactionSignMachine {
sorted_images.sort_by(key_image_sort);
self.signable.prepare_transaction(
// Technically, r_seed is used for the transaction keys if it's provided
&mut ChaCha20Rng::from_seed(self.transcript.rng_seed(b"transaction_keys_bulletproofs")),
uniqueness(
&sorted_images
.iter()
.map(|image| Input::ToKey { amount: 0, key_offsets: vec![], key_image: *image })
.map(|image| Input::ToKey { amount: None, key_offsets: vec![], key_image: *image })
.collect::<Vec<_>>(),
),
)
@@ -338,15 +367,16 @@ impl SignMachine<Transaction> for TransactionSignMachine {
while !sorted.is_empty() {
let value = sorted.remove(0);
let mut mask = random_scalar(&mut rng);
if sorted.is_empty() {
mask = output_masks - sum_pseudo_outs;
let mask = if sorted.is_empty() {
output_masks - sum_pseudo_outs
} else {
let mask = random_scalar(&mut rng);
sum_pseudo_outs += mask;
}
mask
};
tx.prefix.inputs.push(Input::ToKey {
amount: 0,
amount: None,
key_offsets: value.2.offsets.clone(),
key_image: value.0,
});
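// The branch above enforces the RingCT balance equation: the pseudo-out
// masks must sum to the output masks, so the last mask is solved for instead
// of sampled. A standalone sketch of that selection, assuming this crate's
// random_scalar and a Scalar exposing ZERO (as in recent curve25519-dalek;
// older releases use Scalar::zero()):
fn select_pseudo_out_masks<R: RngCore + CryptoRng>(
  count: usize,
  output_masks: Scalar,
  rng: &mut R,
) -> Vec<Scalar> {
  let mut sum = Scalar::ZERO;
  let mut masks = Vec::with_capacity(count);
  for i in 0 .. count {
    let mask = if i == (count - 1) {
      // The final mask balances the sum
      output_masks - sum
    } else {
      random_scalar(rng)
    };
    sum += mask;
    masks.push(mask);
  }
  masks
}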
@@ -389,7 +419,7 @@ impl SignatureMachine<Transaction> for TransactionSignatureMachine {
fn complete(
mut self,
shares: HashMap<u16, Self::SignatureShare>,
shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
let mut tx = self.tx;
match tx.rct_signatures.prunable {
@@ -403,6 +433,9 @@ impl SignatureMachine<Transaction> for TransactionSignatureMachine {
pseudo_outs.push(pseudo_out);
}
}
RctPrunable::MlsagBorromean { .. } | RctPrunable::MlsagBulletproofs { .. } => {
unreachable!("attempted to sign a multisig TX which wasn't CLSAG")
}
}
Ok(tx)
}


@@ -1,16 +1,18 @@
use monero_serai::{wallet::TransactionError, transaction::Transaction};
use monero_serai::{
transaction::Transaction,
wallet::{TransactionError, extra::MAX_ARBITRARY_DATA_SIZE},
};
mod runner;
test!(
add_single_data_less_than_255,
add_single_data_less_than_max,
(
|_, mut builder: Builder, addr| async move {
let arbitrary_data = vec![b'\0', 254];
let arbitrary_data = vec![b'\0'; MAX_ARBITRARY_DATA_SIZE - 1];
// make sure we can add to tx
let result = builder.add_data(arbitrary_data.clone());
assert!(result.is_ok());
builder.add_data(arbitrary_data.clone()).unwrap();
builder.add_payment(addr, 5);
(builder.build().unwrap(), (arbitrary_data,))
@@ -24,41 +26,44 @@ test!(
);
test!(
add_multiple_data_less_than_255,
add_multiple_data_less_than_max,
(
|_, mut builder: Builder, addr| async move {
let data = vec![b'\0', 254];
let mut data = vec![];
for b in 1 ..= 3 {
data.push(vec![b; MAX_ARBITRARY_DATA_SIZE - 1]);
}
// Add tx multiple times
for _ in 0 .. 5 {
let result = builder.add_data(data.clone());
assert!(result.is_ok());
// Add data multiple times
for data in &data {
builder.add_data(data.clone()).unwrap();
}
builder.add_payment(addr, 5);
(builder.build().unwrap(), data)
},
|_, tx: Transaction, mut scanner: Scanner, data: Vec<u8>| async move {
|_, tx: Transaction, mut scanner: Scanner, data: Vec<Vec<u8>>| async move {
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
assert_eq!(output.commitment().amount, 5);
assert_eq!(output.arbitrary_data(), vec![data; 5]);
assert_eq!(output.arbitrary_data(), data);
},
),
);
test!(
add_single_data_more_than_255,
add_single_data_more_than_max,
(
|_, mut builder: Builder, addr| async move {
// Make data that is bigger than 255 bytes
let mut data = vec![b'a'; 256];
// Make data that is bigger than the maximum
let mut data = vec![b'a'; MAX_ARBITRARY_DATA_SIZE + 1];
// Make sure we get an error if we try to add it to the TX
assert_eq!(builder.add_data(data.clone()), Err(TransactionError::TooMuchData));
// Reduce data size and retry. The data will now be 255 bytes long, exactly
// Reduce data size and retry. The data will now be 255 bytes long (including the added
// marker), exactly
data.pop();
assert!(builder.add_data(data.clone()).is_ok());
builder.add_data(data.clone()).unwrap();
builder.add_payment(addr, 5);
(builder.build().unwrap(), data)


@@ -0,0 +1,79 @@
use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;
use monero_serai::{
transaction::Transaction,
wallet::{
Eventuality,
address::{AddressType, AddressMeta, MoneroAddress},
},
};
mod runner;
test!(
eventuality,
(
|_, mut builder: Builder, _| async move {
// Add a standard address, a payment ID address, a subaddress, and a guaranteed address
// Each has its own slight implications for eventualities
builder.add_payment(
MoneroAddress::new(
AddressMeta::new(Network::Mainnet, AddressType::Standard),
ED25519_BASEPOINT_POINT,
ED25519_BASEPOINT_POINT,
),
1,
);
builder.add_payment(
MoneroAddress::new(
AddressMeta::new(Network::Mainnet, AddressType::Integrated([0xaa; 8])),
ED25519_BASEPOINT_POINT,
ED25519_BASEPOINT_POINT,
),
2,
);
builder.add_payment(
MoneroAddress::new(
AddressMeta::new(Network::Mainnet, AddressType::Subaddress),
ED25519_BASEPOINT_POINT,
ED25519_BASEPOINT_POINT,
),
3,
);
builder.add_payment(
MoneroAddress::new(
AddressMeta::new(
Network::Mainnet,
AddressType::Featured { subaddress: false, payment_id: None, guaranteed: true },
),
ED25519_BASEPOINT_POINT,
ED25519_BASEPOINT_POINT,
),
4,
);
builder.set_r_seed(Zeroizing::new([0xbb; 32]));
let tx = builder.build().unwrap();
let eventuality = tx.eventuality().unwrap();
assert_eq!(
eventuality,
Eventuality::read::<&[u8]>(&mut eventuality.serialize().as_ref()).unwrap()
);
(tx, eventuality)
},
|_, mut tx: Transaction, _, eventuality: Eventuality| async move {
// 4 explicitly outputs added and one change output
assert_eq!(tx.prefix.outputs.len(), 5);
// The eventuality's available extra should be the actual TX's
assert_eq!(tx.prefix.extra, eventuality.extra());
// The TX should match
assert!(eventuality.matches(&tx));
// Mutate the TX
tx.rct_signatures.base.commitments[0] += ED25519_BASEPOINT_POINT;
// Verify it no longer matches
assert!(!eventuality.matches(&tx));
},
),
);


@@ -1,7 +1,5 @@
use core::ops::Deref;
use std::collections::HashSet;
use lazy_static::lazy_static;
use std_shims::{sync::OnceLock, collections::HashSet};
use zeroize::Zeroizing;
use rand_core::OsRng;
@@ -11,13 +9,13 @@ use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
use tokio::sync::Mutex;
use monero_serai::{
Protocol, random_scalar,
random_scalar,
rpc::{HttpRpc, Rpc},
wallet::{
ViewPair, Scanner,
address::{Network, AddressType, AddressSpec, AddressMeta, MoneroAddress},
SpendableOutput,
},
rpc::Rpc,
};
pub fn random_address() -> (Scalar, ViewPair, MoneroAddress) {
@@ -38,7 +36,7 @@ pub fn random_address() -> (Scalar, ViewPair, MoneroAddress) {
// TODO: Support transactions already on-chain
// TODO: Don't have the side effect of mining more blocks than needed under race conditions
// TODO: mine as much as needed instead of default 10 blocks
pub async fn mine_until_unlocked(rpc: &Rpc, addr: &str, tx_hash: [u8; 32]) {
pub async fn mine_until_unlocked(rpc: &Rpc<HttpRpc>, addr: &str, tx_hash: [u8; 32]) {
// mine until tx is in a block
let mut height = rpc.get_height().await.unwrap();
let mut found = false;
@@ -59,7 +57,8 @@ pub async fn mine_until_unlocked(rpc: &Rpc, addr: &str, tx_hash: [u8; 32]) {
}
// Mines 60 blocks and returns an unlocked miner TX output.
pub async fn get_miner_tx_output(rpc: &Rpc, view: &ViewPair) -> SpendableOutput {
#[allow(dead_code)]
pub async fn get_miner_tx_output(rpc: &Rpc<HttpRpc>, view: &ViewPair) -> SpendableOutput {
let mut scanner = Scanner::from_view(view.clone(), Some(HashSet::new()));
// Mine 60 blocks to unlock a miner TX
@@ -73,8 +72,8 @@ pub async fn get_miner_tx_output(rpc: &Rpc, view: &ViewPair) -> SpendableOutput
scanner.scan(rpc, &block).await.unwrap().swap_remove(0).ignore_timelock().swap_remove(0)
}
pub async fn rpc() -> Rpc {
let rpc = Rpc::new("http://127.0.0.1:18081".to_string()).unwrap();
pub async fn rpc() -> Rpc<HttpRpc> {
let rpc = HttpRpc::new("http://127.0.0.1:18081".to_string()).unwrap();
// Only run once
if rpc.get_height().await.unwrap() != 1 {
@@ -90,22 +89,23 @@ pub async fn rpc() -> Rpc {
// Mine 40 blocks to ensure decoy availability
rpc.generate_blocks(&addr, 40).await.unwrap();
assert!(!matches!(rpc.get_protocol().await.unwrap(), Protocol::Unsupported(_)));
// Make sure we recognize the protocol
rpc.get_protocol().await.unwrap();
rpc
}
lazy_static! {
pub static ref SEQUENTIAL: Mutex<()> = Mutex::new(());
}
pub static SEQUENTIAL: OnceLock<Mutex<()>> = OnceLock::new();
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[allow(clippy::tests_outside_test_module)]
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL.lock().await;
let guard = runner::SEQUENTIAL.get_or_init(|| tokio::sync::Mutex::new(())).lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
@@ -148,13 +148,14 @@ macro_rules! test {
#[cfg(feature = "multisig")]
use frost::{
curve::Ed25519,
Participant,
tests::{THRESHOLD, key_gen},
};
use monero_serai::{
random_scalar,
wallet::{
address::{Network, AddressSpec}, ViewPair, Scanner, SignableTransaction,
address::{Network, AddressSpec}, ViewPair, Scanner, Change, SignableTransaction,
SignableTransactionBuilder,
},
};
@@ -182,7 +183,7 @@ macro_rules! test {
#[cfg(not(feature = "multisig"))]
panic!("Multisig branch called without the multisig feature");
#[cfg(feature = "multisig")]
keys[&1].group_key().0
keys[&Participant::new(1).unwrap()].group_key().0
};
let rpc = rpc().await;
@@ -195,7 +196,13 @@ macro_rules! test {
let builder = SignableTransactionBuilder::new(
rpc.get_protocol().await.unwrap(),
rpc.get_fee().await.unwrap(),
Some(random_address().2),
Some(Change::new(
&ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng))
),
false
)),
);
let sign = |tx: SignableTransaction| {
@@ -212,7 +219,7 @@ macro_rules! test {
#[cfg(feature = "multisig")]
{
let mut machines = HashMap::new();
for i in 1 ..= THRESHOLD {
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx


@@ -1,6 +1,7 @@
use monero_serai::{
wallet::{ReceivedOutput, SpendableOutput},
transaction::Transaction,
wallet::{extra::Extra, address::SubaddressIndex, ReceivedOutput, SpendableOutput},
rpc::Rpc,
};
mod runner;
@@ -36,8 +37,8 @@ test!(
},
),
(
|rpc, mut builder: Builder, addr, mut outputs: Vec<ReceivedOutput>| async move {
for output in outputs.drain(..) {
|rpc, mut builder: Builder, addr, outputs: Vec<ReceivedOutput>| async move {
for output in outputs {
builder.add_input(SpendableOutput::from(&rpc, output).await.unwrap());
}
builder.add_payment(addr, 6);
@@ -49,3 +50,69 @@ test!(
},
),
);
test!(
// Ideally, this would be single_R, yet it isn't feasible to apply allow(non_snake_case) here
single_r_subaddress_send,
(
// Consume this builder for an output we can use in the future
// This is needed because we can't get the input from the passed in builder
|_, mut builder: Builder, addr| async move {
builder.add_payment(addr, 1000000000000);
(builder.build().unwrap(), ())
},
|_, tx: Transaction, mut scanner: Scanner, _| async move {
let mut outputs = scanner.scan_transaction(&tx).not_locked();
outputs.sort_by(|x, y| x.commitment().amount.cmp(&y.commitment().amount));
assert_eq!(outputs[0].commitment().amount, 1000000000000);
outputs
},
),
(
|rpc: Rpc<_>, _, _, mut outputs: Vec<ReceivedOutput>| async move {
let change_view = ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
let mut builder = SignableTransactionBuilder::new(
rpc.get_protocol().await.unwrap(),
rpc.get_fee().await.unwrap(),
Some(Change::new(&change_view, false)),
);
builder.add_input(SpendableOutput::from(&rpc, outputs.swap_remove(0)).await.unwrap());
// Send to a subaddress
let sub_view = ViewPair::new(
&random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE,
Zeroizing::new(random_scalar(&mut OsRng)),
);
builder.add_payment(
sub_view
.address(Network::Mainnet, AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap())),
1,
);
(builder.build().unwrap(), (change_view, sub_view))
},
|_, tx: Transaction, _, views: (ViewPair, ViewPair)| async move {
// Make sure the change can pick up its output
let mut change_scanner = Scanner::from_view(views.0, Some(HashSet::new()));
assert!(change_scanner.scan_transaction(&tx).not_locked().len() == 1);
// Make sure the subaddress can pick up its output
let mut sub_scanner = Scanner::from_view(views.1, Some(HashSet::new()));
sub_scanner.register_subaddress(SubaddressIndex::new(0, 1).unwrap());
let sub_outputs = sub_scanner.scan_transaction(&tx).not_locked();
assert!(sub_outputs.len() == 1);
assert_eq!(sub_outputs[0].commitment().amount, 1);
// Make sure only one R was included in TX extra
assert!(Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref())
.unwrap()
.keys()
.unwrap()
.1
.is_none());
},
),
);


@@ -1,91 +0,0 @@
use std::{
collections::{HashSet, HashMap},
str::FromStr,
};
use rand_core::{RngCore, OsRng};
use monero_rpc::{
monero::{Amount, Address},
TransferOptions,
};
use monero_serai::{
wallet::address::{Network, AddressSpec, SubaddressIndex},
wallet::Scanner,
};
mod runner;
async fn test_from_wallet_rpc_to_self(spec: AddressSpec) {
let wallet_rpc =
monero_rpc::RpcClientBuilder::new().build("http://127.0.0.1:6061").unwrap().wallet();
let daemon_rpc = runner::rpc().await;
// initialize wallet rpc
let address_resp = wallet_rpc.get_address(0, None).await;
let wallet_rpc_addr = if address_resp.is_ok() {
address_resp.unwrap().address
} else {
wallet_rpc.create_wallet("test_wallet".to_string(), None, "English".to_string()).await.unwrap();
let addr = wallet_rpc.get_address(0, None).await.unwrap().address;
daemon_rpc.generate_blocks(&addr.to_string(), 70).await.unwrap();
addr
};
// make an addr
let (_, view_pair, _) = runner::random_address();
let addr = Address::from_str(&view_pair.address(Network::Mainnet, spec).to_string()[..]).unwrap();
// refresh & make a tx
wallet_rpc.refresh(None).await.unwrap();
let tx = wallet_rpc
.transfer(
HashMap::from([(addr, Amount::ONE_XMR)]),
monero_rpc::TransferPriority::Default,
TransferOptions::default(),
)
.await
.unwrap();
let tx_hash: [u8; 32] = tx.tx_hash.0.try_into().unwrap();
// unlock it
runner::mine_until_unlocked(&daemon_rpc, &wallet_rpc_addr.to_string(), tx_hash).await;
// create the scanner
let mut scanner = Scanner::from_view(view_pair, Some(HashSet::new()));
if let AddressSpec::Subaddress(index) = spec {
scanner.register_subaddress(index);
}
// retrieve it and confirm
let tx = daemon_rpc.get_transaction(tx_hash).await.unwrap();
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
match spec {
AddressSpec::Subaddress(index) => assert_eq!(output.metadata.subaddress, Some(index)),
AddressSpec::Integrated(payment_id) => {
assert_eq!(output.metadata.payment_id, payment_id);
assert_eq!(output.metadata.subaddress, None);
}
_ => assert_eq!(output.metadata.subaddress, None),
}
assert_eq!(output.commitment().amount, 1000000000000);
}
async_sequential!(
async fn test_receipt_of_wallet_rpc_tx_standard() {
test_from_wallet_rpc_to_self(AddressSpec::Standard).await;
}
async fn test_receipt_of_wallet_rpc_tx_subaddress() {
test_from_wallet_rpc_to_self(AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap()))
.await;
}
async fn test_receipt_of_wallet_rpc_tx_integrated() {
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
test_from_wallet_rpc_to_self(AddressSpec::Integrated(payment_id)).await;
}
);


@@ -0,0 +1,247 @@
use std::{
collections::{HashSet, HashMap},
str::FromStr,
};
use rand_core::{OsRng, RngCore};
use serde::Deserialize;
use serde_json::json;
use monero_rpc::{
monero::{
Amount, Address,
cryptonote::{hash::Hash, subaddress::Index},
util::address::PaymentId,
},
TransferOptions, WalletClient,
};
use monero_serai::{
transaction::Transaction,
rpc::{HttpRpc, Rpc},
wallet::{
address::{Network, AddressSpec, SubaddressIndex, MoneroAddress},
extra::{MAX_TX_EXTRA_NONCE_SIZE, Extra},
Scanner,
},
};
mod runner;
async fn make_integrated_address(payment_id: [u8; 8]) -> String {
#[derive(Deserialize, Debug)]
struct IntegratedAddressResponse {
integrated_address: String,
}
let rpc = HttpRpc::new("http://127.0.0.1:6061".to_string()).unwrap();
let res = rpc
.json_rpc_call::<IntegratedAddressResponse>(
"make_integrated_address",
Some(json!({ "payment_id": hex::encode(payment_id) })),
)
.await
.unwrap();
res.integrated_address
}
async fn initialize_rpcs() -> (WalletClient, Rpc<HttpRpc>, monero_rpc::monero::Address) {
let wallet_rpc =
monero_rpc::RpcClientBuilder::new().build("http://127.0.0.1:6061").unwrap().wallet();
let daemon_rpc = runner::rpc().await;
let address_resp = wallet_rpc.get_address(0, None).await;
let wallet_rpc_addr = if address_resp.is_ok() {
address_resp.unwrap().address
} else {
wallet_rpc.create_wallet("wallet".to_string(), None, "English".to_string()).await.unwrap();
let addr = wallet_rpc.get_address(0, None).await.unwrap().address;
daemon_rpc.generate_blocks(&addr.to_string(), 70).await.unwrap();
addr
};
(wallet_rpc, daemon_rpc, wallet_rpc_addr)
}
async fn from_wallet_rpc_to_self(spec: AddressSpec) {
// initialize rpc
let (wallet_rpc, daemon_rpc, wallet_rpc_addr) = initialize_rpcs().await;
// make an addr
let (_, view_pair, _) = runner::random_address();
let addr = Address::from_str(&view_pair.address(Network::Mainnet, spec).to_string()).unwrap();
// refresh & make a tx
wallet_rpc.refresh(None).await.unwrap();
let tx = wallet_rpc
.transfer(
HashMap::from([(addr, Amount::ONE_XMR)]),
monero_rpc::TransferPriority::Default,
TransferOptions::default(),
)
.await
.unwrap();
let tx_hash: [u8; 32] = tx.tx_hash.0.try_into().unwrap();
// unlock it
runner::mine_until_unlocked(&daemon_rpc, &wallet_rpc_addr.to_string(), tx_hash).await;
// create the scanner
let mut scanner = Scanner::from_view(view_pair, Some(HashSet::new()));
if let AddressSpec::Subaddress(index) = spec {
scanner.register_subaddress(index);
}
// retrieve it and confirm
let tx = daemon_rpc.get_transaction(tx_hash).await.unwrap();
let output = scanner.scan_transaction(&tx).not_locked().swap_remove(0);
match spec {
AddressSpec::Subaddress(index) => assert_eq!(output.metadata.subaddress, Some(index)),
AddressSpec::Integrated(payment_id) => {
assert_eq!(output.metadata.payment_id, payment_id);
assert_eq!(output.metadata.subaddress, None);
}
AddressSpec::Standard | AddressSpec::Featured { .. } => {
assert_eq!(output.metadata.subaddress, None)
}
}
assert_eq!(output.commitment().amount, 1000000000000);
}
async_sequential!(
async fn receipt_of_wallet_rpc_tx_standard() {
from_wallet_rpc_to_self(AddressSpec::Standard).await;
}
async fn receipt_of_wallet_rpc_tx_subaddress() {
from_wallet_rpc_to_self(AddressSpec::Subaddress(SubaddressIndex::new(0, 1).unwrap())).await;
}
async fn receipt_of_wallet_rpc_tx_integrated() {
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
from_wallet_rpc_to_self(AddressSpec::Integrated(payment_id)).await;
}
);
test!(
send_to_wallet_rpc_standard,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, wallet_rpc_addr) = initialize_rpcs().await;
// add destination
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &wallet_rpc_addr.to_string()).unwrap(),
1000000,
);
(builder.build().unwrap(), (wallet_rpc,))
},
|_, tx: Transaction, _, data: (WalletClient,)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
},
),
);
test!(
send_to_wallet_rpc_subaddress,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, _) = initialize_rpcs().await;
// make the addr
let (subaddress, index) = wallet_rpc.create_address(0, None).await.unwrap();
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &subaddress.to_string()).unwrap(),
1000000,
);
(builder.build().unwrap(), (wallet_rpc, index))
},
|_, tx: Transaction, _, data: (WalletClient, u32)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: data.1 });
// Make sure only one R was included in TX extra
assert!(Extra::read::<&[u8]>(&mut tx.prefix.extra.as_ref())
.unwrap()
.keys()
.unwrap()
.1
.is_none());
},
),
);
test!(
send_to_wallet_rpc_integrated,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, _) = initialize_rpcs().await;
// make the addr
let mut payment_id = [0u8; 8];
OsRng.fill_bytes(&mut payment_id);
let addr = make_integrated_address(payment_id).await;
builder.add_payment(MoneroAddress::from_str(Network::Mainnet, &addr).unwrap(), 1000000);
(builder.build().unwrap(), (wallet_rpc, payment_id))
},
|_, tx: Transaction, _, data: (WalletClient, [u8; 8])| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
assert_eq!(transfer.payment_id.0, PaymentId::from_slice(&data.1));
},
),
);
test!(
send_to_wallet_rpc_with_arb_data,
(
|_, mut builder: Builder, _| async move {
// initialize rpc
let (wallet_rpc, _, wallet_rpc_addr) = initialize_rpcs().await;
// add destination
builder.add_payment(
MoneroAddress::from_str(Network::Mainnet, &wallet_rpc_addr.to_string()).unwrap(),
1000000,
);
// Make 2 pieces of data that are the full 255 bytes
for _ in 0 .. 2 {
// Subtract 1 since we prefix data with 127
let data = vec![b'a'; MAX_TX_EXTRA_NONCE_SIZE - 1];
builder.add_data(data).unwrap();
}
(builder.build().unwrap(), (wallet_rpc,))
},
|_, tx: Transaction, _, data: (WalletClient,)| async move {
// confirm receipt
data.0.refresh(None).await.unwrap();
let transfer =
data.0.get_transfer(Hash::from_slice(&tx.hash()), None).await.unwrap().unwrap();
assert_eq!(transfer.amount.as_pico(), 1000000);
assert_eq!(transfer.subaddr_index, Index { major: 0, minor: 0 });
},
),
);

common/db/Cargo.toml

@@ -0,0 +1,13 @@
[package]
name = "serai-db"
version = "0.1.0"
description = "A simple database trait and backends for it"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/db"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = []
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

common/db/LICENSE

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

common/db/src/lib.rs

@@ -0,0 +1,102 @@
use core::fmt::Debug;
extern crate alloc;
use alloc::sync::Arc;
use std::{
sync::RwLock,
collections::{HashSet, HashMap},
};
/// An object implementing get.
pub trait Get: Send + Sync + Debug {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>>;
}
/// An atomic database operation.
#[must_use]
pub trait DbTxn: Send + Sync + Debug + Get {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>);
fn del(&mut self, key: impl AsRef<[u8]>);
fn commit(self);
}
/// A database supporting atomic operations.
pub trait Db: 'static + Send + Sync + Clone + Debug + Get {
type Transaction<'a>: DbTxn;
fn key(db_dst: &'static [u8], item_dst: &'static [u8], key: impl AsRef<[u8]>) -> Vec<u8> {
let db_len = u8::try_from(db_dst.len()).unwrap();
let dst_len = u8::try_from(item_dst.len()).unwrap();
[[db_len].as_ref(), db_dst, [dst_len].as_ref(), item_dst, key.as_ref()].concat()
}
fn txn(&mut self) -> Self::Transaction<'_>;
}
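// Worked example of the key layout above, with hypothetical domains:
//   MemDb::key(b"db", b"item", [0xff])
//     == [2, b'd', b'b', 4, b'i', b't', b'e', b'm', 0xff]
// Each domain is prefixed with its u8 length, then the raw key follows,
// preventing collisions between (db_dst, item_dst) namespaces.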
/// An atomic operation for the in-memory database.
#[must_use]
#[derive(PartialEq, Eq, Debug)]
pub struct MemDbTxn<'a>(&'a MemDb, HashMap<Vec<u8>, Vec<u8>>, HashSet<Vec<u8>>);
impl<'a> Get for MemDbTxn<'a> {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
if self.2.contains(key.as_ref()) {
return None;
}
self.1.get(key.as_ref()).cloned().or_else(|| self.0 .0.read().unwrap().get(key.as_ref()).cloned())
}
}
impl<'a> DbTxn for MemDbTxn<'a> {
fn put(&mut self, key: impl AsRef<[u8]>, value: impl AsRef<[u8]>) {
self.2.remove(key.as_ref());
self.1.insert(key.as_ref().to_vec(), value.as_ref().to_vec());
}
fn del(&mut self, key: impl AsRef<[u8]>) {
self.1.remove(key.as_ref());
self.2.insert(key.as_ref().to_vec());
}
fn commit(mut self) {
let mut db = self.0 .0.write().unwrap();
for (key, value) in self.1.drain() {
db.insert(key, value);
}
for key in self.2 {
db.remove(&key);
}
}
}
/// An in-memory database.
#[derive(Clone, Debug)]
pub struct MemDb(Arc<RwLock<HashMap<Vec<u8>, Vec<u8>>>>);
impl PartialEq for MemDb {
fn eq(&self, other: &Self) -> bool {
*self.0.read().unwrap() == *other.0.read().unwrap()
}
}
impl Eq for MemDb {}
impl Default for MemDb {
fn default() -> Self {
Self(Arc::new(RwLock::new(HashMap::new())))
}
}
impl MemDb {
/// Create a new in-memory database.
pub fn new() -> Self {
Self::default()
}
}
impl Get for MemDb {
fn get(&self, key: impl AsRef<[u8]>) -> Option<Vec<u8>> {
self.0.read().unwrap().get(key.as_ref()).cloned()
}
}
impl Db for MemDb {
type Transaction<'a> = MemDbTxn<'a>;
fn txn(&mut self) -> MemDbTxn<'_> {
MemDbTxn(self, HashMap::new(), HashSet::new())
}
}
// TODO: Also bind RocksDB
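// A short usage sketch of the types above: writes stage in the transaction's
// overlay and only reach the shared map on commit.
fn memdb_txn_demo() {
  let mut db = MemDb::new();
  let mut txn = db.txn();
  txn.put(b"key", b"value");
  // The transaction reads through its own overlay...
  assert_eq!(txn.get(b"key"), Some(b"value".to_vec()));
  txn.commit();
  // ...while the database itself only sees the write after commit
  assert_eq!(db.get(b"key"), Some(b"value".to_vec()));
}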


@@ -0,0 +1,21 @@
[package]
name = "std-shims"
version = "0.1.0"
description = "A series of std shims to make alloc more feasible"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/common/std-shims"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
keywords = ["nostd", "no_std", "alloc", "io"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
spin = "0.9"
hashbrown = "0.14"
[features]
std = []
default = ["std"]
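// A sketch of how downstream crates consume the shim, as the monero-serai
// diffs above do (assuming the std feature re-exports std's items, with
// alloc/spin/hashbrown-backed fallbacks otherwise):
//
// use std_shims::{
//   vec::Vec,
//   io::{self, Read},
//   sync::{Arc, OnceLock},
//   collections::HashMap,
// };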

common/std-shims/LICENSE

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

Some files were not shown because too many files have changed in this diff.