906 Commits

Author SHA1 Message Date
Luke Parker
aa666afc08 Don't clear cache within a batch build
A caller *can* call batch from a threaded environment and still trigger this at
this time. I'm unsure that use case exists/matters.

If GITHUB_CI is set, build in two batches to try and avoid storage limits.
2023-11-27 03:25:25 -05:00
Luke Parker
9da1d714b3 Fixes to name handling 2023-11-27 02:00:16 -05:00
Luke Parker
292263b21e Simultaneously build Docker images used in tests 2023-11-27 01:27:04 -05:00
Luke Parker
571195bfda Resolve #360 (#456)
* Remove NetworkId from processor-messages

Because intent binds to the sender/receiver, it's not needed for intent.

The processor knows what the network is.

The coordinator knows which to use because it's sending this message to the
processor for that network.

Also removes the unused zeroize.

* ProcessorMessage::Completed use Session instead of key

* Move SubstrateSignId to Session

* Finish replacing key with session
2023-11-26 12:14:23 -05:00
Luke Parker
b79cf8abde Move message-queue to a fully binary representation (#454)
* Move message-queue to a fully binary representation

Additionally adds a timeout to the message queue test.

* coordinator clippy

* Remove contention for the message-queue socket by using per-request sockets

* clippy
2023-11-26 11:22:18 -05:00
Luke Parker
c6c74684c9 Increase timeouts in processor tests
For some reason, these constantly failed for me while waiting for the key pair
to confirm. This adds a sleep during the mining process, to ensure blocks
actually have time between them, and mines several more blocks to handle the
median code recently added.
2023-11-25 04:09:07 -05:00
Luke Parker
de14687a0d Fix the processor's Monero time monotonicity
Monero doesn't assert the time increases with each block, solely that it
doesn't decrease. Now, the block number is added to the time to ensure it
increases.
2023-11-25 04:07:31 -05:00
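A minimal sketch of the fix described above, with hypothetical names: since Monero only guarantees block time is non-decreasing, adding the strictly increasing block number yields a strictly increasing value.

```rust
// Hypothetical helper, not the processor's exact code: a non-decreasing
// block time plus a strictly increasing block number is strictly increasing.
fn monotonic_time(block_time: u64, block_number: u64) -> u64 {
  block_time + block_number
}
```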
Luke Parker
d60e007126 Add a binaries feature to the processor to reduce dependencies when used as a lib
processor isn't intended to be used as a library, yet serai-processor-tests
does pull it in as a lib. This caused serai-processor-tests to need to compile
rocksdb, which added multiple minutes to the compilation time.
2023-11-25 04:04:52 -05:00
Luke Parker
b296be8515 Replace bincode with borsh (#452)
* Add SignalsConfig to chain_spec

* Correct multiexp feature flagging for rand_core std

* Remove bincode for borsh

Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.

Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.

This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.

* Make serde optional, minimize usage

* Make borsh an optional dependency of substrate/ crates

* Remove unused dependencies

* Use [u8; 64] where possible in the processor messages

* Correct borsh feature flagging
2023-11-25 04:01:11 -05:00
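A minimal sketch of why the swap matters for transcripts, using a hypothetical message type: borsh specifies one canonical byte representation per value, so the encoded bytes are safe to bind into a transcript.

```rust
use borsh::{BorshSerialize, BorshDeserialize};

// Hypothetical message type; the derives provide the canonical encoding.
#[derive(BorshSerialize, BorshDeserialize, PartialEq, Debug)]
struct Completed {
  session: u32,
  id: [u8; 32],
}

fn main() {
  let msg = Completed { session: 0, id: [0; 32] };
  // One canonical encoding per value, unlike bincode, whose encoding
  // couldn't be trusted in transcripts.
  let bytes = borsh::to_vec(&msg).unwrap();
  assert_eq!(Completed::try_from_slice(&bytes).unwrap(), msg);
}
```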
Luke Parker
6b2876351e Add file meant for prior commit 2023-11-24 21:41:59 -05:00
Luke Parker
0ea90d054d Enable the signals pallet to halt networks' publication of `Batch`s
Relevant to #394.

Prevents hand-over, since hand-over occurs via a `Batch` publication.

Expects a new protocol to restore functionality (after a retirement of the
current protocol).
2023-11-24 19:56:57 -05:00
Luke Parker
eb1d00aa55 Update TX format in coordinator test 2023-11-23 11:45:00 -05:00
Luke Parker
372149c2cc Various simplifications re: Serai transactions
Removes PairSigner for the pair directly.

Resets spec_version to 1.

Defines the extrinsic without its length prefix, only prefixing during publish.
2023-11-23 00:02:01 -05:00
Luke Parker
c6cf33e370 Install wasm toolchain before ADDing files to docker images
Enables caching the wasm toolchain.
2023-11-22 18:07:58 -05:00
Luke Parker
f58478ad87 Add hex as a dependency to serai-client 2023-11-22 18:06:10 -05:00
Luke Parker
88a1726399 Fixes for prior commit 2023-11-22 16:24:50 -05:00
Luke Parker
08e6669403 Replace substrate/client's use of Payload with usage of RuntimeCall
Gains explicit typing.
2023-11-22 11:23:04 -05:00
akildemir
fcfdadc791 Integrate session pallet into validator-sets pallet (#440)
* remove pallet-session

* Store key shares in InSet

* integrate grandpa to vs-pallet

* integrate pallet babe

* remove pallet-session & authority discovery from runtime

* update the grandpa pallet path

* cargo update grandpa

* cargo update substrate

* Misc tweaks

Sets validators for BABE/GRANDPA in chain_spec, per Akil's realization that was
the missing piece.

* fix pr comments

* bug fix & tidy up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-22 06:22:46 -05:00
Luke Parker
07c657306b Remove duplicated genesis presence in Tributary Scanner DB keys
This wasted 32-bytes per every single entry in the DB (ignoring de-duplication
possible by the DB layer).
2023-11-22 04:43:49 -05:00
David Bell
9df8c9476e Convert tributary to use create_db macro (#448)
* chore: convert tributary to use create_db macro

* chore: fix fmt

* chore: break long line
2023-11-22 04:17:51 -05:00
econsta
9ab2a2cfe0 processor/db.rs macro implementation (#437)
* processor/db.rs macro implementation

* ran clippy and fmt

* incorporated recommendations

* used empty tuple instead of [u8; 0]

* ran fmt
2023-11-22 03:58:26 -05:00
Luke Parker
a315040aee if-watch 3.2.0 2023-11-22 01:24:17 -05:00
Luke Parker
1822e31142 Grow the yamux buffers to exceed the maximum message size 2023-11-21 02:01:41 -05:00
Luke Parker
6efc313d76 Add/update msrv for common/*, crypto/*, coins/*, and substrate/*
This includes all published crates.
2023-11-21 01:19:40 -05:00
Luke Parker
4b85d4b03b Rename SubstrateSigner to BatchSigner 2023-11-20 22:21:52 -05:00
econsta
05d8c32be8 Multisig db (#444)
* implement db macro for processor/multisigs/db.rs

* ran fmt

* cargo +nightly fmt

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 21:40:54 -05:00
econsta
8634d90b6b implement db macro for processor/signer.rs (#438)
* implement db macro for processor/signer.rs

* Use ()

* CompletedDb -> CompletionsDb

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:45 -05:00
econsta
aa9daea6b0 implement db macro for processor/substrate_signer (#439)
* implement db macro for processor/substrate_signer

* Use ()

* Correct AttemptDb usage of ()

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:35 -05:00
Luke Parker
942873f99d Non-intrusive cargo update 2023-11-20 02:31:47 -05:00
Luke Parker
7c9b581723 Remove dtolnay's rust-toolchain action (#442)
* Remove dtolnay's rust-toolchain action

I believe our rust-toolchain.toml handles its use case exactly.

I don't believe this'll work, as it'd require rustup install a cargo stub
before any toolchain is installed, yet I want to confirm it doesn't.

* Place quotes around nightly toolchain version

* Put toolchain before options to resolve what appears to be a bug in rustup's help strings

* Add wasm32-unknown-unknown to clippy workflow
2023-11-20 02:31:22 -05:00
Luke Parker
b37a0db538 Move where the RISC-V toolchain is installed during the no-std workflow 2023-11-19 22:45:51 -05:00
Luke Parker
797604ad73 Replace usage of io::Error::new(io::ErrorKind::Other, with io::Error::other
Newly possible with Rust 1.74.
2023-11-19 18:31:37 -05:00
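For reference, the mechanical change this applies (the error text here is illustrative):

```rust
use std::io;

fn example() -> io::Result<()> {
  // Before:
  // Err(io::Error::new(io::ErrorKind::Other, "connection closed"))
  // After, via the constructor stabilized in Rust 1.74:
  Err(io::Error::other("connection closed"))
}
```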
Luke Parker
05b975dff9 Rust 1.74
Adds a Rust toolchain file to be less disruptive to developers who don't keep
their toolchain synchronized (by now having rustup automatically synchronize).

Hopefully helps resolve how +nightly clippy may pass for the coordinator, yet
building would fail due to stable's (hopefully prior?) failure to model some
async functions re: Send/Sync.

Also adds rust-src as a component in preparation of
https://github.com/paritytech/polkadot-sdk/pull/2217
2023-11-19 17:47:46 -05:00
Luke Parker
74a8df4c7b Add a new primitive of a DB-backed channel
The coordinator already had one of these, albeit implemented much worse than
the one now properly introduced. It had to either be sending or receiving,
whereas the new one can do both at the same time.

This replaces said instance and enables pleasant patterns when implementing the
processor/coordinator.
2023-11-19 02:05:01 -05:00
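A rough sketch of the primitive's shape, with a HashMap standing in for the DB and all names hypothetical: messages persist under an incrementing send counter while a receive counter tracks consumption, letting both directions operate at the same time and survive restarts.

```rust
use std::collections::HashMap;

struct DbChannel {
  // Stand-in for the actual DB handle/transaction.
  db: HashMap<Vec<u8>, Vec<u8>>,
}

impl DbChannel {
  fn counter(&self, name: &str) -> u64 {
    self
      .db
      .get(name.as_bytes())
      .map_or(0, |raw| u64::from_le_bytes(raw.as_slice().try_into().unwrap()))
  }
  fn set_counter(&mut self, name: &str, value: u64) {
    self.db.insert(name.as_bytes().to_vec(), value.to_le_bytes().to_vec());
  }
  // Persist a message under the next send index.
  fn send(&mut self, msg: &[u8]) {
    let id = self.counter("send");
    self.db.insert(format!("msg {id}").into_bytes(), msg.to_vec());
    self.set_counter("send", id + 1);
  }
  // Read the next unconsumed message, if one exists.
  fn recv(&mut self) -> Option<Vec<u8>> {
    let id = self.counter("recv");
    let msg = self.db.get(format!("msg {id}").as_bytes()).cloned()?;
    self.set_counter("recv", id + 1);
    Some(msg)
  }
}
```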
Luke Parker
bd14db9f76 Version pin foundry
Fixes #116 and #441.
2023-11-19 01:21:43 -05:00
Luke Parker
25c02c1311 Ban Serai from setting keys
They were already banned from publishing `Batch`s, yet the ability to set keys
enabled changing how certain functionality in the coordinator operated.
Removing this malleability ensures operation as expected.
2023-11-18 21:28:16 -05:00
Luke Parker
b018fc432c Add deletion to create_db 2023-11-18 20:56:58 -05:00
Luke Parker
6e4ecbc90c Support trailing commas in create_db 2023-11-18 20:54:37 -05:00
Luke Parker
be48dcc4a4 Use multiple LibP2P topics
We still need a peering protocol... Hopefully, we can read peers off of the
Substrate node's DHT.
2023-11-18 20:37:55 -05:00
Luke Parker
25066437da Attempt to resolve #434
Worsens the coordinator tests' ensuring of the validity of the cosigning
protocol, yet should actually properly model it.
2023-11-18 00:12:36 -05:00
Luke Parker
db49a63c2b Bump zeroize to appease cargo deny 2023-11-16 14:18:00 -05:00
Luke Parker
ee50f584aa Add saner log statements to the coordinator
Disables trace on every single P2P message.

Logs a short-form of each transaction.
2023-11-16 13:37:39 -05:00
Luke Parker
14f3f330db Remove deadlock in cosign_evaluator 2023-11-16 13:32:15 -05:00
Luke Parker
30a77d863f Include non-JSON response from Monero node in error 2023-11-15 22:57:37 -05:00
Luke Parker
a03a1edbff Remove needless transposition in coordinator 2023-11-15 22:49:58 -05:00
Luke Parker
c03afbe03e Downgrade RustCrypto packages which violate Rust's interpretation of semver 2023-11-15 21:24:52 -05:00
Luke Parker
af611a39bf Restore runtime feature to hyper 2023-11-15 20:40:24 -05:00
Luke Parker
0d080e6d34 Remove unused dependency from dex-pallet 2023-11-15 20:24:33 -05:00
Luke Parker
369af0fab5 #339 addendum 2023-11-15 20:23:19 -05:00
Luke Parker
d25e3d86a2 Make TLS an optional feature of simple-request
Removes 14 crates from the tree when compiling the message-queue client.

Also performs a non-intrusive cargo update.
2023-11-15 17:24:11 -05:00
Luke Parker
96f1d26f7a Add a cosigning protocol to ensure finalizations are unique (#433)
* Add a function to deterministically decide which Serai blocks should be co-signed

Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.

* Get all active tributaries we're in at a specific block

* Add and route CosignSubstrateBlock, a new provided TX

* Split queued cosigns per network

* Rename BatchSignId to SubstrateSignId

* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it

* Handle the CosignSubstrateBlock provided TX

* Revert substrate_signer.rs to develop (and patch to still work)

Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.

* Route cosigning through the processor

* Add note to rename SubstrateSigner post-PR

I don't want to do so now in order to preserve the diff's clarity.

* Implement cosign evaluation into the coordinator

* Get tests to compile

* Bug fixes, mark blocks without cosigners available as cosigned

* Correct the ID Batch preprocesses are saved under, add log statements

* Create a dedicated function to handle cosigns

* Correct the flow around Batch verification/queueing

Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).

Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.

When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.

This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.

Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.

* Working full-stack tests

After the last commit, this only required extending a timeout.

* Replace "co-sign" with "cosign" to make finding text easier

* Update the coordinator tests to support cosigning

* Inline prior_batch calculation to prevent panic on rotation

Noticed when doing a final review of the branch.
2023-11-15 16:57:21 -05:00
Luke Parker
79e4cce2f6 Explicitly set features in modular-frost 2023-11-13 05:31:47 -05:00
Luke Parker
0c341e3546 Fix no-std builds 2023-11-13 05:19:53 -05:00
Luke Parker
9f0790fb83 Remove RecommendedTranscript from DKG MuSig
Resolves #391.

Given this code already wasn't modular/composable, this should be overall
equivalent regarding functionality and security. It's much less opinionated
though and has fewer dependencies.
2023-11-13 05:11:40 -05:00
Luke Parker
bb8e034e68 Test historic start times in tendermint-machine
Closes https://github.com/serai-dex/serai/issues/342.

Under ideal network conditions, this is fine. While I won't claim ideal network
conditions will occur IRL, b0fcdd3367 has the
Tributary rebroadcast messages and should brute-force its way into a
functioning system.
2023-11-13 00:43:35 -05:00
Luke Parker
3f7bdaa64b Make the output distribution cache only available under a feature
Enables a mode with reduced memory usage *and* increased safety given current
unsafety of the cache.

Relevant to https://github.com/serai-dex/serai/issues/415.
2023-11-13 00:24:54 -05:00
Luke Parker
351436a258 Dockerfile Parts (#428)
* De-duplicate Dockerfiles by using a bash file to concatenate common parts

Resolves #375.

Dockerfiles are still committed to the repo to avoid a dependency on bash.

* Add a CI job to confirm the committed dockerfiles are the currently generated ones

* Create dedicated Dockerfiles per processor network

Ensures the compromising of network-specific dependencies doesn't lead to a
compromise of the build process for all processors.

* Dockerfile corrections

* Correct call to build processor Docker image in tests/processor
2023-11-12 23:55:15 -05:00
David Bell
c328e5ea68 Convert coordinator/tributary/nonce_decider to use create_db macro (#423)
* chore: convert nonce_decider to use create_db macro

* Restore pub NonceDecider

* Remove extraneous comma

I forgot to run git commit --amend on the prior commit :/

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 12:04:34 -05:00
Boog900
995734c960 Monero: add more legacy verify functions (#383)
* Add v1 ring sig verifying

* allow calculating signature hash for v1 txs

* add unreduced scalar type with recovery

I have added this type for Borromean sigs. The ee field can be a normal
scalar, as in the verify function the ee field is checked against a reduced
scalar, meaning for it to verify as correct, ee must be reduced

* change block major/minor versions to u8

this matches Monero

I have also changed a couple varint functions to accept the `VarInt`
trait

* expose `serialize_hashable` on `Block`

* add back MLSAG verifying functions

I still need to revert the commit removing support for >1 input MLSAG FULL

This adds a new rct type to separate Full and simple rct

* add back support for multiple inputs for RCT FULL

* comment `non_adjacent_form` function

also added `#[allow(clippy::needless_range_loop)]` around a loop, as satisfying clippy without a re-write would make the function worse.

* Improve Mlsag verifying API

* fix rebase errors

* revert the changes on `reserialize_chain`
plus other misc changes

* fix no-std

* Reduce the number of RPC calls needed for `get_block_by_number`.
This function was causing me problems; every now and then, a node would return a block with a different number than requested.

* change `serialize_hashable` to give the POW hashing blob.

Monero calculates the POW hash and the block hash using *slightly* different blobs :/

* make ring_signatures public and add length check when verifying.

* Misc improvements and bug fixes

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 10:18:18 -05:00
Luke Parker
54f1929078 Route blame between Processor and Coordinator (#427)
* Have processor report errors during the DKG to the coordinator

* Add RemoveParticipant, InvalidDkgShare to coordinator

* Route DKG blame around coordinator

* Allow public construction of AdditionalBlameMachine

Necessary for upcoming work on handling DKG blame in the processor and
coordinator.

Additionally fixes a publicly reachable panic when commitments parsed with one
ThresholdParams are used in a machine using another set of ThresholdParams.

Renames InvalidProofOfKnowledge to InvalidCommitments.

* Remove unused error from dleq

* Implement support for VerifyBlame in the processor

* Have coordinator send the processor share message relevant to Blame

* Remove desync between processors reporting InvalidShare and ones reporting GeneratedKeyPair

* Route blame on sign between processor and coordinator

Doesn't yet act on it in coordinator.

* Move txn usage as needed for stable Rust to build

* Correct InvalidDkgShare serialization
2023-11-12 07:24:41 -05:00
akildemir
d015ee96a3 Dex improvements (#422)
* remove dex traits&balance types

* remove liq tokens pallet in favor of coins-pallet instance

* fix tests & benchmarks

* remove liquidity tokens trait

* fix CI

* fix pr comments

* Slight renamings

* Add burn_with_instruction as a negative to LiquidityTokens CallFilter

* Remove use of One, Zero, Saturating traits in dex pallet

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 06:37:31 -05:00
Luke Parker
a43815f101 Restore Foundry to a test dependency via direct usage of solc 2023-11-12 04:34:45 -05:00
Luke Parker
7f1732c8c0 cargo update to snow 0.9.4 2023-11-12 00:40:32 -05:00
Luke Parker
ed2445390f Replace post-detection of if a Plan is forwarded by noting if it's from the scanner 2023-11-09 14:54:38 -05:00
Luke Parker
52a0c56016 Rename Network::address to Network::external_address
Improves clarity since we now have 4 addresses.
2023-11-09 14:31:46 -05:00
Luke Parker
42e8f2c8d8 Add OutputType::Forwarded to ensure a user's transfer in isn't misclassified
If a user transferred in without an InInstruction, and the amount exactly
matched a forwarded output, the user's output would fulfill the
forwarding. Then the forwarded output would come along, have no InInstruction,
and be refunded (to the prior multisig) when the user should've been refunded.

Adding this new address type resolves such concerns.
2023-11-09 14:24:13 -05:00
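A sketch of the resulting classification; the variant set follows this commit's framing, and the exact definitions may differ:

```rust
// Each scanned output is classified by origin, so handling can't be confused
// by two outputs sharing an amount.
enum OutputType {
  // A transfer in from a user.
  External,
  // Outputs created by the multisig's own plans.
  Branch,
  Change,
  // An output explicitly forwarded from the prior multisig; a user transfer
  // of the exact same amount can no longer satisfy the forwarding.
  Forwarded,
}
```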
Luke Parker
b51204a4eb Replace usage of ethers-signers with 11 lines of ECDSA code 2023-11-09 13:22:43 -05:00
Luke Parker
ec51fa233a Document an accepted false positive 2023-11-09 12:41:15 -05:00
Luke Parker
ce4091695f Document a choice of variable name 2023-11-09 12:38:06 -05:00
Luke Parker
24919cfc54 Resolve race condition regarding when forwarded output is set
The higher-level scanner code in multisigs/mod.rs now creates a series of plans
with limited context. These include forwarding and refunding plans, moving all
handling of forwarding flags on the scanner's clock and therefore safe.

Also simplifies the refunding a decent bit.
2023-11-09 12:37:07 -05:00
Luke Parker
bf41009c5a Document critical race condition due to two distinct clocks operating over the same data 2023-11-09 08:41:22 -05:00
Luke Parker
e8e9e212df Move additional functions which retry until success into Network trait 2023-11-09 07:16:15 -05:00
Luke Parker
19187d2c30 Implement calculation of monotonic network times for Bitcoin and Monero 2023-11-09 07:02:52 -05:00
Luke Parker
43ae6794db Remove invalid TODOs from processor signers 2023-11-09 03:53:30 -05:00
Luke Parker
978134a9d1 Remove events from SubstrateSigner
Same vibes as prior commit.
2023-11-09 01:56:09 -05:00
Luke Parker
2eb155753a Remove the Signer events pseudo-channel for a returned message
Also replaces SignerEvent with usage of ProcessorMessage directly.
2023-11-09 01:26:30 -05:00
Luke Parker
7d72e224f0 Remove Output::amount and move Payment from Amount to Balance
This code is still largely designed around the idea a payment for a network is
fungible with any other, which isn't true. This starts moving past that.

Asserts are added to ensure the integrity of coin to the scheduler (which is
now per key per coin, not per key alone) and in Bitcoin/Monero prepare_send.
2023-11-08 23:33:25 -05:00
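A sketch of the type-level distinction, simplified from Serai's primitives:

```rust
struct Amount(u64);

// Identifies a specific coin; a network may host multiple coins.
enum Coin {
  Bitcoin,
  Ether,
  Monero,
  // ...
}

// Binding the amount to its coin prevents treating payments for one coin as
// fungible with payments for another.
struct Balance {
  coin: Coin,
  amount: Amount,
}
```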
Luke Parker
ffedba7a05 Update processor tests to refund logic 2023-11-08 21:59:11 -05:00
Luke Parker
06e627a562 Support refunds as possible for invalidly received outputs on Serai 2023-11-08 11:26:28 -05:00
Luke Parker
11f66c741d Remove ethers-middleware 2023-11-08 08:19:12 -05:00
Luke Parker
a0a2ef22e4 Remove ethers-solc
ethers-solc was used for a type (now manually specified) and to call out to
solc. Since Foundry was already a documented dependency, a call to it now
handles building.

Removing this single crate removes a total of 17 crates from our dependency
tree. While these may still be around due to Foundry, they at least may not
be.

Further work to remove the requirement on Foundry for solc alone would be
appreciated.
2023-11-08 06:25:35 -05:00
Luke Parker
5e290a29d9 Remove frame-benchmarking-cli
Not currently used, notably increases our dependency tree.

I wouldn't remove it if we planned to use it. From my understanding, all
benchmarking will be per pallet, voiding our need to have this for the node.
2023-11-08 05:59:56 -05:00
Luke Parker
a688350f44 Have processor's Network::new sleep until booted, not panic 2023-11-08 03:21:28 -05:00
Luke Parker
bc07e14b1e Remove async_recursion for a for loop 2023-11-07 23:07:26 -05:00
Luke Parker
e1c07d89e0 Retry RPC requests once on error
I don't like blindly retrying in the Monero library. The amount of errors,
which weren't present with reqwest (well, the error rate was the same, yet due
to a distinct bug this code fixed), demand we do *something* though.

The trace log shows hyper is erroring with 0 bytes of the response read. My
guess is it's somehow a closed connection? A connection pool would detect this
and have created a new connection (as this does, except once finding out
there's an issue).

While we should be able to detect this with `ready()`, we do call ready and it
claims no error. We also can successfully write which makes this... a mess.
Hopefully, it either actually works as intended, yet it at least requires two
consecutive errors which should be much less frequent.
2023-11-07 22:55:29 -05:00
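The retry-once behavior reduces to something like the following generic sketch (not the RPC client's actual code):

```rust
// Tolerate a single error by retrying once; two consecutive errors are
// surfaced to the caller.
fn with_single_retry<T, E>(mut attempt: impl FnMut() -> Result<T, E>) -> Result<T, E> {
  attempt().or_else(|_| attempt())
}
```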
Luke Parker
56fd11ab8d Use a single long-lived RPC connection when authenticated
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessages I couldn't track down. This
attempts to resolve those by a long-lived socket.

Halves the amount of requests per-authenticated RPC call, and accordingly is
likely still better overall.

I don't believe this is resolved yet but this is still worth pushing.
2023-11-07 17:42:19 -05:00
Luke Parker
c03fb6c71b Add dedicated BatchSignId 2023-11-06 20:06:36 -05:00
Luke Parker
96f94966b7 Restore accidentally deleted function 2023-11-06 18:37:18 -05:00
Luke Parker
b65ba17007 Fix accumulated bugs 2023-11-06 18:12:53 -05:00
Luke Parker
c9003874ad Remove ethers mono-crate
Reduces size of ethereum-serai and gives us clarity on what's used.

Next should be removing the ethers-provided signing code.
2023-11-06 17:30:50 -05:00
Luke Parker
205bec36e5 try_from -> from 2023-11-06 17:00:09 -05:00
Luke Parker
df8b455d54 Don't generate RuntimeCall::System
Completely unused yet would be permanently part of our protocol if left alone.
2023-11-06 16:59:30 -05:00
Luke Parker
84a0bcad51 Move monero-serai to simple-request
Deduplicates code across the entire repo, letting us make improvements in a
single place.
2023-11-06 11:45:33 -05:00
Luke Parker
b680bb532b Don't default to basic-auth if it's enabled, yet require it to be specified 2023-11-06 10:42:01 -05:00
Luke Parker
b9983bf133 Replace reqwest with simple-request
reqwest was replaced with hyper and hyper-rustls within monero-serai due to
reqwest *solely* offering a connection pool API. In the process, it was
demonstrated how quickly we can achieve equivalent functionality to reqwest for
our use cases with a fraction of the code.

This adds our own reqwest alternative to the tree, applying it to both
bitcoin-serai and message-queue. By doing so, bitcoin-serai decreases its tree
by 21 packages and the processor by 18. Cargo.lock decreases by 8 dependencies,
solely adding simple-request. Notably removed is openssl-sys and openssl.

One noted decrease in functionality is the requirement that the system have
installed CA certificates. While we could fall back to the rustls certificates
if the system doesn't have any, that's blocked by
https://github.com/rustls/hyper-rustls/pulls/228.
2023-11-06 09:47:12 -05:00
Luke Parker
cddb44ae3f Bitcoin tweaks + cargo update
Removes bitcoin-serai's usage of sha2 for bitcoin-hashes. While sha2 is still
in play due to modular-frost (more specifically, due to ciphersuite), this
offers a bit more performance (assuming equivalency between sha2 and
bitcoin-hashes' impl) due to removing a static for a const.

Makes secp256k1 a dev dependency for bitcoin-serai. While secp256k1 is still
pulled in via bitcoin, it's hopefully slightly better to compile now and makes
usage of secp256k1 an implementation detail of bitcoin (letting it change it
freely).

Also offers slightly more efficient signing as we don't decode to a signature
just to re-encode for the transaction.

Removes a 20s sleep for a check every second, up to 20 times, for reduced test
times in the processor.
2023-11-06 07:38:36 -05:00
hinto.janai
bd3272a9f2 replace lazy_static! with once_cell::sync::Lazy 2023-11-06 05:31:46 -05:00
Luke Parker
de41be6e26 Slash on SignCompleted for unrecognized plan 2023-11-05 13:42:01 -05:00
Luke Parker
b8ac8e697b Add missing crate to tests, remove no longer present RUSTSEC ignore 2023-11-05 12:11:08 -05:00
akildemir
899a9604e1 Add Dex pallet (#407)
* Move pallet-asset-conversion

* update licensing

* initial integration

* Integrate Currency & Assets types

* integrate liquidity tokens

* fmt

* integrate dex pallet tests

* fmt

* compilation error fixes

* integrate dex benchmarks

* fmt

* cargo clippy

* replace all occurrences of "asset" with "coin"

* add the actual add liq/swap logic to in-instructions

* add client side & tests

* fix deny

* Lint and changes

- Renames InInstruction::AddLiquidity to InInstruction::SwapAndAddLiquidity
- Makes create_pool an internal function
- Makes dex-pallet exclusively create pools against a native coin
- Removes various fees
- Adds new crates to GH workflow

* Fix rebase artifacts

* Correct other rebase artifact

* Correct CI specification for liquidity-tokens

* Correct primitives' test to the standardized pallet account scheme

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 12:02:34 -05:00
David Bell
facb5817c4 Database Macro (#408)
* db_macro

* wip: converted processor/key_gen to use create_db macro

* wip: converted processor/key_gen to use create_db macro

* wip: formatting

* fix: added no_run to doc

* fix: documentation example had extra parenths

* fix: ignore doc test entirely

* Corrections from rebasing

* Misc lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 09:47:24 -05:00
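For context, a simplified picture of what such a macro saves writing by hand; this expansion is hypothetical, not the macro's actual output:

```rust
use std::collections::HashMap;

// Per declared entry, the macro conceptually generates a struct with a
// namespaced key derivation plus typed get/set helpers over the DB.
struct ParamsDb;
impl ParamsDb {
  fn key(session: u32) -> Vec<u8> {
    let mut key = b"KeyGenDb_ParamsDb".to_vec();
    key.extend(session.to_le_bytes());
    key
  }
  fn set(txn: &mut HashMap<Vec<u8>, Vec<u8>>, session: u32, value: &[u8]) {
    txn.insert(Self::key(session), value.to_vec());
  }
  fn get(db: &HashMap<Vec<u8>, Vec<u8>>, session: u32) -> Option<Vec<u8>> {
    db.get(&Self::key(session)).cloned()
  }
}
```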
akildemir
97fedf65d0 add reasons to slash evidence (#414)
* add reasons to slash evidence

* fix CI failing

* Remove unnecessary clones

.encode() takes &self

* InvalidVr to InvalidValidRound

* Unrelated to this PR: Clarify reasoning/potentials behind dropping evidence

* Clarify prevotes in SlashEvidence test

* Replace use of read_to_end

* Restore decode_signed_message

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 00:04:41 -04:00
Luke Parker
257323c1e5 log::debug all Monero RPC errors 2023-11-05 00:02:58 -04:00
Luke Parker
deecd77aec Don't drop the request sender before we finish reading the response
It *looks like* hyper will drop the connection once its request sender is
dropped, regardless of if the last request hasn't had its response completed.
This attempts to resolve some spurious connection errors.
2023-11-04 22:55:47 -04:00
Luke Parker
360b264a0f Remove unused dependencies 2023-11-04 19:26:38 -04:00
Luke Parker
e05b77d830 Support multiple key shares per validator (#416)
* Update the coordinator to give key shares based on weight, not based on existence

Participants are now identified by their starting index. While this compiles,
the following is unimplemented:

1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.

* Add a fn to the DkgConfirmer to convert `i` values as needed

Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.

* Have the Tributary DB track participation by shares, not by count

* Prevent a node from obtaining 34% of the maximum amount of key shares

This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.

* Correct the mechanism to detect if sufficient accumulation has occurred

It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't previously met and is now met.

* Finish updating the coordinator to handle a multiple key share per validator environment

* Adjust strategy re: preventing nonce reuse in DKG Confirmer

* Add TODOs regarding dropped transactions, add possible TODO fix

* Update tests/coordinator

This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.

* Update processor key_gen to handle generating multiple key shares at once

* Update SubstrateSigner

* Update signer, clippy

* Update processor tests

* Update processor docker tests
2023-11-04 19:26:13 -04:00
Luke Parker
5970a455d0 Replace crc dependency with our own crc implementation
It's ~30 lines to remove 2 crates in our tree.
2023-11-03 06:44:23 -04:00
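In the same spirit as that ~30-line replacement, a bitwise CRC32 (IEEE) fits in a handful of lines; the polynomial/width actually used in the tree may differ:

```rust
// Reflected CRC32 with the IEEE polynomial, computed bit by bit.
fn crc32(data: &[u8]) -> u32 {
  let mut crc = !0u32;
  for byte in data {
    crc ^= u32::from(*byte);
    for _ in 0 .. 8 {
      let lsb_set = (crc & 1) == 1;
      crc >>= 1;
      if lsb_set {
        crc ^= 0xEDB8_8320;
      }
    }
  }
  !crc
}

fn main() {
  // Standard check value for CRC32 over "123456789".
  assert_eq!(crc32(b"123456789"), 0xCBF4_3926);
}
```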
Luke Parker
4c9e3b085b Add a String to Monero ConnectionErrors debugging the issue
We're reaching this in CI so there must be some issue present.
2023-11-03 05:45:33 -04:00
github-actions[bot]
a2089c61fb November 2023 - Rust Nightly Update (#413)
* Update nightly

* Replace .get(0) with .first()

* allow new clippy lint

---------

Co-authored-by: GitHub Actions <>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-03 05:28:07 -04:00
Luke Parker
ae449535ff Correct no-std builds 2023-10-31 07:55:25 -04:00
Luke Parker
05dc474cb3 Correct std feature-flagging
If a crate has std set, it should enable std for all dependencies in order to
let them properly select which algorithms to use. Some crates fallback to
slower/worse algorithms on no-std.

Also more aggressively sets default-features = false leading to a *10%*
reduction in the amount of crates coordinator builds.
2023-10-31 07:44:02 -04:00
Luke Parker
34bcb9eb01 bitcoin 0.31 2023-10-31 03:47:45 -04:00
Luke Parker
2958f196fc ed448 with generic-array 1.0 2023-10-29 09:07:14 -04:00
Luke Parker
92cebcd911 Add ca-certificates to processor Docker image 2023-10-28 04:06:00 -04:00
Luke Parker
a3278dfb31 Add ca-certificates as a dependency to the CI 2023-10-28 00:45:00 -04:00
Luke Parker
f02db19670 cargo update
Removes a pair of duplicated dependencies.
2023-10-27 23:14:54 -04:00
Luke Parker
652c878f54 Note slight malleability in batch verification 2023-10-27 23:08:31 -04:00
Luke Parker
da01de08c9 Restore logger init in processor tests 2023-10-27 23:08:06 -04:00
Luke Parker
052ef39a25 Replace reqwest with hyper in monero-serai
Ensures a connection pool isn't used behind-the-scenes, as necessitated by
authenticated connections.
2023-10-27 23:05:47 -04:00
Luke Parker
87fdc8ce35 Use Build Dependencies over Test Dependencies where we can 2023-10-26 15:15:29 -04:00
Luke Parker
86ff0ae71b No longer run processor tests again when testing against 0.17.3.2
Even though the intent was to test against 0.17.3.2, and a Monero 0.17.3.2 node
was running, the processor now uses docker which will always use 0.18.
Accordingly, while the intent was valid, it was pointless.

This is unfortunate, as testing against 0.17 helped protect against edge cases.
The infra to preserve their tests isn't worth the benefit we'd gain from said
tests however.
2023-10-26 14:29:51 -04:00
Luke Parker
3069138475 Fix handling of Monero daemon connections when using an authenticated RPC
The lack of locking the connection when making an authenticated request, which
is actually two sequential requests, risked another caller making a request in
between, invalidating the state.

Now, only unauthenticated connections share a connection object.
2023-10-26 12:45:39 -04:00
Luke Parker
f22aedc007 Add tweaks meant for prior commit 2023-10-24 04:34:47 -04:00
Luke Parker
7be20c451d cargo update, resolving two yanked crates (ahash, lalrpop) 2023-10-24 04:03:51 -04:00
Luke Parker
0198d4cc46 Add a TributaryState struct with higher-level DB logic 2023-10-24 03:55:13 -04:00
Luke Parker
7c10873cd5 Tweak how the Monero node is run for the processor tests
Disables the unused zmq RPC.

Removes authentication which seems to be unstable as hell when under load
(see #351).

No longer use Network::Isolated as it's not needed here (the Monero nodes run
with `--offline`).
2023-10-23 07:56:55 -04:00
Luke Parker
08180cc563 Resolve #405 2023-10-23 07:56:43 -04:00
Luke Parker
c4bdbdde11 dockertest 0.4 (#406)
* Updates to modern dockertest

* More updates to latest dockertest

* Update Cargo.lock to dockertest with handle restored

* clippy coordinator tests

* clippy full-stack tests

* Remove kayabaNerve branch for official repo's latest commit hash

* Update serai-client, remove reliance on the existence of a handle fn

* Don't use the hex encoding of unique_id in dockertests

Gets our hostnames just below 64 bytes, resolving test failures on at least
Debian-based systems.

* Use Network::Isolated for all dockertest instances

* Correct error from prior commit's edits
2023-10-23 06:59:38 -04:00
Luke Parker
0d23964762 Resolve #335 2023-10-23 05:10:13 -04:00
Luke Parker
fbf51e53ec Resolve #327
Also runs `cargo update` and moves where we install the wasm toolchain in the
Dockerfile for better caching properties.
2023-10-23 00:45:00 -04:00
Luke Parker
fd1826cca9 Implement a fee on every input to prevent prior described economic attacks
Completes #297.
2023-10-22 21:31:13 -04:00
Luke Parker
f561fa9ba1 Fix a bug which would attempt to create a transaction with N::MAX_INPUTS + 1 2023-10-22 19:15:52 -04:00
Luke Parker
f4fc539e14 Remove constants inlined into bitcoin-serai for bitcoin::policy-provided constants 2023-10-22 18:08:36 -04:00
Luke Parker
0fff5391a8 Improve the reasoning for why the Bitcoin DUST constant is set as it is
Also halves the minimum fee policy, which still may be 2x-4x higher than
necessary due to API limitations within bitcoin-serai (which we can fix as it's
within our scope).
2023-10-22 18:06:44 -04:00
Luke Parker
a71a789912 Monero median_fee fn 2023-10-22 17:43:21 -04:00
Luke Parker
83c41eccd4 Bitcoin Dust constant justification, median_fee fn 2023-10-22 07:03:33 -04:00
Luke Parker
e5113c333e Move LastBatchBlock, LastBatch sets by their checks 2023-10-22 05:49:19 -04:00
Luke Parker
6068978676 Use a transaction layer when executing each InInstruction 2023-10-22 05:46:03 -04:00
Luke Parker
e7e30150f0 Don't let the Serai set publish batches 2023-10-22 05:38:44 -04:00
Luke Parker
55fe27f41a Don't allow the Bitcoin set to mint sriETH 2023-10-22 05:37:23 -04:00
Luke Parker
d66a7ee43e Remove the staking pallet for validator-sets alone
The staking pallet is an indirection which offered no practical benefit yet
increased the overhead of every call.
2023-10-22 04:00:42 -04:00
Luke Parker
a702d65c3d validator-sets pallet event expansion 2023-10-22 03:28:42 -04:00
Luke Parker
52eb68677a pallet-staking event 2023-10-22 03:19:01 -04:00
Luke Parker
d29d19bdfe Ensure pallet-session is initialized after staking 2023-10-22 02:52:34 -04:00
Luke Parker
c3fdb9d9df Use a comprehensive call filter in the runtime 2023-10-21 21:35:14 -04:00
Luke Parker
46e1f85085 Remove unnecessary ENV flags from Bitcoin Dockerfile 2023-10-21 20:12:41 -04:00
Luke Parker
1bff2a0447 Add signals pallet
Resolves #353

Implements code such that:

- 80% of validators (by stake) must be in favor of a signal for the network to
  be
- 80% of networks (by stake) must be in favor of a signal for it to be locked
  in
- After a signal has been locked in for two weeks, the network halts

The intention is to:

1) Not allow validators to unilaterally declare new consensus rules.

No method of declaring new consensus rules is provided by this pallet. Solely a
way to deprecate the current rules, with a signaled-for successor. All nodes
must then individually decide whether or not to download and run a new node
which has new rules, and if so, which rules.

2) Not place blobs on chain.

Even if they'd be reproducible, it's just a lot of data to chuck on the
blockchain.
2023-10-21 20:06:55 -04:00
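The favor checks reduce to stake-weighted threshold comparisons; a float-free sketch (not the pallet's code):

```rust
// 80% of stake must be in favor: favoring / total >= 4 / 5, kept in integers
// (widened to u128 to avoid overflow).
fn sufficient_favor(favoring_stake: u64, total_stake: u64) -> bool {
  u128::from(favoring_stake) * 5 >= u128::from(total_stake) * 4
}
```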
Luke Parker
b66203ae3f Update Bitcoin Docker image to 25.1
Also decreases the Bitcoin dummy fee.
2023-10-20 18:52:43 -04:00
Luke Parker
43a182fc4c Reduce dummy fee used by Monero 2023-10-20 17:57:02 -04:00
Luke Parker
8ead7a2581 Add missing std feature flags to a couple Substrate dependencies 2023-10-20 17:53:07 -04:00
Luke Parker
3797679755 Track total allocated stake in validator-sets pallet 2023-10-20 16:58:44 -04:00
Luke Parker
c056b751fe Remove Fee from the Network API
The only benefit to having it would be the ability to cache it across
prepare_send, which can be done internally to the Network.
2023-10-20 16:12:28 -04:00
Luke Parker
5977121c48 Don't mutate Plans when signing
This is achieved by not using the Plan struct anymore, yet rather its
decomposition. While less ergonomic, it meets our wants re: safety.
2023-10-20 10:56:18 -04:00
Luke Parker
7b6181ecdb Remove Plan ID non-determinism leading Monero to have distinct TX fees
Monero would select decoys with a new RNG seed, which may have used more bytes,
increasing the fee.

There's a few comments here.

1) Non-determinism wasn't removed via distinguishing the edits. It was done by
   removing part of the transcript. A TODO exists to improve this.
2) Distinct TX fees is a test failure, not an issue in prod *unless* the distinct
   fee is greater. So long as the distinct fee is lesser, it's fine.
3) Removing outputs is expected to only decrease fees.
2023-10-20 08:11:42 -04:00
EmmanuelChthonic
f976bc86ac fix key fn not being called in getter 2023-10-20 07:34:19 -04:00
Luke Parker
441bf62e11 Simplify amortize_fee, correct scheduler's amortizing of branch fees 2023-10-20 05:40:16 -04:00
Luke Parker
4852dcaab7 Move common code from prepare_send into Network trait 2023-10-20 04:42:08 -04:00
Luke Parker
d6bc1c1ea3 Explicitly only adjust operating costs when plan.change.is_some()
The existing code should've mostly handled this fine. Only a single edge case
(TX fee reduction on no-change Plans) would cause an improper increase in
operating costs.
2023-10-19 23:16:04 -04:00
Luke Parker
7b2dec63ce Don't scan outputs which are dust, track dust change as operating costs
Fixes #299.
2023-10-19 08:02:10 -04:00
Luke Parker
d833254b84 Other clippy fixes 2023-10-19 08:01:15 -04:00
Luke Parker
a1b2bdf0a2 clippy fixes 2023-10-19 06:30:58 -04:00
Luke Parker
43841f95fc cargo fmt 2023-10-19 06:24:52 -04:00
akildemir
fdfce9e207 Coins pallet (#399)
* initial implementation

* add function to get a balance of an account

* add support for multiple coins

* rename pallet to "coins-pallet"

* replace balances, assets and tokens pallet with coins pallet in runtime

* add total supply info

* update client side for new Coins pallet

* handle fees

* bug fixes

* Update FeeAccount test

* Fmt

* fix pr comments

* remove extraneous Imbalance type

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-19 06:22:21 -04:00
Luke Parker
3255c0ace5 Track and amortize operating costs to ensure solvency
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct than originally designed, yet much simpler.

Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.

Also updates documentation.
2023-10-19 03:13:44 -04:00
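A sketch of the system-wide tracking described above, with an illustrative API:

```rust
// Fees the protocol eats are accumulated globally, then amortized against
// future payments until recouped, keeping the system solvent.
struct OperatingCosts(u64);

impl OperatingCosts {
  fn incur(&mut self, fee: u64) {
    self.0 += fee;
  }
  // Deduct as much of the outstanding costs from this payment as possible.
  fn amortize(&mut self, payment_amount: &mut u64) {
    let taken = self.0.min(*payment_amount);
    *payment_amount -= taken;
    self.0 -= taken;
  }
}
```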
Luke Parker
057c3b7cf1 libp2p 0.52.4
Adds several packages into tree, deprecates an API we use. This commit does
update accordingly.

While this may not be preferable, it is inevitable.
2023-10-19 00:27:21 -04:00
Luke Parker
a29c410caf cargo update parking_lot_core, adding redox_syscall 2023-10-19 00:04:13 -04:00
Luke Parker
a6f40978cb cargo update regex, adding regex-syntax 2023-10-19 00:02:55 -04:00
Luke Parker
bdf6afc144 cargo update time, adding dep powerfmt 2023-10-19 00:02:27 -04:00
Luke Parker
a362b0a451 Non-intrusive cargo update
Updates yanked wasm-opt, updates tracing due to use-after-free, updates
everything else which doesn't grow the tree.
2023-10-18 21:45:49 -04:00
Luke Parker
041ed46171 Correct panic possible when jumping to a round with Precommit(None) 2023-10-18 16:46:14 -04:00
Luke Parker
9accddb2d7 cargo update libp2p-identity
Finally removes curve25519-dalek 3.2.
2023-10-16 15:07:39 -04:00
Luke Parker
e06e4edc8e Modernize protocol documentation 2023-10-16 05:11:31 -04:00
Luke Parker
e4d59eeeca Remove hashbrown from validator-sets pallet 2023-10-16 01:47:15 -04:00
Luke Parker
1f5b1bb514 Update to lazy_static 1.5.0
Requires a git dependency due to
https://github.com/rust-lang-nursery/lazy-static.rs/issues/201.

Justified due to bumping the internally spin to 0.9 (containing a bug fix for
0.9 in general (https://github.com/rust-lang-nursery/lazy-static.rs/pull/199)),
surpassing 0.7 (offering performance
(https://github.com/rust-lang-nursery/lazy-static.rs/pull/182)).
2023-10-16 01:39:15 -04:00
Luke Parker
1a3b6005c6 Various cargo updates 2023-10-15 23:57:45 -04:00
Luke Parker
7dc1a24bce Move DkgConfirmer to its own file, document 2023-10-15 01:39:56 -04:00
Luke Parker
3483f7fa73 Call fatal_slash where easy and appropriate 2023-10-15 00:32:51 -04:00
Luke Parker
a300a1029a Load/save first_preprocess with RecognizedIdType
Enables their IDs to have conflicts across each other.
2023-10-14 21:58:10 -04:00
Luke Parker
7409d0b3cf Rename add_active_tributary for clarity 2023-10-14 21:53:38 -04:00
Luke Parker
19e90b28b0 Have Tributary's add_transaction return a proper error
Modifies main.rs to properly handle the returned error.
2023-10-14 21:50:11 -04:00
Luke Parker
584943d1e9 Modify SubstrateBlockAck as needed
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.

Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
2023-10-14 20:37:54 -04:00
Luke Parker
62e1d63f47 Abort the P2P meta task when dropped
This should cause full cleanup of all Tributary async tasks, since the machine
already cleans itself up on drop.
2023-10-14 20:08:51 -04:00
Luke Parker
e4adaa8947 Further tweaks re: retiry 2023-10-14 19:55:14 -04:00
Luke Parker
3b3fdd104b Most of coordinator Tributary retiry
Adds Event::SetRetired to validator-sets.

Emit TributaryRetired.

Replaces is_active_set, which made multiple network requests, with
is_retired_tributary, a DB read.

Performs most of the removals necessary upon TributaryRetired.

Still needs to clean up the actual Tributary/Tendermint tasks.
2023-10-14 16:47:25 -04:00
Luke Parker
5897efd7c7 Clean out create_new_tributary
It made sense when the task was in main.rs. Now that it isn't, it's a pointless
indirection.
2023-10-14 16:09:24 -04:00
Luke Parker
863a7842ca Have every node respond to Heartbeat so they don't download the messages over the net 2023-10-14 15:27:40 -04:00
Luke Parker
f414735be5 Redo new_tributary from being over ActiveTributary to TributaryEvent
TributaryEvent also allows broadcasting a retiry event.
2023-10-14 15:27:39 -04:00
Luke Parker
5c5c097da9 Tweaks for processor to work with the new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
7d4e8b59db Update dockertests to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
e3e9939eaf Tidy Serai use in coordinator to new API 2023-10-14 15:26:36 -04:00
Luke Parker
530fba51dd Update coordinator to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
cb61c9052a Reorganize serai-client
Instead of functions taking a block hash, has a scope to a block hash before
functions can be called.

Separates functions by pallets.
2023-10-14 15:26:36 -04:00
Luke Parker
96cc5d0157 Remove a TODO re: an unhandled race condition 2023-10-14 00:41:07 -04:00
Luke Parker
7275a95907 Break handle_processor_messages out to handle_processor_message, move a helper fn to substrate 2023-10-13 23:36:07 -04:00
Luke Parker
80e5ca9328 Move heartbeat_tributaries and handle_p2p to p2p.rs 2023-10-13 22:40:11 -04:00
Luke Parker
67951c4971 Localize scan_substrate as substrate::scan_task 2023-10-13 22:31:54 -04:00
Luke Parker
4143fe9f47 Move scan_tributaries, shrinking coordinator's main.rs 2023-10-13 22:30:13 -04:00
Luke Parker
a73b19e2b8 Tweak coordinator test timing 2023-10-13 21:46:26 -04:00
Luke Parker
97c328e5fb Check tributaries are active before declaring them relevant 2023-10-13 21:46:17 -04:00
Luke Parker
96c397caa0 Add content-based deduplication to the tests' shimmed P2P
The tests have recently had their timing stilted, causing failures. The tests
are... fine. They're fragile, as obvious, yet they're logical. The simplest fix
is to unstilt their timing rather to make them non-fragile.

The recent change, which presumably caused said stilting, was the
rebroadcasting added. This de-duplication prevents most of the impact of
rebroadcasting. While there's still the async task, and the lock acquisition on
attempt to rebroadcast, this hopefully is enough.
2023-10-13 19:47:58 -04:00
akildemir
d5c6ed1a03 Improve provided handling (#381)
* fix typos

* remove tributary sleeping

* handle not locally provided txs

* use topic number instead of waiting list

* Clean-up, fixes

1) Uses a single TXN in provided
2) Doesn't continue on non-local provided inside verify_block, skipping further
   execution of checks
3) Upon local provision of already on-chain TX, compares

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-13 19:45:47 -04:00
Luke Parker
f6e8bc3352 Alternate handover batch TOCTOU fix (#397)
* Revert "Correct the prior documented TOCTOU"

This reverts commit d50fe87801.

* Correct the prior documented TOCTOU

d50fe87801 edited the challenge for the Batch to
fix it. This won't produce Batch n+1 until Batch n is successfully published
and verified. It's an alternative strategy able to be reviewed, with a much
smaller impact to scope.
2023-10-13 12:14:59 -04:00
Luke Parker
7d0d1dc382 Replace solc-select with svm-rs in CI and docs
svm-rs is already in tree as a library, so we may as well include it as a bin
instead of also pulling in solc-select.
2023-10-13 08:56:25 -04:00
Luke Parker
d50fe87801 Correct the prior documented TOCTOU
Now, if a malicious validator set publishes a malicious `Batch` at the last
moment, it'll cause all future `Batch`s signed by the next validator set to
require a bool being set (yet they never will set it).

This will prevent the handover.

The only overhead is having two distinct `batch_message` calls on-chain.
2023-10-13 04:41:01 -04:00
Luke Parker
e6aa9df428 Document TOCTOU allowing malicious validator set to trigger a handover to an honest set 2023-10-13 04:14:36 -04:00
Luke Parker
02edfd2935 Verify all `Batch`s published by the prior set
The new set publishing a `Batch` completes the handover protocol. The new set
should only publish a `Batch` once it believes the old set has completed all of
its on-external-chain activity, marking it honest and finite.

With the handover comes the acceptance of liability, hence the requirement for
all of the on-Serai-chain activity also needing verification. While most
activity would be verified in-real-time (upon ::Batch messages), the new set
will now explicitly verify the complete set of `Batch`s before beginning its
preprocess for its own `Batch` (the one accepting the handover).
2023-10-13 04:12:21 -04:00
Luke Parker
9aeece5bf6 Give one weight per key share to validators in Tributary 2023-10-13 02:29:22 -04:00
Luke Parker
bb84f7cf1d Correct ValidatorSets genesis 2023-10-13 01:42:26 -04:00
Luke Parker
bb25baf3bc Add logic to amortize excess key shares, correcting is_bft 2023-10-13 01:04:41 -04:00
Luke Parker
013a0cddfc MAX_VALIDATORS_PER_SET -> MAX_KEY_SHARES_PER_SET 2023-10-13 00:50:07 -04:00
Luke Parker
ed7300b406 Explicitly provide a pre_dispatch which calls validate_unsigned
pre_dispatch is guaranteed by documentation to be called and persisted.
validate_unsigned is not, though the provided pre_dispatch does by default call
validate_unsigned. By explicitly providing our own pre_dispatch, we accomplish
the bounds we require and expect, only being invalidated on Substrate
redefining their API.

We should still test this, yet since we call retire_session in
validate_unsigned, any test of rotation will test it's being properly called.
2023-10-13 00:31:23 -04:00
Luke Parker
88b5efda99 cargo fmt 2023-10-13 00:12:10 -04:00
Luke Parker
0712e6f107 Localize stake into networks
Sets a stake requirement of 100k for Serai and Monero, as Serai doesn't have
stake requirements and Monero isn't expected to see as much
volume/institutional support as Bitcoin/Ethereum.
2023-10-13 00:04:30 -04:00
Luke Parker
6a4c57e86f Define an array of all NetworkIds in serai_primitives 2023-10-12 23:59:21 -04:00
Luke Parker
b7746aa71d Don't allow (de)allocations which remove fault tolerance 2023-10-12 23:47:00 -04:00
Luke Parker
8dd41ee798 Allow immediate deallocation if the decrease doesn't cross a key-share threshold 2023-10-12 23:06:20 -04:00
Luke Parker
9a1d10f4ea Error if deallocation would remove fault tolerance 2023-10-12 23:05:29 -04:00
Luke Parker
6587590986 Grab up to 150 key shares of validators, not 150 validators 2023-10-12 22:44:10 -04:00
Luke Parker
b0fcdd3367 Regularly rebroadcast consensus messages to ensure presence even if dropped from the P2P layer
Attempts to fix #342, #382.
2023-10-12 22:14:42 -04:00
Luke Parker
15edea1389 Use an inner task to spawn Tributarys to minimize latency 2023-10-12 21:55:25 -04:00
Luke Parker
1d9e2efc33 Don't unwrap result of call which makes network requests 2023-10-12 18:49:49 -04:00
Luke Parker
f25f5cd368 Add a sleep statement to Batch publication errors to prevent log flooding/node hammering 2023-10-12 18:39:46 -04:00
Luke Parker
f847ac7077 Update to if-watch 3.1.0
Has a delta of -4 packages in tree.

Offers a potential to no longer have two sets of windows in-tree once packages
using 0.48 update to 0.51.
2023-10-12 18:37:32 -04:00
Luke Parker
29fcf6be4d Support immediate deallocations for non-active validators 2023-10-12 00:51:18 -04:00
Luke Parker
108e2b57d9 Add claim_deallocation to the staking pallet 2023-10-12 00:26:35 -04:00
Luke Parker
3da5577950 Only allow deallocations after the next set after the validator's inclusion starts, plus a one session cooldown period
Part of #394.
2023-10-11 23:42:15 -04:00
Luke Parker
f692047b8b Rename validators to select_validators 2023-10-11 17:23:09 -04:00
Luke Parker
2401266374 Replace mutate with get + set
I'm legitimately unsure why mutate doesn't work. Reading the impls, it should...
2023-10-11 02:11:44 -04:00
Luke Parker
ed90d1752a Minimal CI Attempts (#393)
* Add quotes around globs

* Don't remove current kernel

* Remove nvm due to nvme, go -> golang
2023-10-11 02:11:34 -04:00
Luke Parker
3261fde853 Remove homebrew from packages to remove 2023-10-11 01:19:04 -04:00
Luke Parker
7492adc473 Attempt to further minimize size of GH CI 2023-10-11 01:17:50 -04:00
Luke Parker
04f9a1fa31 Correct handling of InSet's keys 2023-10-11 01:05:48 -04:00
Luke Parker
13cbc99149 Properly define the on-chain handover protocol
The new key publishing `Batch`s is more than sufficient.

Also uses the correct key to verify the published `Batch`s authenticity.
2023-10-10 23:55:59 -04:00
Luke Parker
1a0b4198ba Correct the check for if we still need to set keys
The prior check had an edge case where once keys were pruned, it'd believe the
keys needed to be set ad infinitum.
2023-10-10 22:53:15 -04:00
Luke Parker
22371a6585 Also reinstall python3-pip 2023-10-10 22:34:19 -04:00
Luke Parker
b2d6a85ac0 Still remove sqlite3 to cause the majority of uninstalls 2023-10-10 22:31:07 -04:00
Luke Parker
44ca5e6520 Manually reinstall python3 after removing most packages 2023-10-10 22:29:25 -04:00
Luke Parker
83b7146e1a Specify shell when removing unused packages 2023-10-10 22:21:03 -04:00
Luke Parker
f193b896c1 Update to monero 0.18.3.1 2023-10-10 21:34:22 -04:00
Luke Parker
985795e99d Remove unused packages as part of build dependencies
The reproducible runtime test failed due to running out of space. If we have
multiple tests failing due to being out of space, and all of our tests have
these unused packages, it makes sense to just always uninstall them.

Also extends the time limit of reproducible-runtime, as 2h has been hit a few
times before.
2023-10-10 21:09:45 -04:00
Luke Parker
9bf8c92325 Correct the coordinator tests
They assumed processor 0 had keys `i = 1`. Under the new validator-set code,
the first key is the one with the highest amount, In case of tie, the key (or
as of the last commit, a Blake hash) decides order.

This commit kludges in a mapping from processor index to assigned key index, no
longer assuming its value.
2023-10-10 17:12:47 -04:00
Luke Parker
ab5af57dae Fix a pair of bugs in SortedAllocations and add further documentation
We took with `amount`, not `prior`, allowing multiple presences in
SortedAllocations.

While spam is limited by the amount of nibbles in the amount, the key provides
a much larger space to abuse. Inserting a cryptographic hash prevents its use
for abuse.
2023-10-10 16:50:48 -04:00
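A sketch of the key layout this implies, with illustrative names: the DB's lexicographic ordering sorts entries by amount, and a Blake2 hash of the validator key, rather than the raw key, breaks ties so the key can't be chosen to manipulate ordering.

```rust
use blake2::{Blake2b512, Digest};

fn sorted_allocation_key(amount: u64, validator_key: &[u8]) -> Vec<u8> {
  // Big-endian so lexicographic comparison matches numeric comparison.
  let mut key = amount.to_be_bytes().to_vec();
  // Hashing the validator key removes its usability as an abuse vector.
  key.extend(Blake2b512::digest(validator_key));
  key
}
```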
akildemir
98190b7b83 Staking pallet (#373)
* initial staking pallet

* add staking pallet to runtime

* support session rotation for serai

* optimizations & cleaning

* fix deny

* add serai network to initial networks

* a few tweaks & comments

* fix some pr comments

* Rewrite validator-sets with logarithmic algorithms

Uses the fact the underlying DB is sorted to achieve sorting of potential
validators by stake.

Removes release of deallocated stake for now.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-10 06:53:24 -04:00
akildemir
2f45bba2d4 fix tendermint invalid commit (#392)
* fix conflicting commit msg signing vs verifying

* fmt
2023-10-10 06:32:04 -04:00
Luke Parker
30d0bad175 Add extra assert to coordinator 2023-10-09 23:38:39 -04:00
Steven Chang
b8abc1e3cc README.md: Add links to Reddit, Telegram and Website 2023-10-07 13:32:52 -04:00
Luke Parker
b2ed2e961c Correct rust version used in CI/orchestration 2023-10-05 18:24:21 -04:00
Luke Parker
9cdca1d3d6 Use the newly stabilized div_ceil
Sets a msrv of 1.73.0.
2023-10-05 14:28:03 -04:00
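The typical call-site pattern (the surrounding names are hypothetical):

```rust
// Previously hand-rolled as (allocation + per_share - 1) / per_share;
// u64::div_ceil was stabilized in Rust 1.73.
fn key_shares(allocation: u64, allocation_per_key_share: u64) -> u64 {
  allocation.div_ceil(allocation_per_key_share)
}
```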
Luke Parker
4ee65ed243 Update nightly
Supersedes #387.
2023-10-03 01:34:15 -04:00
Luke Parker
aa59f53ead Correct the coordinator tests
They weren't updated with the past couple of commits.
2023-09-29 04:35:02 -04:00
Luke Parker
bd5491dfd5 Simplify Coordinator/Processors::send by accepting impl Into *Message 2023-09-29 04:19:59 -04:00
Luke Parker
0eff3d9453 Add Batch messages from processor, verify Batchs published on-chain
Renames Update to SignedBatch.

Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or introspect the TX.
2023-09-29 03:51:01 -04:00
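A minimal sketch of the equality check described above, assuming Blake2b as the hash and SCALE for a canonical byte representation (both are assumptions for illustration):

```rust
use blake2::{Blake2b512, Digest};
use parity_scale_codec::Encode;

// Hash the InInstructions so Batch equality can be checked without keeping
// the full Batch in node state.
fn in_instructions_hash(instructions: &[Vec<u8>]) -> Vec<u8> {
  let mut hasher = Blake2b512::new();
  // SCALE provides a canonical byte string to hash.
  hasher.update(instructions.encode());
  hasher.finalize().to_vec()
}

fn batches_match(ours: &[Vec<u8>], published: &[Vec<u8>]) -> bool {
  in_instructions_hash(ours) == in_instructions_hash(published)
}
```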
Luke Parker
0be567ff69 Remove a misplaced copy of a README which has been around for who knows how long 2023-09-29 00:33:14 -04:00
Luke Parker
83b3a5c31c Document how receiving a Processor message does indeed make its Tributary relevant 2023-09-28 20:09:17 -04:00
Luke Parker
4d1212ec65 Update the processor's tests re: Batch SignId key
The key used to be empty. As part of implementing support for multisig rotation
into the coordinator, it was easiest to set the key to the substrate key. While
the coordinator and full stack tests were updated, the processor tests weren't.
This does that.
2023-09-27 23:32:29 -04:00
Luke Parker
7e27315207 Attempt to reduce full-stack CI disk usage 2023-09-27 21:00:32 -04:00
Luke Parker
aa1faefe33 Correct Message Queue log statements now that queues are per from-to pairs 2023-09-27 21:00:07 -04:00
Luke Parker
7d738a3677 Start moving Coordinator to a multi-Tributary model
Prior, we only supported a single Tributary per network, and spawned a task to
handle Processor messages per Tributary. Now, we handle Processor messages per
network, yet we still only supported a single Tributary in that handling
function.

Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.

We do need work to check if we should have a Tributary, or if we're not
participating. We also need to check if a Tributary has been retired, meaning
we shouldn't handle any transactions related to them, and to clean up retired
Tributaries.
2023-09-27 20:49:02 -04:00
Luke Parker
4a32f22418 Use a proper transcript for Tributary scanner topics 2023-09-27 13:33:25 -04:00
Luke Parker
01a4b9e694 Remove unused_variables 2023-09-27 13:00:04 -04:00
Luke Parker
3b01d3039b Remove unused clippy lints from coordinator 2023-09-27 12:42:25 -04:00
Luke Parker
40b7bc59d0 Use dedicated Queues for each from-to pair
Prevents one Processor's message from halting the entire pipeline.
2023-09-27 12:20:57 -04:00
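A hedged sketch of per-(from, to) queues, with names and types assumed; one sender's backlog can no longer block messages between any other pair of services:

```rust
use std::collections::{HashMap, VecDeque};

type Service = &'static str;

struct MessageQueue {
  // One queue per (from, to) pair, rather than one per recipient.
  queues: HashMap<(Service, Service), VecDeque<Vec<u8>>>,
}

impl MessageQueue {
  fn queue(&mut self, from: Service, to: Service, msg: Vec<u8>) {
    self.queues.entry((from, to)).or_default().push_back(msg);
  }

  fn next(&mut self, from: Service, to: Service) -> Option<Vec<u8>> {
    // Draining one pair's queue never touches any other pair's messages.
    self.queues.get_mut(&(from, to))?.pop_front()
  }
}
```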
Luke Parker
269db1c4be Remove the "expected" next ID
It's an unnecessary extra layer better handled locally.
2023-09-27 11:13:55 -04:00
Luke Parker
90318d7214 Remove unnecessary TODO 2023-09-27 00:50:57 -04:00
Luke Parker
64d370ac11 Make publish_signed_transaction safe for out of order publications
This is a possibility under the new deterministic nonce scheme.

While there is a concern of us never creating a transaction for a given nonce,
blocking everything, we should always create transactions. We'll always publish
preprocesses, and while we'll only publish shares if everyone else does, we
only allocate for shares once everyone else does.
2023-09-27 00:44:31 -04:00
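A minimal sketch (names assumed) of making publication safe for out-of-order calls: buffer by nonce and only broadcast the next expected transaction:

```rust
use std::collections::HashMap;

struct Publisher {
  next_nonce: u32,
  // Signed transactions which arrived before their nonce was next.
  pending: HashMap<u32, Vec<u8>>,
}

impl Publisher {
  fn publish_signed_transaction(&mut self, nonce: u32, tx: Vec<u8>) {
    self.pending.insert(nonce, tx);
    // Publish every consecutively-nonced transaction now available.
    while let Some(tx) = self.pending.remove(&self.next_nonce) {
      broadcast(&tx);
      self.next_nonce += 1;
    }
  }
}

fn broadcast(_tx: &[u8]) { /* hand off to the P2P layer */ }
```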
Luke Parker
db8dc1e864 Spawn a task for Heartbeat responses, preventing it from holding up P2P handling 2023-09-27 00:10:37 -04:00
Luke Parker
086458d041 Txn for handling a processor message
A handle_processor_messages function was added to remove a very large block of
nested code.

MainDb was cleaned up so it's never instantiated.
2023-09-27 00:00:31 -04:00
Luke Parker
2e0f8138e2 Update the coordinator to not handle a processor message multiple times 2023-09-26 23:28:05 -04:00
Luke Parker
32a9a33226 Adjust sync test timeout to resolve infrequent failure
This isn't an unacceptable timeout. It matches a prior timeout. I'm unsure why
it's now needed to be extended though. My best guess is the test runtime is
single-threaded and there's now new overhead in the task management (or perhaps
higher latency now that per-Tributary messages are serialized).
2023-09-26 17:28:41 -04:00
Luke Parker
2508633de9 Add a next block notification system to Tributary
Also adds a loop missing from the prior commit.
2023-09-25 23:20:51 -04:00
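A hedged sketch of such a notification system, assuming a tokio broadcast channel (the actual channel and API may differ):

```rust
use tokio::sync::broadcast;

struct Blockchain {
  // Notifies subscribers of each newly added block's hash.
  block_notifications: broadcast::Sender<[u8; 32]>,
}

impl Blockchain {
  fn new() -> Self {
    let (block_notifications, _) = broadcast::channel(100);
    Blockchain { block_notifications }
  }

  fn next_block_notification(&self) -> broadcast::Receiver<[u8; 32]> {
    self.block_notifications.subscribe()
  }

  fn add_block(&mut self, hash: [u8; 32]) {
    // ... validate and append the block, then notify ...
    // An Err here solely means there are no listeners right now.
    let _ = self.block_notifications.send(hash);
  }
}
```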
Luke Parker
7312428a44 P2P task per Tributary, not per message 2023-09-25 22:58:40 -04:00
Luke Parker
e1801b57c9 Dedicated tasks per-Processor in coordinator
This isn't meaningful yet, as we still have serialized reading messages from
Processors, yet is a step closer.
2023-09-25 22:38:29 -04:00
Luke Parker
60491a091f Improve handling of tasks in coordinator, one per Tributary scanner 2023-09-25 20:33:14 -04:00
Luke Parker
9f3840d1cf Localize Tributary HashMaps, offering flexibility and removing contention 2023-09-25 19:28:53 -04:00
Luke Parker
7120bddc6f Move where we trigger DKGs for safety reasons 2023-09-25 18:27:16 -04:00
Luke Parker
77f7794452 Remove lazy_static for proper use of channels 2023-09-25 18:23:52 -04:00
Luke Parker
62a1a45135 Move where we create the readers to prevent only creating readers for present-at-boot tributaries
Also renames publish_transaction to publish_signed_transaction.
2023-09-25 18:07:29 -04:00
Luke Parker
0440e60645 Move heartbeat_tributaries from Tributary to TributaryReader
Reduces contention.
2023-09-25 17:15:38 -04:00
Luke Parker
4babf898d7 Implement deterministic nonces for Tributary transactions 2023-09-25 15:42:39 -04:00
Luke Parker
ca69f97fef Add support for multiple multisigs to the processor (#377)
* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolate object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).

The docs will be updated next, to reflect this.

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for blocks `n` and `n+1` may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig to prevent eclipse attacks,
and protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.

While support would still be beneficial to have if multiple `Batch`s could
occur within a block (the complexity here not being worth adding that as a ban
policy), multiple `Batch`s were blocked for DoS reasons.

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address,
causing the payments to be dropped, and the TX to become an effective
aggregation TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forwarded or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not do +2 blocks, yet rather +`time` (such as +10 minutes),
making this agnostic of the underlying network's block scheduling. That's a
complexity not currently worth it.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to prior commit.

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.

* Correctly detect and set retirement block

Modifies the retirement block from the first block meeting requirements to the
block CONFIRMATIONS after.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way triggering it.
2023-09-25 09:48:15 -04:00
Luke Parker
fe19e8246e cargo update
Updates past the yanked rustls-webpki, reduces tree size by one.
2023-09-24 08:31:13 -04:00
Luke Parker
c62d9b448f Use a Vec for the Monero generators, preventing its massive stack usage
The amount of stack usage did cause issues on M1 computers.
2023-09-20 04:31:16 -04:00
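A hedged illustration of the difference, with 32-byte arrays standing in for the actual curve points and the count assumed:

```rust
// Count assumed for illustration; the real code holds range-proof generators.
const GENERATORS: usize = 2 * 64 * 16;

fn stack_generators() -> Box<[[u8; 32]; GENERATORS]> {
  // Even with Box::new, the array is first materialized on the stack here
  // (~64 KiB in this sketch), which is what caused issues.
  Box::new([[0u8; 32]; GENERATORS])
}

fn heap_generators() -> Vec<[u8; 32]> {
  // Directly heap-allocated; no large stack frame at any point.
  vec![[0u8; 32]; GENERATORS]
}
```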
Luke Parker
98ab6acbd5 cargo update, removing 5 items from tree 2023-09-20 04:30:46 -04:00
Luke Parker
092f17932a Document requirement on rootless Docker 2023-09-19 12:59:04 -04:00
Luke Parker
e455332e01 Resolve #369 2023-09-19 12:55:30 -04:00
Luke Parker
a9468bf355 Pin CI from stable to 1.72.1
Enables better detection of regressions in Rust, a few of which 1.72.1 fixes.
2023-09-19 11:43:21 -04:00
Luke Parker
3d464c4736 apt update before install in workflow 2023-09-15 14:54:55 -04:00
Luke Parker
142552f024 Correct workflows with missing toolchain annotations 2023-09-15 14:36:47 -04:00
Luke Parker
e3a7ee4927 Pin to exact GH actions, preventing ACE in CI
Also updates actions.
2023-09-15 14:30:18 -04:00
Luke Parker
9eaaa7d2e8 Document H1's mismatch between the FROST preprint and IETF draft 2023-09-15 14:16:15 -04:00
Luke Parker
8adef705c3 Update wasmtime due to CVE-2023-41880 2023-09-15 14:06:39 -04:00
Luke Parker
3fd6d45b3e Use base58-monero 2, removing a git dependency 2023-09-15 13:59:29 -04:00
Luke Parker
d263413e36 Fixes for schnorrkel/dalek updates 2023-09-12 10:02:20 -04:00
Luke Parker
6f8a5d0ede Sane char_le_bits 2023-09-12 09:37:48 -04:00
Luke Parker
24bdd7ed9b Bump dalek-ff-group version
Prior commit fixed random, which could generate points outside of the prime
subgroup.
2023-09-12 09:00:42 -04:00
Luke Parker
aa724c06bc Start relying on curve25519-dalek's group feature
Removes git dependency for schnorrkel as well, now that schnorrkel has updated.
2023-09-12 08:56:30 -04:00
Luke Parker
1e6655408e cargo update
Bites the bullet on ethers 2.0.9 (now 2.0.10).
2023-09-12 07:47:03 -04:00
Luke Parker
9ab43407e1 Ignore NewSet events for Serai in the coordinator
The coordinator has nothing to do in this case.
2023-09-08 09:55:19 -04:00
Luke Parker
7ac0de3a8d Correct binding properties of Bitcoin eventuality
Eventualities need to be binding not just to a plan, yet to the execution of
the plan (the outputs). Bitcoin's Eventuality definition short-cutted this
under an honest multisig assumption, causing the following issue:

If multisig n+1 is verifying multisig n's actions, as detailed in
the multisig rotation document, it'll check no outstanding
eventualities exist. If we solely bind to the plan, a malicious multisig n
could steal outbound payments yet cause the plan to be marked as successfully
completed.

By modifying the eventuality to also include the expected outputs, this is no
longer possible. Binding to the expected input is preserved in order to remain
binding to the plan (allowing two plans with the same output-set to co-exist).
2023-09-08 05:21:18 -04:00
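A hedged sketch of the strengthened binding, with simplified stand-in types: the Eventuality commits to the expected outputs, not solely the input identifying the plan:

```rust
struct Eventuality {
  // Binding to the expected input keeps two plans with the same output-set
  // distinct.
  expected_input: [u8; 32],
  // Binding to the expected (script, amount) outputs ensures a malicious
  // multisig can't redirect payments yet still have the plan marked complete.
  expected_outputs: Vec<(Vec<u8>, u64)>,
}

fn completes(ev: &Eventuality, tx_input: &[u8; 32], tx_outputs: &[(Vec<u8>, u64)]) -> bool {
  (&ev.expected_input == tx_input) && (ev.expected_outputs == tx_outputs)
}
```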
Luke Parker
06a6cd29b0 Set nodelay on coordinator's P2P sockets 2023-09-06 22:57:33 -04:00
Luke Parker
2472ec7ba8 Don't attempt parsing truncated InInstructions 2023-09-02 17:18:04 -04:00
Luke Parker
69c3fad7ce cargo fmt 2023-09-02 16:32:42 -04:00
Luke Parker
bd9a05feef Document UTXO solvency modeling 2023-09-02 16:11:01 -04:00
Luke Parker
7d8e08d5b4 Use scale instead of bincode throughout processor-messages/processor DB
scale is canonical, bincode is not.
2023-09-02 07:54:09 -04:00
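A minimal sketch of SCALE's canonical encoding via parity-scale-codec (the message type here is hypothetical, and the derives require the crate's "derive" feature):

```rust
use parity_scale_codec::{Encode, Decode};

// Hypothetical message type for illustration only.
#[derive(Debug, PartialEq, Encode, Decode)]
struct ExampleMessage {
  id: u32,
  block: [u8; 32],
}

fn main() {
  let msg = ExampleMessage { id: 5, block: [0; 32] };
  // SCALE has a single canonical encoding per value, so these bytes are safe
  // to hash or compare, unlike bincode's configurable encodings.
  let bytes = msg.encode();
  let decoded = ExampleMessage::decode(&mut bytes.as_slice()).unwrap();
  assert_eq!(msg, decoded);
}
```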
Luke Parker
f7e49e1f90 Update Rust nightly
Supersedes #368.

Adds exceptions for unwrap_or_default due to preference against Default's
ambiguity.
2023-09-02 01:24:09 -04:00
Luke Parker
cd4c3a6c88 Correct publication of Completed Tributary TXs 2023-09-02 00:50:54 -04:00
Luke Parker
fddc605c65 Define a proper Topic (Zone + ID)
Removes the inefficiency of recognized_topic for attempt returning None if the
topic isn't recognized.
2023-09-01 01:21:15 -04:00
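A hedged sketch of the shape described, with the Zone variants assumed for illustration:

```rust
// A Topic is a Zone plus an ID, so attempts are keyed directly rather than
// going through a fallible recognized_topic lookup.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Zone {
  Dkg,
  Batch,
  Sign,
}

#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct Topic {
  zone: Zone,
  id: [u8; 32],
}
```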
Luke Parker
2ad6b38be9 Prefix root keys in coordinator with "coordinator" to prevent conflicts with tributary 2023-09-01 01:00:24 -04:00
Luke Parker
fda90e23c9 Reduce and clarify data flow in Tributary scanner 2023-09-01 00:59:10 -04:00
Luke Parker
3f3f6b2d0c Properly route attempt around DkgConfirmer 2023-09-01 00:16:43 -04:00
Luke Parker
fa8ff62b09 Remove sender_i from DkgShares
It was a piece of duplicated data used to achieve context-less
(de)serialization. This new Vec code is a bit trickier to read at first, yet is
overall cleaner and removes a potential fault.

Saves 2 bytes from DkgShares messages.
2023-09-01 00:03:56 -04:00
Luke Parker
5113ab9ec4 Move SignCompleted to Unsigned to cause de-duplication amongst honest validators 2023-08-31 23:39:36 -04:00
Luke Parker
9b7cb688ed Have Batch contain Block and batch ID, ensuring eclipsed validators don't publish invalid shares
See prior commit message for more info.

With the plan for the batch sign ID to be just 5 bytes (potentially 4), this
does incur a +5-byte cost compared to the ExternalBlock system *even in the
standard case*. The simplicity remains preferred at this time.
2023-08-31 23:04:39 -04:00
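A hedged sketch of a Batch carrying both its block and ID (field types assumed), giving signed shares cryptographic synchrony over the batch's contents:

```rust
// Field types are assumptions for illustration.
struct Batch {
  network: u16,               // which external network this batch is for
  id: u32,                    // monotonic batch ID
  block: [u8; 32],            // hash of the external block scanned
  instructions: Vec<Vec<u8>>, // serialized InInstructions
}
```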
Luke Parker
9a5f8fc5dd Replace ExternalBlock with Batch
The initial TODO was simply to use one ExternalBlock for all batches in the
block. This would require publishing ExternalBlock after the last batch,
requiring knowing the last batch. While we could add such a pipeline, it'd
require:

1) Initial preprocesses using a distinct message from BatchPreprocess
2) An additional message sent after all BatchPreprocess are sent

Unfortunately, both would require tweaks to the SubstrateSigner which aren't
worth the complexity compared to the solution here, at least, not at this time.

While this will cause a Tributary signing a block whose total batch data
exceeds 25 kB to use multiple transactions, which could be optimized out by
'better' local data pipelining, that's an extreme edge case. Given the temporal
nature of each Tributary, it's also an acceptable edge.

This no longer achieves synchrony over external blocks accordingly. While
signed batches have synchrony, as they embed their block hash, batches being
signed don't have cryptographic synchrony on their contents. This means
validators who are eclipsed may produce invalid shares, as they sign a
different batch. This will be introduced in a follow-up commit.
2023-08-31 23:00:25 -04:00
Luke Parker
2dc35193c9 Handle batch n+1 being signed before batch n is 2023-08-31 22:09:34 -04:00
Luke Parker
9bf24480f4 Spawn an async test per P2P message to try and resolve latency issues 2023-08-31 02:35:50 -04:00
Luke Parker
3af9dc5d6f Tweak Heartbeat configuration so LibP2P can be expected to deliver messages within latency window 2023-08-31 01:33:52 -04:00
Luke Parker
148bc380fe Add assert on edge case requiring a validator with 34% and a broken invariant 2023-08-31 01:08:40 -04:00
Luke Parker
8dad62f300 Fix panic when post-verifying Precommits in log
Notes an edge case which enables invalid commit production.
2023-08-31 01:02:57 -04:00
Luke Parker
1e79de87e8 Remove contention between LibP2p spawned task and consumers via channels 2023-08-30 23:31:09 -04:00
Luke Parker
2f57a69cb6 Define BLOCK_PROCESSING_TIME, LATENCY_TIME in ms
Updates Tributary values to allow 999ms for block processing (from 2s) and
1667ms for latency (up from 1s).

The intent is to resolve #365. I don't know if this will, but it increases the
chances of success and these values should be fine in prod since Tributary is a
post-execution chain (making block processing time minimal).

Does embed the dagger of N::block_time() panicking if the block time in ms
doesn't cleanly divide by 1000.
2023-08-30 22:58:42 -04:00
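A hedged sketch of the millisecond-denominated constants and the noted dagger; the exact composition into a block time is an assumption here:

```rust
const BLOCK_PROCESSING_TIME: u32 = 999; // ms, down from 2s
const LATENCY_TIME: u32 = 1667; // ms, up from 1s

fn block_time() -> u32 {
  // Assumed composition: processing plus three message trips,
  // 999 + (3 * 1667) = 6000 ms.
  let ms = BLOCK_PROCESSING_TIME + (3 * LATENCY_TIME);
  // The dagger: panic if the total doesn't divide into whole seconds.
  assert_eq!(ms % 1000, 0, "block time in ms doesn't cleanly divide by 1000");
  ms / 1000
}
```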
Luke Parker
493a222421 Use a timeout in case the JSON-RPC notifications have unexpected behavior 2023-08-30 17:57:33 -04:00
Luke Parker
d5a19eca8c Add a notification system for finalizations to serai-client, use in coordinator 2023-08-30 17:25:04 -04:00
Luke Parker
e9fca37181 Intent based de-duplication in MessageQueue 2023-08-29 17:05:01 -04:00
Luke Parker
83c25eff03 Remove no longer necessary async from monero SignatableTransaction::sign 2023-08-29 16:20:21 -04:00
Luke Parker
285422f71a Add a full-stack mint and burn test for Bitcoin and Monero
Fixes where ram_scanned is updated in processor. The prior version, while safe,
would redo massive amounts of work during periods of inactivity. It also hit an
undocumented invariant where get_eventuality_completions assumes new blocks,
yet redone work wouldn't have new blocks.

Modifies Monero's generate_blocks to return the hashes of the generated blocks.
2023-08-28 21:17:22 -04:00
Luke Parker
1838c37ecf Full stack test framework 2023-08-27 18:37:12 -04:00
Luke Parker
a3649b2062 Move where we trigger KeyGen to avoid a race condition
We only expect processor messages when we have the relevant Tributary. We
queued Tributary creation, yet then kicked off processor messages. We need to
wait until the Tributary is actually created to kick off processor messages.
2023-08-27 18:23:45 -04:00
Luke Parker
1bd14163a0 Log all output to console when running in CI
Attempts to debug https://github.com/serai-dex/serai/actions/runs/5992819682/job/16252622433,
which seems to be a legitimate test failure.
2023-08-27 17:51:36 -04:00
Luke Parker
89a6ee9290 Silence warning when building in release 2023-08-27 15:39:09 -04:00
Luke Parker
a66994aade Use FCMP implementation of BP+ in monero-serai (#344)
* Add in an implementation of BP+ based off the paper, intended for clarity and review

This was done as part of my work on FCMPs from Monero, and is copied from https://github.com/kayabaNerve/full-chain-membership-proofs

* Remove crate structure of BP+

* Remove arithmetic circuit code

* Remove AC/VC generators code

* Remove generator transcript

Monero uses non-transcripted static generators.

* Further trimming of generators

* Remove the single range proof

It's unused by Monero and accordingly unhelpful.

* Work on getting BP+ to compile in its new env

* Correct BP+ folder name

* Further tweaks to get closer to compiling

* Remove the ScalarMatrix file

It's only used for AC proofs

* Compiles, with tests passing

* Lock BP+ to Ed25519 instead of the generic Ciphersuite

* Resolve most warnings in BP+

* Make existing bulletproofs test easier to read

* Further strip generators

* Swap G/H as Monero did

* Replace RangeCommitment with Commitment

* Hard-code BP+ h to Ed25519's generator

* Use pub(crate) for BP+, not pub

* Replace initial_transcript with hash_plus

* Rename hash_plus to initial_transcript

* Finish integrating the FCMP BP+ impl

* Move BP+ folder

* Correct no-std support

* Rename "long_n" to eta

* Add note on non-prime order dfg points
2023-08-27 15:33:17 -04:00
Luke Parker
34ffd2fa76 Run coordinator e2e tests one at a time due to usage of mdns causing cross-talk 2023-08-27 05:40:36 -04:00
Luke Parker
775353f8cd Add testing for Completed to coordinator e2e tests
Two commits ago stubbed in support, sufficient for an honest-validator
environment.
2023-08-27 05:34:04 -04:00
Luke Parker
c9b2490ab9 Tweak tributary_test to handle a one-block variance
Prior to the previous commit, whatever async scheduling occurred caused them to
all have the same tip. Now, some are one block ahead of others. This adds
tolerance for that, as it's an acceptable variance, so long as it's solely one
block.
2023-08-27 05:28:36 -04:00
Luke Parker
2db53d5434 Use &self for handle_message and sync_block in Tributary
They used &mut self to prevent execution at the same time. This uses a lock
over the channel to achieve the same security, without requiring a lock over
the entire tributary.

This fixes post-provided Provided transactions. sync_block waited for the TX to
be provided, yet it never would as sync_block held a mutable reference over the
entire Tributary, preventing any other read/write operations of any scope.

A timeout which was increased (bc2f23f72b) due to this bug
not being identified has now been decreased back, thankfully.

Also shims in basic support for Completed, which was the WIP before this bug
was identified.
2023-08-27 05:07:11 -04:00
Luke Parker
6268bbd7c8 Replace "connected" with "external" in Processor doc 2023-08-27 01:57:17 -04:00
Luke Parker
bc2f23f72b Further increase GH timeout in response to https://github.com/serai-dex/serai/actions/runs/5988202109/job/16243154650
This should be egregious unless the GitHub CI is so non-performant it's breaking
Tendermint consensus's synchrony expectations, which likely points to our own
code being unviable.

This solely serves as an immediate fix to the problem, not a justification of
the unevaluated performance.
2023-08-27 01:26:42 -04:00
Luke Parker
72337b17f5 Test the Sign flow across the Coordinators 2023-08-26 21:45:53 -04:00
Luke Parker
36b193992f Correct handling of known signers in batch test
Putting a signer first doesn't work because signers can only publish once a
supermajority has synced. Now, the code uses an excluded signer (instead of an
included signer) to determine the signing set.

Further simplifications are available. Also adds accurate documentation on
latency/sleep reasoning.
2023-08-26 21:38:58 -04:00
Luke Parker
37b6de9c0c Add SRI balance/transfer functions to serai_client 2023-08-26 21:37:54 -04:00
Luke Parker
22f3c9e58f Stop trying to publish a Batch if another node does 2023-08-26 21:37:21 -04:00
Luke Parker
9adefa4c2c Add code to handle a race condition around first_preprocess 2023-08-26 21:35:43 -04:00
Luke Parker
c245bcdc9b Extend Batch test with SubstrateBlock/arb Batch signing 2023-08-25 21:37:36 -04:00
Luke Parker
f249e20028 Route KeyPair so Tributary can construct SignId as needed 2023-08-25 18:37:22 -04:00
Luke Parker
3745f8b6af Increase GitHub CI timeouts for the coordinator tests 2023-08-25 14:09:44 -04:00
Luke Parker
ba46aa76f0 Update libp2p
Adds 17 new crates, which I'm extremely unhappy about. Unfortunately, it's
needed to resolve a security issue (RUSTSEC-2023-0052) and is inevitable.

Closes #355.
2023-08-25 00:01:58 -04:00
Luke Parker
8c1d8a2658 Only emit Preprocesses/Shares when participating 2023-08-24 23:50:19 -04:00
Luke Parker
2702384c70 Coordinator Batch signing test 2023-08-24 23:48:50 -04:00
Luke Parker
d3093c92dc Uncontroversial cargo update 2023-08-24 22:06:08 -04:00
Luke Parker
32df302cc4 Move recognized_id from a channel to an async lambda
Fixes a race condition. Also fixes recognizing batch IDs.
2023-08-24 21:55:59 -04:00
Luke Parker
ea8e26eca3 Use an empty key for Batch's SignId 2023-08-24 20:39:34 -04:00
Luke Parker
bccdabb53d Use a single Substrate signer, per intentions in #227
Removes key from Update as well, since it's no longer variable.
2023-08-24 20:30:50 -04:00
Luke Parker
b91bd44476 Support multiple batches per block by the coordinator
Also corrects an assumption that block hash == batch ID.
2023-08-24 19:13:18 -04:00
Luke Parker
dc2656a538 Don't bind to the entire batch, solely the network and ID
This avoids needing to know the Batch in advance, avoiding a race condition.
2023-08-24 18:52:33 -04:00
Luke Parker
67109c648c Use an actual, cryptographically-binding ID for batches in SignId
The intent system expected one.
2023-08-24 18:44:09 -04:00
Luke Parker
61418b4e9f Update Update and substrate_signers to [u8; 32] from Vec<u8>
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed length, fitting within
the Copy boundary, there's no reason not to use an array.
2023-08-24 13:24:56 -04:00
Luke Parker
506ded205a Misc cargo update
Removes 5 crates (presumably duplicated versions).
2023-08-23 13:24:21 -04:00
Luke Parker
9801f9f753 Update to latest Serai substrate (removes statement store) 2023-08-23 13:21:17 -04:00
Luke Parker
8a4ef3ddae jsonrpsee 0.16.3
Removes 3 crates from tree. Now RUSTSEC-2023-0053 is only held up by a lack of
libp2p-websocket release (https://github.com/libp2p/rust-libp2p/pull/4378).
2023-08-23 13:18:51 -04:00
Luke Parker
bfb5401336 Handle v1 Monero TXs spending v2 outputs
Also tightens reads and fixes a few potential panics.
2023-08-22 23:26:59 -04:00
Luke Parker
f2872a2e07 Silence RUSTSEC tracked by https://github.com/serai-dex/serai/355
Since it isn't resolvable immediately, it's better to track it than effectively
neuter the deny job (as it would always error).
2023-08-22 22:15:13 -04:00
Luke Parker
c739f00d0b cargo update
1) Updates to time 0.3.27, which allows modern serde_derive's (post binary
fiasco)
2) Updates rustls-webpki to a version not affected by RUSTSEC-2023-0053
3) Updates wasmtime to 12
(d5923df083)
2023-08-22 22:06:06 -04:00
Luke Parker
310a09b5a4 clippy fix 2023-08-22 01:00:18 -04:00
Luke Parker
c65bb70741 Remove SlashVote per #349 2023-08-21 23:48:33 -04:00
Luke Parker
718c44d50e cargo update
One update replaces one package with another.

The Substrate pulls get ed25519-zebra (still in tree) to dalek 4.0. While noise
still pulls in 3.2, for now, this does let us drop one crate.
2023-08-21 22:58:45 -04:00
Luke Parker
1f45c2c6b5 cargo fmt 2023-08-21 10:40:10 -04:00
Luke Parker
76a30fd572 Support no-std builds of bitcoin-serai
Arguably not meaningful, as it adds the scanner yet not the RPC, and no signing
code since modular-frost doesn't support no-std yet. It's a step in the right
direction though.
2023-08-21 08:56:37 -04:00
Luke Parker
a52c86ad81 Don't label Monero nodes invalid for returning invalid keys in outputs
Only recently (I believe the most recent HF) were output keys checked to be
valid. This means returned keys may be invalid points, despite being the
legitimate keys for the specified outputs.

Does still label the node as invalid if it doesn't return 32 bytes,
hex-encoded.
2023-08-21 03:07:20 -04:00
Luke Parker
108504d6e2 Increase MAX_ADDRESS_LEN
In response to
https://gist.github.com/tevador/50160d160d24cfc6c52ae02eb3d17024?permalink_comment_id=4665372#gistcomment-4665372
2023-08-21 02:56:10 -04:00
Luke Parker
27cd2ee2bb cargo fmt 2023-08-21 02:38:27 -04:00
Luke Parker
dc88b29b92 Add keep-alive timeout to coordinator
The Heartbeat was meant to serve for this, yet no Heartbeats are fired when we
don't have active tributaries.

libp2p does offer an explicit KeepAlive protocol, yet it's not recommended in
prod. While this likely has the same pitfalls as LibP2p's KeepAlive protocol,
it's at least tailored to our timing.
2023-08-21 02:36:03 -04:00
Luke Parker
45ea805620 Use an optimistic (and twice as long in the worst case) sleep for KeyGen
Possible due to Substrate having an RPC.
2023-08-21 02:31:22 -04:00
Luke Parker
1e68cff6dc Bump bitcoin-serai to 0.3.0 for publication 2023-08-21 01:26:54 -04:00
Luke Parker
906d3b9a7c Merge pull request #348 from serai-dex/current-crypto-crates
Current crypto crates
2023-08-21 01:24:16 -04:00
akildemir
e319762c69 use half-aggregation for tm messages (#346)
* dalek 4.0

* cargo update

Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.

* use half-aggregation for tm messages

* fmt

* fix pr comments

* cargo update

Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given the political agenda they
seem to be trying to accomplish by force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation

* add Rs() api to SchnorrAggregate

* Correct serai-processor-tests to dalek 4

* fmt + deny

* Slash malevolent validators  (#294)

* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap the places of an unsigned TX and a signed TX with a nonce
of 0 (leading to a faulty block).

The extra protection added here sorts signed, then concats.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant more robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Add DSTs to Tributary TX sig_hash functions

Prevents conflicts with other systems/other parts of the Tributary.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 01:22:00 -04:00
Luke Parker
f161135119 Add coins/bitcoin audit by Cypher Stack 2023-08-21 01:20:09 -04:00
Luke Parker
d2a0ff13f2 Merge branch 'bitcoin-audit' into develop 2023-08-21 01:16:50 -04:00
Luke Parker
498aa45619 Ban only versions of serde with binary blobs
Does downgrade time from 0.3.26 to 0.3.25 due to time banning > 1.0.171.
Hopefully that's also relaxed soon...
2023-08-21 00:57:22 -04:00
akildemir
39ce819876 Slash malevolent validators (#294)
* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap the places of an unsigned TX and a signed TX with a nonce
of 0 (leading to a faulty block).

The extra protection added here sorts signed, then concats.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant more robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 00:28:23 -04:00
Luke Parker
8973eb8ac4 fmt + deny 2023-08-20 00:14:53 -04:00
Luke Parker
34397b31b1 Correct serai-processor-tests to dalek 4 2023-08-19 16:34:27 -04:00
Luke Parker
96583da3b9 cargo update
Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given the political agenda they
seem to be trying to accomplish by force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation
2023-08-19 01:50:38 -04:00
Luke Parker
34c6974311 Merge branch 'dalek-4.0' into develop 2023-08-17 02:00:36 -04:00
Luke Parker
5a4db3efad cargo update
Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.
2023-08-17 01:25:50 -04:00
Luke Parker
28399b0310 cargo update 2023-08-15 05:39:23 -04:00
Luke Parker
426e89d6fb Extend sleeps of coordinator CI tests
The CI does now get past the first check to the second one, hence the addition
of similar sleeps to the second and subsequent checks.
2023-08-15 05:31:06 -04:00
Luke Parker
4850376664 Try to fix coordinator CI by sleeping longer when in CI 2023-08-14 16:02:14 -04:00
Luke Parker
f366d65d4b Add RUST_BACKTRACE=1 to .github
Should help resolve the coordinator tests, which are failing in the current CI.
Presumably a timeout which is too low for the CI's slower runners.
2023-08-14 11:59:26 -04:00
akildemir
e680eabb62 Improve batch handling (#316)
* restrict batch size to ~25kb (see the sketch after this entry)

* add batch size check to node

* rate limit batches to 1 per serai block

* add support for multiple batches per block

* fix review comments

* Misc fixes

Doesn't yet update tests/processor until data flow is inspected.

* Move the block from SignId to ProcessorMessage::BatchPreprocesses

* Misc clean up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-14 11:57:38 -04:00
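A hedged sketch of the ~25 kB size restriction from the PR above (constant name and instruction representation assumed):

```rust
// Assumed constant; the PR describes the limit as roughly 25 kB.
const MAX_BATCH_SIZE: usize = 25_000;

// Check whether another serialized instruction fits in the current batch.
fn fits_in_batch(batch: &[Vec<u8>], next_instruction: &[u8]) -> bool {
  let used: usize = batch.iter().map(Vec::len).sum();
  used + next_instruction.len() <= MAX_BATCH_SIZE
}
```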
Luke Parker
a3441a6871 Don't have publish return the 'hash' 2023-08-14 08:18:19 -04:00
Luke Parker
9895ec0f41 Add note to processor 2023-08-14 06:54:34 -04:00
Luke Parker
666bb3e96b E2E test coordinator KeyGen 2023-08-14 06:54:17 -04:00
Luke Parker
acc19e2817 Stop attempting to call set_keys if another validator does
This prevents this function from hanging ad infinitum.
2023-08-14 06:53:23 -04:00
Luke Parker
5e02f936e4 Perform MuSig signing of generated keys 2023-08-14 06:08:55 -04:00
Luke Parker
6b41c91dc2 Correct TendermintMachine sleep on construction
For logging purposes, I added code to handle negative time till start. I forgot
to only sleep on positive time till start.

Should fix the recent CI failure.
2023-08-13 05:57:14 -04:00
Luke Parker
8992504e69 Correct Dockerfile with typo 2023-08-13 05:11:49 -04:00
Luke Parker
e2901cab06 Revert round-advance on TendermintMachine::new if local clock is ahead of block start
It was improperly implemented, as it assumed rounds had a constant time
interval, which they do not. It also is against the spec and was meant to
absolve us of issues with poor performance when post-starting blockchains. The
new, and much more proper, workaround for the latter is a 120-second delay
between the Substrate time and the Tributary start time.
2023-08-13 04:35:46 -04:00
Luke Parker
13a8b0afc1 Add panic-handlers which exit on any panic
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
2023-08-13 04:30:49 -04:00
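A minimal sketch of such a handler using std's panic hooks, so a panic in any tokio worker exits the whole process:

```rust
fn set_exit_on_panic() {
  let default_hook = std::panic::take_hook();
  std::panic::set_hook(Box::new(move |panic_info| {
    // Preserve the default behavior (printing the message/backtrace)...
    default_hook(panic_info);
    // ...then exit the entire process, not just the panicking task.
    std::process::exit(1);
  }));
}
```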
Luke Parker
7e71450dc4 Bug fixes and log statements
Also shims next nonce code with a fine-for-now piece of code which is unviable
in production, yet should survive testnet.
2023-08-13 04:03:59 -04:00
Luke Parker
049fefb5fd Move apt calls in Docker to enable caching 2023-08-12 22:51:12 -04:00
Luke Parker
6a89cfcd08 Update to further-pruned substrate 2023-08-09 05:00:44 -04:00
Luke Parker
4571a8ad85 cargo update 2023-08-08 20:24:23 -04:00
Luke Parker
96db784b10 Increase the reproducible-runtime timeout as it was getting hit in CI 2023-08-08 18:36:31 -04:00
Luke Parker
dd523b22c2 Correct transcript minimum version requirements 2023-08-08 18:32:13 -04:00
Luke Parker
fa406c507f Update crypto/ package versions
On a branch while bitcoin-serai wraps up its audit.
2023-08-08 18:19:01 -04:00
Luke Parker
ad51c123e3 Fix Tendermint bug from prior commit 2023-08-08 16:33:09 -04:00
Luke Parker
f6f945e747 Add a LibP2P instantiation to coordinator
It's largely unoptimized, and not yet exclusive to validators, yet has basic
sanity (using message content for ID instead of sender + index).

Fixes bugs as found. Notably, we used a time in milliseconds where the
Tributary expected seconds.

Also has Tributary::new jump to the presumed round number. This reduces slashes
when starting new chains (whose times will be before the current time) and was
the only way I was able to observe successful confirmations given current
surrounding infrastructure.
2023-08-08 15:12:47 -04:00
Luke Parker
0dd8aed134 Expand cluster-sm/local testnet to 4 validators for BFT where f=1 2023-08-06 13:42:16 -04:00
Luke Parker
cee788eac3 Test the Coordinator emits KeyGen
Mainly just a test that the full stack is properly set up and we've hit basic
functioning for further testing.
2023-08-06 12:38:44 -04:00
Luke Parker
bebe2fae0e cargo update due to substrate changes, again 2023-08-06 10:11:44 -04:00
Luke Parker
adb48cd737 Use 1.71.1 for reproducible runtime
1.71.1 fixes a CVE which wasn't at risk here yet still should be resolved.
2023-08-06 10:11:01 -04:00
Luke Parker
363a88c4ec Again update after latest series of substrate prunes 2023-08-06 08:04:56 -04:00
Luke Parker
5ba1dd2524 cargo update 2023-08-06 02:36:16 -04:00
Luke Parker
784c7b47a4 Add Immunefi to README 2023-08-04 12:28:15 -04:00
Luke Parker
376b36974f Stub binaries' code when features binaries is not set
Allows running `cargo build` in monero-serai and message-queue without
erroring, since it'd automatically try to build the binaries which require
additional features.

While we could make those features not optional, it'd increase time to build
and disk space required, which is why the features exist for monero-serai and
message-queue in the first place (since both are frequently used as libs).
2023-08-02 14:43:49 -04:00
Luke Parker
38ad1d4bc4 Add msrv definitions to common and crypto
This will effectively add msrv protections to the entire project as almost
everything grabs from these.

Doesn't add msrv to coins as coins/bitcoin is still frozen.

Doesn't add msrv to services since cargo msrv doesn't play nice with anything
importing the runtime.
2023-08-02 14:17:57 -04:00
Luke Parker
aab8a417db Have the Coordinator scan the Substrate genesis block
Also adds a workflow for running tests/coordinator.
2023-08-02 12:18:50 -04:00
Luke Parker
d5c787fea2 Add initial coordinator e2e tests 2023-08-01 19:00:48 -04:00
Luke Parker
e3a70ef0dc tests/processor clippy 2023-08-01 05:33:08 -04:00
Luke Parker
044b299cda cargo +nightly fmt (again) 2023-08-01 02:51:58 -04:00
Luke Parker
53d86e2a29 Latest clippy 2023-08-01 02:49:31 -04:00
Luke Parker
c338b92067 Update to latest nightly Rust 2023-08-01 00:48:11 -04:00
Luke Parker
3c38a0ec11 cargo +nightly fmt 2023-08-01 00:47:36 -04:00
Luke Parker
88f88b574c Sleep for up to a minute when creating a Coordinator if the network RPC has yet to boot
We've had a pair of CI failures due to calling add_block and being unable to
form an RPC connection. This attempts to fix that.
2023-07-31 00:13:58 -04:00
Luke Parker
9f143a9742 Replace "coin" with "network"
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.

Also corrects dated documentation on how a key pair is confirmed under the
validator-sets pallet.
2023-07-30 16:11:30 -04:00
Luke Parker
2815046b21 Save the scheduler to disk
This is a horrible impl which does a full ser of everything on every change.
It's just the minimal changes to resolve this TODO and enable testnet deployment.
2023-07-30 14:16:33 -04:00
Luke Parker
d8033504c8 Restore ports over expose so docker compose can be used for tests
Reduces security when considering a production deployment.
2023-07-30 13:37:10 -04:00
Luke Parker
48ad5fe20b Add icon as a svg 2023-07-30 12:51:17 -04:00
Luke Parker
4c801df4f2 Don't run apps in Docker as root 2023-07-30 08:04:36 -04:00
Luke Parker
9b79c4dc0c Test Eventuality completion via CoordinatorMessage::Completed 2023-07-30 07:00:54 -04:00
Luke Parker
e010d66c5d Only emit SignTransaction once
Due to the ordered message-queue, there's no benefit to multiple emissions as
there's no risk a completion will be missed. If it has yet to be read, sending
another which will only be read after it isn't helpful.

Simplifies code a decent bit.
2023-07-30 06:44:55 -04:00
Luke Parker
3d91fd88a3 Drastically increase placeholder fee for Monero
Needed for Docker tests.
2023-07-29 20:29:45 -04:00
Luke Parker
f988c43f8d Extend send_test with TX signing
Monero fails with fee_too_low, which this commit is meant to document.
2023-07-29 08:34:08 -04:00
Luke Parker
857e3ea72b cargo update 2023-07-29 07:11:53 -04:00
Luke Parker
8b14bb54bb Correct the fee amortization algorithm for an edge case
This is technically over-aggressive, as a dropped output will reduce the fee,
yet this edge case is so minor that a flow which isn't over-aggressive (over a
few fractions of a cent) is by no means worth it.

Fixes the crash causable by the WIP send_test.
2023-07-29 07:05:55 -04:00
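
A minimal sketch of the amortization under a hypothetical DUST bound. The fee isn't recomputed after an output is dropped, which is the over-aggressive (by fractions of a cent) behavior accepted above:

  // Hypothetical dust bound, in atomic units.
  const DUST: u64 = 10_000;

  fn amortize_fee(payments: &mut Vec<u64>, fee: u64) {
    if payments.is_empty() {
      return;
    }
    // Round the per-output share up so the entire fee is always covered.
    let share = fee.div_ceil(payments.len() as u64);
    // Deduct each output's share, dropping outputs which fall below dust.
    payments.retain_mut(|amount| {
      *amount = amount.saturating_sub(share);
      *amount >= DUST
    });
  }
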
Luke Parker
f78332453b Start work on a send_test
Stops work where it does due to the processor panicking for Monero, yet not
Bitcoin, under what's present.

Cleans up processor tests to consolidate shared code.
2023-07-29 04:26:24 -04:00
Luke Parker
22da7aedde Implement a pretty Debug for various objects 2023-07-29 04:12:10 -04:00
Luke Parker
2641b83b3e Use resolver 2 2023-07-27 23:42:24 -04:00
Luke Parker
4980e6b704 cargo update 2023-07-27 23:38:04 -04:00
Luke Parker
c091b86919 Correct reproducible-runtime deny definition 2023-07-27 22:25:05 -04:00
Luke Parker
a8c7bb96c8 Add a crate to test the runtime can be reproducibly built 2023-07-27 21:42:26 -04:00
Luke Parker
09a95c9bd2 Rename deploy to orchestration
Also updates README to note prior unnoted folders.
2023-07-27 03:19:35 -04:00
Luke Parker
101da0a641 Use a BatchVerifier in reserialize_chain 2023-07-27 03:05:39 -04:00
Luke Parker
6d5851a9ee Use lz4 instead of zstd for the DB
zstd was recommended for the base layer only, due to its CPU requirements. That
was a misreading on my behalf.

lz4 gets ~5% better compression than snappy with ~30% faster performance. zstd
does ~25% better than lz4 yet at ~30% of the performance.
2023-07-26 14:05:10 -04:00
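
The swap itself is one option on the DB, sketched here with the rust-rocksdb crate:

  use rocksdb::{DBCompressionType, Options, DB};

  fn open_db(path: &str) -> Result<DB, rocksdb::Error> {
    let mut options = Options::default();
    options.create_if_missing(true);
    // lz4 over snappy for the better ratio at comparable speed; zstd's extra
    // ratio isn't worth its CPU cost here.
    options.set_compression_type(DBCompressionType::Lz4);
    DB::open(&options, path)
  }
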
Luke Parker
64c309f8db Test Batches with Instructions 2023-07-26 14:02:17 -04:00
Luke Parker
e00aa3031c Have Shorthand::Raw contain RefundableInInstruction, not an encoded RII 2023-07-26 14:00:30 -04:00
Luke Parker
f8afb040dc Remove ApplicationCall
We can simply inline `Dex` into the InInstruction enum.
2023-07-26 12:45:51 -04:00
Luke Parker
7823ece4fe Test multiple batches, re-attempts, randomized selected signers 2023-07-26 05:55:47 -04:00
Luke Parker
b205391b28 Add libssl-dev to message-queue build packages 2023-07-26 04:33:36 -04:00
Luke Parker
32a937ddb9 Move the Monero Dockerfile to alpine 2023-07-26 04:22:41 -04:00
Luke Parker
bdbeedc723 Add pkg-config to Dockerfiles 2023-07-26 03:57:13 -04:00
Steven Chang
f306618e84 docs/protocol/Staking.md: delete 2023-07-26 03:46:55 -04:00
Luke Parker
89865b549c Update Serai node Dockerfile 2023-07-26 03:45:30 -04:00
Luke Parker
39eae2795f Update Dockerfiles to bookworm, successfully
Removes use of the Parity CI image. Always uses slim variants.
2023-07-26 03:06:01 -04:00
Luke Parker
0eb56406a4 Further dependency minimization for build times 2023-07-26 03:03:44 -04:00
Luke Parker
afb385fba4 Use spin's Once for OnceLock 2023-07-26 02:59:24 -04:00
Luke Parker
821f5d8de4 Restore create_if_missing to RocksDB code 2023-07-25 23:00:10 -04:00
Luke Parker
3862731a12 Minimize features pulled in to try and reduce build times 2023-07-25 22:29:39 -04:00
Luke Parker
42eb674d1a Print docker build 2023-07-25 22:18:20 -04:00
Luke Parker
1af5f1bcdc Revert from bookworm to bullseye
Parity's CI uses bullseye and bullseye has an incompatible, dynamically linked
openssl.
2023-07-25 22:15:33 -04:00
Luke Parker
32435d8a4c Consolidate RocksDB code
Moves explicitly to zstd. RocksDB recommends zstd, or at least lz4 over snappy,
and this minimizes which dependencies we pull in.
2023-07-25 21:43:27 -04:00
Luke Parker
49ce792b91 Move from bullseye-slim to bookworm-slim
Also moves from bullseye in the processor to bullseye-slim. This requires
adding back the apt install, yet the tests/docker cache should handle it.

Minimizes processor image surface, hopefully also shrinks the CI down a bit.
2023-07-25 21:10:28 -04:00
Luke Parker
4949793c3f Clear docker cache after building in CI
We're at the CI storage limits, so hopefully this helps.
2023-07-25 21:09:40 -04:00
Luke Parker
61d46dccd4 Rename scan_test to batch_test 2023-07-25 18:10:05 -04:00
Luke Parker
88a1fce15c Test the processor's batch signing
Updates message-queue to try recv every second, not every 5.
2023-07-25 18:09:23 -04:00
Luke Parker
a2493cfafc Sub-CoordinatorMessage -> CoordinatorMessage via From/Into 2023-07-25 17:33:05 -04:00
Luke Parker
e3de64d5ff Check the processors picked up the received input 2023-07-24 22:11:58 -04:00
Luke Parker
ecd0457d5b clippy fixes 2023-07-24 21:49:51 -04:00
Luke Parker
7990ee689a Send to a processor from a test
Mainly here to build out the infra. Does not automate checking
receipt/batch creation yet.
2023-07-24 20:06:05 -04:00
Luke Parker
f05e909d0e Fix which key is used to index substrate_signers on ScannerEvent::Block
First notable bug found by Docker tests.
2023-07-24 19:38:31 -04:00
Luke Parker
5e565fa3ef Correct when the Processor starts using the first key
It waited for CONFIRMATIONS + 1 confirmations, instead of CONFIRMATIONS
confirmations.

Also adds a lib interface to access the coin traits and its constants.
2023-07-24 15:36:35 -04:00
Luke Parker
6df1b46313 Don't use dbg for printing stdout/stderr
They are byte buffers, not strings. A pretty print has been added accordingly.
2023-07-24 15:35:43 -04:00
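
A sketch of the pretty print; the buffers may not be valid UTF-8, so a lossy conversion replaces the Debug formatting dbg! would use:

  fn pretty_print_output(name: &str, stdout: &[u8], stderr: &[u8]) {
    println!("=== {name} stdout ===");
    println!("{}", String::from_utf8_lossy(stdout));
    println!("=== {name} stderr ===");
    println!("{}", String::from_utf8_lossy(stderr));
  }
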
Luke Parker
24dba66bad cargo update 2023-07-24 04:54:13 -04:00
Luke Parker
fd585d496c Resolve #321 2023-07-24 04:53:59 -04:00
Luke Parker
5703591eb2 Extend criteria to run Docker tests
The unit tests should be sufficient for these cases, making this extraneous,
yet it's better to be complete than at risk.
2023-07-24 02:56:58 -04:00
Luke Parker
9ac3b203c8 Fix panic causable by remote node 2023-07-24 02:53:54 -04:00
Luke Parker
23e1c9769c dalek 4.0 2023-07-23 14:32:14 -04:00
Luke Parker
8e6e05ae2d Set better logging defaults for Docker tests 2023-07-23 10:13:11 -04:00
Luke Parker
713660c79c Make key_gen a gadget, add ConfirmKeyPair 2023-07-22 05:10:40 -04:00
Luke Parker
cb8c8031b0 Correct retrieval of LastTagTime when the Docker image doesn't already exist 2023-07-22 04:37:45 -04:00
Luke Parker
d07447fe97 Implement an (almost) full Key Gen test for processor's Docker tests
It doesn't confirm the key pair yet.

Adds the infra needed to test processors against each other.
2023-07-22 04:06:44 -04:00
Luke Parker
c26beae0f9 Only rebuild Docker images when their source has been modified 2023-07-22 02:40:14 -04:00
Luke Parker
818215b570 Correct Dockerfile caching 2023-07-22 01:12:39 -04:00
Luke Parker
79943c3a6c MessageQueue::new 2023-07-22 01:12:15 -04:00
Luke Parker
076a8e4d62 Add the requirement for a debug Serai node to running tests documentation 2023-07-21 15:11:22 -04:00
Luke Parker
ffd1457927 Correct message-queue Dockerfile
It worked with my cache yet not without cache.
2023-07-21 15:08:37 -04:00
Luke Parker
523a055b74 Add processor Docker tests
Adds tests/docker for code common to Docker-based tests.
2023-07-21 14:08:42 -04:00
Luke Parker
624fb2781d Update how RPCs are handled
The processor now takes three vars and joins them itself. message-queue uses a
single argument, with defaults, as it's a service we control.
2023-07-21 14:01:42 -04:00
Luke Parker
641077a089 Use non-slim variants to avoid needing to apt install additional packages
Prevents needing to rebuild all the time via allowing cache hits.
2023-07-21 13:58:54 -04:00
Luke Parker
298d1fd3ba Slim coins and processor Dockerfiles 2023-07-21 04:27:14 -04:00
Luke Parker
92c3403698 Slim down the message-queue Dockerfile
While I tried Alpine, RocksDB won't build unless it's statically linked. Then
async-trait (and proc-macros in general) won't compile when statically linked.
2023-07-21 04:07:32 -04:00
Luke Parker
37af8b51b3 Fallback to pgrep if pidof is unavailable 2023-07-21 03:37:48 -04:00
Luke Parker
900298b94b CI tweaks 2023-07-20 19:34:10 -04:00
Luke Parker
9effd5ccdc Add a Docker-based test for the message-queue service 2023-07-20 18:53:11 -04:00
Luke Parker
ceeb57470f Print when ConnectionErrors occur in reserialize_chain 2023-07-20 18:53:11 -04:00
Steven Chang
69454fa9bb .rustfmt.toml: add edition 2023-07-20 15:28:03 -04:00
Luke Parker
5121ca7519 Handle the minimum relay fee 2023-07-20 01:20:28 -04:00
Luke Parker
1eb3b364f4 Correct dust constant 2023-07-20 00:29:31 -04:00
Luke Parker
f66fe3c1cb 3.10 Remove use of Network::Bitcoin
All uses were safe due to addresses being converted to script_pubkeys which
don't embed their network. The only risk of there being an issue is if a
future address spec did embed the net ID into the script_pubkey and that spec
was adopted.

This resolves the audit note and does offer that tightening.
2023-07-20 00:27:56 -04:00
Luke Parker
6f9d02fdf8 3.11 Better document API expectations 2023-07-19 23:51:21 -04:00
Luke Parker
28613400b8 message-queue and processor Dockerfiles 2023-07-19 20:13:16 -04:00
Luke Parker
b6579d5a2a Update Dockerfiles
Cleans files, updates Bitcoin and Monero versions, fixes Monero immediately
exiting.
2023-07-19 19:22:49 -04:00
Luke Parker
c2f32e7882 Update to FROST v14 2023-07-19 15:48:34 -04:00
Justin Berman
228e36a12d monero-serai: fee calculation parity with Monero's wallet2 (#301)
* monero-serai: fee calculation parity with Monero's wallet2

* Minor lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-19 15:06:05 -04:00
Luke Parker
38bcedbf11 Cargo.lock for prior commit 2023-07-19 00:58:46 -04:00
Luke Parker
98f9fc2c2f Pin to serde 1.0.167 due to https://github.com/monero-rs/monero-rs/issues/162 2023-07-19 00:53:03 -04:00
Luke Parker
d5cfb3fb25 Corrections from renaming Index to Nonce 2023-07-18 23:52:37 -04:00
Luke Parker
b1dbe1f50d hashbrown 0.14 in validator-sets 2023-07-18 23:25:15 -04:00
Luke Parker
1b57d655ed Update to subxt 0.29 2023-07-18 23:01:51 -04:00
Luke Parker
64402914ba Update substrate 2023-07-18 22:30:55 -04:00
Luke Parker
a7c9c1ef55 Integrate coordinator with MessageQueue and RocksDB
Also resolves a couple TODOs.
2023-07-18 01:53:51 -04:00
Luke Parker
a05961974a Lint a couple TODOs in the processor 2023-07-17 18:11:21 -04:00
Luke Parker
acc9495429 Use MessageQueue instead of MemCoordinator in processor
Also has it use RocksDB.
2023-07-17 18:02:29 -04:00
Luke Parker
344ac9cbfc Add ack signatures
Also modifies message signatures to be binding to from, not just from's key.
2023-07-17 17:40:34 -04:00
Luke Parker
6ccac2d0ab Add a message-queue connection to processor
Still needs love, yet should get us closer to starting testing.
2023-07-17 15:49:17 -04:00
Luke Parker
56f7037084 Correct get_o_indexes to work for 0-output TXs 2023-07-17 12:57:17 -04:00
Luke Parker
a0f8214d48 Resolve clippy 2023-07-17 10:53:47 -04:00
Luke Parker
5f93140ba5 Have reserialize_chain automatically retry on ConnectionError
Fixes expectations of formatting by expect as well.
2023-07-17 03:14:49 -04:00
Luke Parker
712f11d879 Correct the message-queue's handling of var 2023-07-17 02:01:31 -04:00
Luke Parker
808a633e4d Split CI to reduce tests run
common/ is now only run when common is edited. crypto/ when common/ or crypto/.
coins/ when common/ or crypto/ or coins/. The rest of the tests are run
whenever any package is edited (as they're all inter-connected).
2023-07-17 01:06:56 -04:00
Luke Parker
0a367bfbda Add common crate to access env variables
In the future, we should use a proper secret store (not just env variables).
This lets us update one block of code and not n in the future.
2023-07-17 00:53:05 -04:00
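
A sketch of the chokepoint described; swapping in a real secret store later means updating this one function (the actual crate's API may differ):

  pub fn var(variable: &str) -> Option<String> {
    std::env::var(variable).ok()
  }
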
Luke Parker
845c2842b5 message-queue RocksDB + fix listening 2023-07-17 00:22:55 -04:00
Luke Parker
8543487db2 Document message-queue RPC methods 2023-07-16 20:53:58 -04:00
Luke Parker
a9072e6b1b Remove signature from get_next_message
Due to signature replaying, it's very annoying to provide meaningful data
access privacy. None of these messages should be private/have sensitive data
anyways though.
2023-07-16 20:38:13 -04:00
Luke Parker
b0c28a1cf0 Move message_queue over to deduplication via intents
Due to each service having multiple distinct clocks, we can't expect a stable
ordering except the ordering an intact message-queue provides. The messages
emitted should be consistent however, solely with unknown order, which is why
we can craft intents based on their contents (already implemented by
processor-messages).
2023-07-16 20:34:35 -04:00
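
A sketch of content-derived intents; std's DefaultHasher stands in purely for illustration, where a real queue would use a cryptographic hash:

  use std::{
    collections::{hash_map::DefaultHasher, HashSet},
    hash::{Hash, Hasher},
  };

  // Derive an intent from the sender, recipient, and message contents, as the
  // services' distinct clocks can't provide a stable ordering to key off.
  fn intent(from: &str, to: &str, msg: &[u8]) -> u64 {
    let mut hasher = DefaultHasher::new();
    (from, to, msg).hash(&mut hasher);
    hasher.finish()
  }

  // Returns false when this exact message was already queued, deduplicating.
  fn queue(seen: &mut HashSet<u64>, from: &str, to: &str, msg: &[u8]) -> bool {
    seen.insert(intent(from, to, msg))
  }
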
akildemir
23b9d57305 add polyseed support (#257)
* add polyseed support

* fix pr comments

* fix tests

* Embed the mempool into the Blockchain

* Plan scheduled payments whenever outputs are received

The scheduler prior waited for the next series of payments to be added.

* Replace Tendermint step with sync_block

Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.

* Finish binding Tendermint into Tributary and define a Tributary master object

* Add correction the last commit missed

* Add DoS limits to tributary and require provided transactions be ordered

* Fix the scheduler from dropping UTXOs when there weren't any payments

* Documentation and cargo update

* Add a dedicated db crate with a basic DB trait

It's needed by the processor and tributary (coordinator).

* Add a DB to Tributary

Adds support for reloading most of the blockchain.

* Reloaded provided transactions from the disk

Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.

* must_use annotations on DbTxn

* Support reloading the mempool from disk

* Add a NewSet event to validator-sets

Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.

* Add basic getters to tributary

* cargo update

* Update to the latest subxt

Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.

* Make unsigned private due to unsafe calling potential

* Start defining the coordinator

* Merge AckBlock with Burns

Offers greater efficiency while reducing concerns re: atomicity.

* Correct processor flow to have the coordinator decide signing set/re-attempts

The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.

* cargo +nightly fmt

* cargo update

Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.

* Support extracting timestamps from blocks

* Make progres on handling NewSet events

Further bones out the coordinator.

* Resolve #245

* Have InInstructions track the latest block for a network in storage

* Fill out code for the rest of the Substrate events

* Clean up the Substrate block processing code

* Rename transaction file to tributary, add function for genesis

* Add a processor API to the coordinator

* Add extensive commentary on mutable to the processor's main file

Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.

* Move ConfirmKeyPair from key_gen to substrate

Clarifies the emitter and accordingly why its mutations are justified.

* Remove BatchSigned

SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.

* Add note to processor_messages

* Use a single txn for an entire coordinator message

Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.

* cargo update to remove usage of yanked crate

* Clarify safety of Scanner::block_number and KeyGen::keys

* Tweak ConfirmKeyPair to alleviate database requirements of coordinator

* Use an enum for Coin/NetworkId

It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.

* Code a method to determine the activation block before any block has consensus

[0; 32] is a magic value for no block having been set yet, due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.

* automate whitespace & trimming test cases

* Save keys by their tweaked group_key

Keys are referred to by their tweaked versions. If a tweak was needed, keys
would fail to confirm.

* Use crypto-bigint's reduction in ed448

Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.

* Move the entirety of ed448 to Residue, offering a further 2-4x speedup

* Resolve #68

Notably speeds up monero-serai's build and CLSAG performance.

* Make MainDB into SubstrateDB

* Initial Tributary handling

* Add additional checks to key_gen/sign

There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.

* Apply DKG TX handling code to all sign TXs

The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.

* Split FinalizedBlock into ExternalBlock and SeraiBlock

Also re-arranges their orders.

* Add support for multiple orderings in Provided

Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.

* Slash on unrecognized ID

* ExternalBlock handler

* Add a SubstrateBlockAck message to the processor

When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.

* Route the SubstrateBlock message, which is the last Tributary transaction type

* Add recent bloat checks added to signer to substrate_signer as well

* Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators

transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.

* Add a test to the coordinator for running a Tributary

Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.

* Add a test for Tributary

Further fleshes out the Tributary testing code.

* Test handling of DKG commitments transactions

* Add Transaction::sign.

While I don't love the introduction of empty_signed, it's practically fine.

* Tributary test wait_for_tx_inclusion function

* Additionally test DKGShares

* Handle adding new Tributaries

Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.

* Reload Tributaries

add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.

* Start handling P2P messages

This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.

* Clarify Arc RwLocks and sleeps in coordinator

* Send a heartbeat message when a Tributary falls behind

* cargo fmt

* cargo update

* Move json word lists to rs

Allows building the seed code without serde_json.

* Break coordinator main into multiple functions

Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.

* Remove reliance on a blockchain read lock from block/commit

* Implement Tributary syncing

Also adds a forwards-lookup to the Tributary blockchain.

* Don't return from sync_block until the Tendermint machine returns if it's valid or not

We had a race condition where'd we be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.

* Test handle_p2p and Tributary syncing

Includes bug fixes.

* Tweak tests workflow

* Add a TributaryReader which doesn't require a borrow to operate

Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary on the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.

* Document panic in FROST

* Document a pair of panics requiring 256 GB of RAM/4 GB of a context

* Add a UID function to messages

When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.

* Initial code to handle messages from processors

* Document the processor/tributary/coordinator/serai flow

* Have Coordinator MainDb take a mutable borrow

* Update to substrate polkadot-v0.9.42

* Correct error message in ff-group-tests

* Update to May's nightly

Doesn't use the PR due to the needed changes.

* Support arbitrary RPC providers in monero-serai

Sets a clean path for no-std premised RPCs (buffers to an external RPC impl)/
Tor-based RPCs/client-side load balancing/...

* Correct processor's handling of the new Monero RPC code

* Correct Serai Dockerfile

* Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged

Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.

* Rename uid to intent

* Use U448 for Ed448 instead of U512

* Spawn a new async task for each block message

This probably should be done with n-long lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.

* Move the coordinator to an n-processor design

* Ensure Tributary commits are minimal

* Properly get genesis for a Processor message

* Create a vote transaction upon GeneratedKeyPair

* Remove TODO about code de-duplication

It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.

* Have coordinator publish batches to Substrate

* Implement MuSig key aggregation into DKG

Isn't spec compliant due to the lack of a spec to be compliant to.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.

* Correct 2/3rds definitions throughout the codebase

The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67.

* cargo update

Resolves a yanked crate and removes some duplicated dependencies.

* Add a dedicated function to get a MuSig key

* Do the minimal amount of work for dkg to compile under no-std

The Substrate runtime requires access to the MuSig key aggregation function.

#279 related.

* Use a MuSig signature to publish validator set key pairs to Serai

The processor/coordinator flow still has to be rewritten.

* Correct various no_std definitions

* Add a context to MuSig key aggregation

* Use proper messages for ValidatorSets/InInstructions pallet

Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.

* Remove signed Substrate TXs from Coordinator

* Only scan v2 Monero TXs

* Fix for prior commit

* Ensure canonical points in the cross-group DLEq proof

* Fix incorrect sig_hash generation

sig_hash was used as a challenge. Challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.

* cargo update

Resolves an openssl advisory and nets ~-8 crates.

* Build no-std tests with RISC-V 32 IMAC

Turns out wasm still has std, making it suboptimal to use here.

* Pin setup-protoc to v2.0.0

* Update to substrate polkadot-v0.9.43

* fix tributary sync test

* Slight terminology correction in sync test

Also correct a mistake from merging the most recent polkadot version.

* Update nightly

* Replace lazy_static with OnceLock inside monero-serai

lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.

* no-std support for monero-serai (#311)

* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt

* Represent RCT amounts with None, not 0.

Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.

* Add a message queue

This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written as never fail.

Primarily needs testing and a proper ID.

* cargo update

Resolves https://github.com/serai-dex/serai/security/dependabot/29

* Correct deny.toml with inclusion of message-queue

* Update nightly

* std-shims: fix `Read` for &[u8]

* Use serai- prefixes on Serai-specific packages

Fixes deny.toml, also runs a minor cargo update shrinking the tree.

* Update monero-tests workflow to new name for the processor

* Correct depends for processor-messages

* Disable Rust caching

We hit the cache limit after just one or two builds, making it infeasible.

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

* Monero: support for legacy transactions (#308)

* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Fix the known issue with the DSA

I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).

* Add a bin to download a chain, over RPC, reserializing and hashing every item

Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.

* Correct how Monero integration tests are run

* Support multiple RPCs in the reserialize_chain bin

* Don't call get_height every block

* Modify get_transactions to split requests as to not hit the restricted RPC limits

* Meaningful changes from aggressive-clippy

I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.

* Extend reserialize_chain with CLSAG/BP(+) verification

* Remove spammy println from reserialize_chain

* Update reserialize_chain for v1 and migration TXs

Also always marks 0-amount inputs as RCT due to impossibility of non-RCT
0-amount outputs.

* Only deserialize RctSignatures where there's at least one input

This is only enforced by the Monero protocol due to a single check that the
mixRing isn't empty in get_pre_mlsag_hash. The value in ensuring there's at
least one input is to ensure the safety of our rct_type functions, which
determine the
RctType based off structural analysis (specifically, input data if
MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.

* docs/Getting Started.md: cargo build --release --all-features

* Fix the known instance of #295

* Bind RocksDB into serai-db

* Split up tests in CI to avoid node storage limits

* Corrections to prior commit

* Again

I called git commit --amend without calling git add . again :(

* Update the flow for completed signing processes

Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.

* Clean Polyseed code

* Final tweaks

* Correct no-std builds for Polyseed

* Again correct no-std

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Co-authored-by: GitHub Actions <unknown>
Co-authored-by: Boog900 <54e72d8a-345f-4599-bd90-c6b9bc7d0ec5@aleeas.com>
Co-authored-by: Boog900 <108027008+Boog900@users.noreply.github.com>
Co-authored-by: Steven Chang <stevenchang5000@gmail.com>
2023-07-16 07:25:17 -04:00
Luke Parker
807ec30762 Update the flow for completed signing processes
Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.
2023-07-14 14:05:12 -04:00
Luke Parker
5424886d63 Again
I called git commit --amend without calling git add . again :(
2023-07-14 13:13:38 -04:00
Luke Parker
2bebe0755d Corrections to prior commit 2023-07-14 13:11:01 -04:00
Luke Parker
f0ce6e6388 Split up tests in CI to avoid node storage limits 2023-07-14 13:02:58 -04:00
Luke Parker
62504b2622 Bind RocksDB into serai-db 2023-07-13 19:09:11 -04:00
Luke Parker
c9bb284570 Fix the known instance of #295 2023-07-13 14:02:57 -04:00
Steven Chang
6a4bccba74 docs/Getting Started.md: cargo build --release --all-features 2023-07-13 13:04:51 -04:00
Luke Parker
df67b7d94c 3.13 Better document the offset mapping 2023-07-10 15:02:34 -04:00
Luke Parker
677b9b681f 3.9/3.10. 3.9: Remove cast which fails on a several GB malicious TX
3.10 has its impossibility documented. A malicious RPC cannot affect this code.
2023-07-10 14:44:18 -04:00
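
The pattern, sketched generically: a checked conversion in place of an "as" cast, so an attacker-supplied length becomes an error instead of a silent truncation feeding an allocation:

  use std::io;

  fn checked_len(len: u64) -> io::Result<usize> {
    usize::try_from(len)
      .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "length exceeds usize"))
  }
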
Luke Parker
fa1b569b78 3.8 Document termination of unbounded loop 2023-07-10 14:34:32 -04:00
Luke Parker
d75115ce13 3.7 Replace unwraps with expects
Doesn't replace unwraps on integer conversions.
2023-07-10 14:02:59 -04:00
Luke Parker
3d00d405a3 3.5 (handled with 3.1) 2023-07-10 13:35:47 -04:00
Luke Parker
3480fc5e16 3.4 2023-07-10 13:33:08 -04:00
Luke Parker
7fa5d291b8 Implement a more robust validity check on connection creation 2023-07-09 15:49:35 -04:00
Luke Parker
b54548b13a Only deserialize RctSignatures where there's at least one input
This is only enforced by the Monero protocol due to a single check that the
mixRing isn't empty in get_pre_mlsag_hash. The value in ensuring there's at
least one input is to ensure the safety of our rct_type functions, which
determine the
RctType based off structural analysis (specifically, input data if
MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.
2023-07-09 00:44:23 -04:00
Luke Parker
c878d38c60 3.1 2023-07-08 21:48:18 -04:00
Luke Parker
5d9067b84d Update reserialize_chain for v1 and migration TXs
Also always marks 0-amount inputs as RCT due to impossibility of non-RCT
0-amount outputs.
2023-07-08 21:09:22 -04:00
Luke Parker
13f48a406e Remove spammy println from reserialize_chain 2023-07-08 20:30:45 -04:00
Luke Parker
35fcd11096 Extend reserialize_chain with CLSAG/BP(+) verification 2023-07-08 20:29:55 -04:00
Luke Parker
93b1656f86 Meaningful changes from aggressive-clippy
I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.
2023-07-08 11:29:07 -04:00
Luke Parker
3c6cc42c23 Modify get_transactions to split requests as to not hit the restricted RPC limits 2023-07-05 22:12:34 -04:00
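
A sketch of the splitting, with an illustrative chunk size and a hypothetical single-batch function standing in for the actual RPC call:

  // Hypothetical: one request, sized under the restricted RPC's cap.
  async fn get_transactions_batch(hashes: &[[u8; 32]]) -> Vec<Vec<u8>> {
    let _ = hashes;
    vec![]
  }

  async fn get_transactions(hashes: &[[u8; 32]]) -> Vec<Vec<u8>> {
    let mut txs = Vec::with_capacity(hashes.len());
    // Split the request into chunks so no single request exceeds the cap.
    for chunk in hashes.chunks(100) {
      txs.extend(get_transactions_batch(chunk).await);
    }
    txs
  }
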
Luke Parker
93fe8a52dd Don't call get_height every block 2023-07-05 20:11:31 -04:00
Luke Parker
249f7b904f Support multiple RPCs in the reserialize_chain bin 2023-07-05 19:50:35 -04:00
Luke Parker
8ce8657d34 Correct how Monero integration tests are run 2023-07-05 19:11:52 -04:00
Luke Parker
e5a196504c Add a bin to download a chain, over RPC, reserializing and hashing every item
Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.
2023-07-05 18:59:22 -04:00
Luke Parker
b195db0929 Fix the known issue with the DSA
I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).
2023-07-04 18:11:57 -04:00
Boog900
89eef95fb3 Monero: support for legacy transactions (#308)
* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-04 17:18:05 -04:00
Luke Parker
0f80f6ec7d Move location of serai-client in Cargo.toml 2023-07-04 13:21:52 -04:00
Luke Parker
740274210b cargo update
Resolves a yanked crate
2023-07-04 13:20:59 -04:00
Luke Parker
6ac57be4e3 Disable Rust caching
We hit the cache limit after just one or two builds, making it infeasible.
2023-07-03 18:03:09 -04:00
Luke Parker
08e7ca955b Correct depends for processor-messages 2023-07-03 12:40:56 -04:00
Luke Parker
239800cfcf Update monero-tests workflow to new name for the processor 2023-07-03 09:12:29 -04:00
Luke Parker
d49c636f0f Use serai- prefixes on Serai-specific packages
Fixes deny.toml, also runs a minor cargo update shrinking the tree.
2023-07-03 08:50:23 -04:00
Boog900
30834fe4d2 std-shims: fix Read for &[u8] 2023-07-03 07:13:06 -04:00
GitHub Actions
d928b787f7 Update nightly 2023-07-03 07:10:53 -04:00
Luke Parker
c7b232949a Correct deny.toml with inclusion of message-queue 2023-07-03 07:09:35 -04:00
Luke Parker
acf2469dd8 cargo update
Resolves https://github.com/serai-dex/serai/security/dependabot/29
2023-07-01 20:27:03 -04:00
Luke Parker
6267acf3df Add a message queue
This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written as never fail.

Primarily needs testing and a proper ID.
2023-07-01 08:53:46 -04:00
Luke Parker
a95ecc2512 Represent RCT amounts with None, not 0.
Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.
2023-06-29 13:16:51 -04:00
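
A sketch of the distinction, with hypothetical field names; RCT outputs hide their amount behind a commitment, making None the honest representation, while explicit amounts (including an explicit 0) remain Some:

  struct Output {
    commitment: [u8; 32],
    // None for RCT outputs, whose amounts are hidden by the commitment.
    // Some(amount), possibly Some(0), where the amount is explicit.
    amount: Option<u64>,
  }
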
Luke Parker
ac708b3b2a no-std support for monero-serai (#311)
* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt
2023-06-29 04:14:29 -04:00
Luke Parker
d25c668ee4 Replace lazy_static with OnceLock inside monero-serai
lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.
2023-06-28 21:45:57 -04:00
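
A sketch of the std side of the change; OnceLock initializes on first access without lazy_static's macro, and without spin locks on std builds:

  use std::sync::OnceLock;

  static GENERATORS: OnceLock<Vec<u64>> = OnceLock::new();

  // Illustrative payload; the actual statics are precomputed curve points.
  fn generators() -> &'static [u64] {
    GENERATORS.get_or_init(|| (0 .. 64).collect())
  }
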
GitHub Actions
8ced63eaac Update nightly 2023-06-28 18:42:19 -04:00
Luke Parker
f6a497f3ac Slight terminology correction in sync test
Also correct a mistake from merging the most recent polkadot version.
2023-06-28 15:20:50 -04:00
akildemir
790fe7ee23 fix tributary sync test 2023-06-28 15:01:55 -04:00
Luke Parker
8c020abb86 Update to substrate polkadot-v0.9.43 2023-06-28 14:57:58 -04:00
Luke Parker
21f0bb2721 Pin setup-protoc to v2.0.0 2023-06-28 12:28:14 -04:00
Luke Parker
385ed2e97a Build no-std tests with RISC-V 32 IMAC
Turns out wasm still has std, making it suboptimal to use here.
2023-06-28 12:26:53 -04:00
Luke Parker
fca567f61d cargo update
Resolves an openssl advisory and nets ~-8 crates.
2023-06-22 06:25:33 -04:00
Luke Parker
dfa3106a38 Fix incorrect sig_hash generation
sig_hash was used as a challenge. Challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.
2023-06-08 06:38:25 -04:00
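
A sketch of the corrected challenge, with Sha256 standing in for the protocol's actual hash and byte-serialized points for R (the nonce commitment) and A (the public key). With a challenge independent of R, anyone could pick s, then set R = sG - cA, which verifies; binding R closes that:

  use sha2::{Digest, Sha256};

  fn challenge(r: &[u8; 32], a: &[u8; 32], msg: &[u8]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(r); // the fix: bind the nonce commitment R
    hasher.update(a);
    hasher.update(msg);
    hasher.finalize().into()
  }
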
Luke Parker
c6982b5dfc Ensure canonical points in the cross-group DLEq proof 2023-05-30 22:05:52 -04:00
Luke Parker
1aa293cc4a Fix for prior commit 2023-05-27 04:15:57 -04:00
Luke Parker
8a24fc39a6 Only scan v2 Monero TXs 2023-05-27 04:13:40 -04:00
Luke Parker
40b2920412 Remove signed Substrate TXs from Coordinator 2023-05-13 22:43:13 -04:00
Luke Parker
47f8766da6 Use proper messages for ValidatorSets/InInstructions pallet
Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.
2023-05-13 04:40:16 -04:00
Luke Parker
663b5f4b50 Add a context to MuSig key aggregation 2023-05-13 04:04:14 -04:00
Luke Parker
227176e4b8 Correct various no_std definitions 2023-05-13 04:03:56 -04:00
Luke Parker
f069567f12 Use a MuSig signature to publish validator set key pairs to Serai
The processor/coordinator flow still has to be rewritten.
2023-05-13 02:15:41 -04:00
Luke Parker
84c2d73093 Do the minimal amount of work for dkg to compile under no-std
The Substrate runtime requires access to the MuSig key aggregation function.

#279 related.
2023-05-12 23:25:17 -04:00
Luke Parker
4d50b6892c Add a dedicated function to get a MuSig key 2023-05-11 03:21:54 -04:00
Luke Parker
3eade48a6f cargo update
Resolves a yanked crate and removes some duplicated dependencies.
2023-05-10 07:34:07 -04:00
Luke Parker
89974c529a Correct 2/3rds definitions throughout the codebase
The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67.
2023-05-10 06:29:21 -04:00
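
One correct form, sketched below: the threshold is the smallest integer strictly greater than 2n/3, where the prior n / 3 * 2 + 1 under-counts whenever n isn't a multiple of 3:

  fn threshold(n: u64) -> u64 {
    ((2 * n) / 3) + 1
  }

  fn main() {
    assert_eq!(threshold(20), 14); // the prior formula yielded 13, only 65%
    assert_eq!(threshold(21), 15);
  }
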
Luke Parker
ffea02dfbf Implement MuSig key aggregation into DKG
Isn't spec compliant due to the lack of a spec to be compliant to.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.
2023-05-10 06:25:40 -04:00
Luke Parker
f55e9b40e6 Have coordinator publish batches to Substrate 2023-05-10 01:46:20 -04:00
Luke Parker
a70df6a449 Remove TODO about code de-duplication
It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.
2023-05-10 01:19:01 -04:00
Luke Parker
168f2899f0 Create a vote transaction upon GeneratedKeyPair 2023-05-10 00:46:51 -04:00
Luke Parker
c95bdb6752 Properly get genesis for a Processor message 2023-05-09 23:51:05 -04:00
Luke Parker
88f0e89350 Ensure Tributary commits are minimal 2023-05-09 23:45:05 -04:00
Luke Parker
7b7ddbdd97 Move the coordinator to an n-processor design 2023-05-09 23:44:41 -04:00
Luke Parker
9175383e89 Spawn a new async task for each block message
This probably should be done with n-long lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.
2023-05-09 17:05:33 -04:00
Luke Parker
029b6c53a1 Use U448 for Ed448 instead of U512 2023-05-09 04:12:13 -04:00
Luke Parker
219adc7657 Rename uid to intent 2023-05-08 22:21:41 -04:00
Luke Parker
964fdee175 Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged
Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.
2023-05-08 22:20:51 -04:00
Luke Parker
a7f2740dfb Correct Serai Dockerfile 2023-05-08 02:08:31 -04:00
Luke Parker
0c9c1aeff1 Correct processor's handling of the new Monero RPC code 2023-05-02 03:40:49 -04:00
Luke Parker
adfbde6e24 Support arbitrary RPC providers in monero-serai
Sets a clean path for no-std premised RPCs (buffers to an external RPC impl)/
Tor-based RPCs/client-side load balancing/...
2023-05-02 02:39:08 -04:00
Luke Parker
5765d1d278 Update to May's nightly
Doesn't use the PR due to the needed changes.
2023-05-01 04:58:50 -04:00
Luke Parker
78c00bde3d Correct error message in ff-group-tests 2023-05-01 03:18:11 -04:00
Luke Parker
c0001f5ff2 Update to substrate polkadot-v0.9.42 2023-05-01 03:17:37 -04:00
Luke Parker
6032af6692 Have Coordinator MainDb take a mutable borrow 2023-04-26 00:10:06 -04:00
Luke Parker
7824b6cb8b Document the processor/tributary/coordinator/serai flow 2023-04-25 15:05:58 -04:00
Luke Parker
78d5372fb7 Initial code to handle messages from processors 2023-04-25 03:14:42 -04:00
Luke Parker
cc531d630e Add a UID function to messages
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
2023-04-25 02:46:18 -04:00
Luke Parker
09d96822ca Document a pair of panics requiring 256 GB of RAM/4 GB of a context 2023-04-24 23:49:06 -04:00
Luke Parker
7a8f8c2d3d Document panic in FROST 2023-04-24 23:19:23 -04:00
Luke Parker
e74b4ab94f Add a TributaryReader which doesn't require a borrow to operate
Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary on the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.
2023-04-24 07:02:00 -04:00
Luke Parker
e0820759c0 Tweak tests workflow 2023-04-24 06:16:43 -04:00
Luke Parker
2feebe536e Test handle_p2p and Tributary syncing
Includes bug fixes.
2023-04-24 03:30:19 -04:00
Luke Parker
cc491ee1e1 Don't return from sync_block until the Tendermint machine returns if it's valid or not
We had a race condition where'd we be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.
2023-04-24 02:46:13 -04:00
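
A sketch of the channel shape, assuming tokio; sync_block hands the machine the block plus a oneshot sender and waits on the result, so block n + 1 is only attempted after block n was actually added:

  use tokio::sync::{mpsc, oneshot};

  type Block = Vec<u8>;

  async fn sync_block(
    to_machine: &mpsc::UnboundedSender<(Block, oneshot::Sender<bool>)>,
    block: Block,
  ) -> bool {
    let (valid_send, valid_recv) = oneshot::channel();
    to_machine.send((block, valid_send)).expect("Tendermint machine shut down");
    valid_recv.await.expect("machine dropped the validity sender")
  }
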
Luke Parker
14388e746c Implement Tributary syncing
Also adds a forwards-lookup to the Tributary blockchain.
2023-04-24 00:53:18 -04:00
Luke Parker
215155f84b Remove reliance on a blockchain read lock from block/commit 2023-04-23 23:51:10 -04:00
Luke Parker
c476f9b640 Break coordinator main into multiple functions
Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.
2023-04-23 23:15:15 -04:00
Luke Parker
be8c25aef0 Move json word lists to rs
Allows building the seed code without serde_json.
2023-04-23 22:26:05 -04:00
Luke Parker
fb296a9c2e cargo update 2023-04-23 19:21:24 -04:00
Luke Parker
aa0ec4ac41 cargo fmt 2023-04-23 18:56:48 -04:00
Luke Parker
05b1fc5f05 Send a heartbeat message when a Tributary falls behind 2023-04-23 18:55:43 -04:00
Luke Parker
72633d6421 Clarify Arc RwLocks and sleeps in coordinator 2023-04-23 18:29:50 -04:00
Luke Parker
ad5522d854 Start handling P2P messages
This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.
2023-04-23 17:01:30 -04:00
Luke Parker
f2d9d70068 Reload Tributaries
add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.
2023-04-23 04:31:00 -04:00
Luke Parker
2b09309adc Handle adding new Tributaries
Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.
2023-04-23 03:51:26 -04:00
Luke Parker
bf9ec410db Additionally test DKGShares 2023-04-23 02:18:46 -04:00
Luke Parker
e0dc5d29ad Tributary test wait_for_tx_inclusion function 2023-04-23 01:52:19 -04:00
Luke Parker
710e6e5217 Add Transaction::sign.
While I don't love the introduction of empty_signed, it's practically fine.
2023-04-23 01:25:45 -04:00
Luke Parker
3f6565588f Test handling of DKG commitments transactions 2023-04-23 01:00:46 -04:00
Luke Parker
af84b7f707 Add a test for Tributary
Further fleshes out the Tributary testing code.
2023-04-22 22:28:20 -04:00
Luke Parker
8c74576cf0 Add a test to the coordinator for running a Tributary
Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.
2023-04-22 10:49:52 -04:00
Luke Parker
1e448dec21 Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators
transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.
2023-04-22 04:38:47 -04:00
Luke Parker
ef0c901455 Add recent bloat checks added to signer to substrate_signer as well 2023-04-20 15:45:32 -04:00
Luke Parker
09c3c9cc9e Route the SubstrateBlock message, which is the last Tributary transaction type 2023-04-20 15:37:22 -04:00
Luke Parker
a404944b90 Add a SubstrateBlockAck message to the processor
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
2023-04-20 15:26:22 -04:00
Luke Parker
70d866af6a ExternalBlock handler 2023-04-20 14:51:33 -04:00
Luke Parker
f99a91b34d Slash on unrecognized ID 2023-04-20 14:33:19 -04:00
Luke Parker
294ad08e00 Add support for multiple orderings in Provided
Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.
2023-04-20 07:32:40 -04:00
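
A sketch of per-label queues; validators only have to agree on each label's internal order, not on how labels interleave:

  use std::collections::{HashMap, VecDeque};

  #[derive(Default)]
  struct ProvidedTransactions<T> {
    queues: HashMap<&'static str, VecDeque<T>>,
  }

  impl<T> ProvidedTransactions<T> {
    fn provide(&mut self, order: &'static str, tx: T) {
      self.queues.entry(order).or_default().push_back(tx);
    }
    fn next(&mut self, order: &'static str) -> Option<T> {
      self.queues.get_mut(order)?.pop_front()
    }
  }
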
Luke Parker
a26ca1a92f Split FinalizedBlock into ExternalBlock and SeraiBlock
Also re-arranges their orders.
2023-04-20 06:59:42 -04:00
Luke Parker
9c2a44f9df Apply DKG TX handling code to all sign TXs
The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.
2023-04-20 06:27:05 -04:00
Luke Parker
8b5eaa8092 Add additional checks to key_gen/sign
There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.
2023-04-20 05:36:48 -04:00
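A minimal sketch of the added check, assuming a message type which deserializes
itself from a reader; all names here are illustrative:

    use std::io::Read;

    trait ReadMessage: Sized {
      fn read<R: Read>(reader: &mut R) -> std::io::Result<Self>;
    }

    // Reject any bytes trailing an otherwise-valid message, so bloat appended
    // after the valid data fails validation instead of reaching the chain.
    fn read_without_bloat<M: ReadMessage>(mut data: &[u8]) -> Result<M, &'static str> {
      let msg = M::read(&mut data).map_err(|_| "invalid message")?;
      if !data.is_empty() {
        return Err("trailing bytes after a valid message");
      }
      Ok(msg)
    }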
Luke Parker
8041a0d845 Initial Tributary handling 2023-04-20 05:05:17 -04:00
Luke Parker
9e1f3fc85c Make MainDB into SubstrateDB 2023-04-20 05:04:08 -04:00
Luke Parker
ee65e4df8f Resolve #68
Notably speeds up monero-serai's build and CLSAG performance.
2023-04-20 01:18:16 -04:00
Luke Parker
ff2febe5aa Move the entirety of ed448 to Residue, offering a further 2-4x speedup 2023-04-19 04:02:59 -04:00
Luke Parker
334873b6a5 Use crypto-bigint's reduction in ed448
Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.
2023-04-19 03:35:57 -04:00
Luke Parker
21026136bd Save keys by their tweaked group_key
Keys are referred to by their tweaked versions. Previously, if a tweak was
needed, keys would fail to confirm.
2023-04-18 14:55:15 -04:00
Luke Parker
396e5322b4 Code a method to determine the activation block before any block has consensus
[0; 32] is a magic value meaning no block has been set yet, due to this being
the first key pair. If [0; 32] is the latest finalized block, the processor
determines an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.
2023-04-18 03:04:52 -04:00
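A minimal sketch of the described branch; the function and closure names are
assumptions for illustration:

    const FIRST_KEY_PAIR_MAGIC: [u8; 32] = [0; 32];

    // If no block has consensus yet (first key pair), fall back to timestamps;
    // otherwise, use the agreed-upon finalized block.
    fn activation_number(
      latest_finalized: [u8; 32],
      by_timestamp: impl FnOnce() -> u64,
      by_block: impl FnOnce([u8; 32]) -> u64,
    ) -> u64 {
      if latest_finalized == FIRST_KEY_PAIR_MAGIC {
        by_timestamp()
      } else {
        by_block(latest_finalized)
      }
    }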
Luke Parker
9da0eb69c7 Use an enum for Coin/NetworkId
It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.
2023-04-18 02:04:47 -04:00
Luke Parker
6f3b5f4535 Tweak ConfirmKeyPair to alleviate database requirements of coordinator 2023-04-18 01:09:22 -04:00
Luke Parker
e880ebb5a9 Clarify safety of Scanner::block_number and KeyGen::keys 2023-04-18 00:26:19 -04:00
Luke Parker
1036e673ce cargo update to remove usage of yanked crate 2023-04-17 23:59:04 -04:00
Luke Parker
fd1bbec134 Use a single txn for an entire coordinator message
Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.
2023-04-17 23:55:12 -04:00
Luke Parker
7579c71765 Add note to processor_messages 2023-04-17 23:11:44 -04:00
Luke Parker
5a499de4ca Remove BatchSigned
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.
2023-04-17 20:19:15 -04:00
Luke Parker
e26b861d25 Move ConfirmKeyPair from key_gen to substrate
Clarifies the emitter and accordingly why its mutations are justified.
2023-04-17 19:40:17 -04:00
Luke Parker
059e79c98a Add extensive commentary on mutable to the processor's main file
Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.
2023-04-17 19:24:02 -04:00
Luke Parker
92a868e574 Add a processor API to the coordinator 2023-04-17 02:10:33 -04:00
Luke Parker
595cd6d404 Rename transaction file to tributary, add function for genesis 2023-04-17 02:09:29 -04:00
Luke Parker
4d43c04916 Clean up the Substrate block processing code 2023-04-17 00:50:56 -04:00
Luke Parker
2604746586 Fill out code for the rest of the Substrate events 2023-04-16 03:18:52 -04:00
Luke Parker
36cdf6d4bf Have InInstructions track the latest block for a network in storage 2023-04-16 02:57:19 -04:00
Luke Parker
9676584ffe Resolve #245 2023-04-16 01:03:32 -04:00
Luke Parker
79655672ef Make progress on handling NewSet events
Further bones out the coordinator.
2023-04-16 00:51:56 -04:00
Luke Parker
fa2cf03e61 Support extracting timestamps from blocks 2023-04-16 00:31:54 -04:00
Luke Parker
92ad689c7e cargo update
Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.
2023-04-15 23:21:18 -04:00
Luke Parker
b2169a7316 cargo +nightly fmt 2023-04-15 23:09:39 -04:00
Luke Parker
e2571a43aa Correct processor flow to have the coordinator decide signing set/re-attempts
The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.
2023-04-15 23:01:07 -04:00
Luke Parker
e21fc5ff3c Merge AckBlock with Burns
Offers greater efficiency while reducing concerns re: atomicity.
2023-04-15 18:38:40 -04:00
Luke Parker
eafd054296 Start defining the coordinator 2023-04-15 17:38:47 -04:00
Luke Parker
51bf51ae1e Make unsigned private due to unsafe calling potential 2023-04-15 16:37:59 -04:00
Luke Parker
28b6bc99ac Update to the latest subxt
Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.
2023-04-15 05:23:57 -04:00
Luke Parker
ce883104b7 cargo update 2023-04-15 03:27:45 -04:00
Luke Parker
f48022c6eb Add basic getters to tributary 2023-04-15 00:41:48 -04:00
Luke Parker
124b994c23 Add a NewSet event to validator-sets
Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.
2023-04-15 00:40:33 -04:00
Luke Parker
2e2bc59703 Support reloading the mempool from disk 2023-04-14 15:51:56 -04:00
Luke Parker
c032f66f8a must_use annotations on DbTxn 2023-04-14 15:04:26 -04:00
Luke Parker
695d923593 Reloaded provided transactions from the disk
Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.
2023-04-14 15:03:01 -04:00
Luke Parker
63318cb728 Add a DB to Tributary
Adds support for reloading most of the blockchain.
2023-04-14 14:11:40 -04:00
Luke Parker
6f6c9f7cdf Add a dedicated db crate with a basic DB trait
It's needed by the processor and tributary (coordinator).
2023-04-14 11:47:43 -04:00
Luke Parker
04e7863dbd Documentation and cargo update 2023-04-13 21:06:11 -04:00
Luke Parker
a5002c50ec Fix the scheduler from dropping UTXOs when there weren't any payments 2023-04-13 20:59:36 -04:00
Luke Parker
72dd665ebf Add DoS limits to tributary and require provided transactions be ordered 2023-04-13 20:35:55 -04:00
Luke Parker
8b1bce6abd Add correction the last commit missed 2023-04-13 18:47:34 -04:00
Luke Parker
e73a51bfa5 Finish binding Tendermint into Tributary and define a Tributary master object 2023-04-13 18:43:27 -04:00
Luke Parker
5858b6c03e Replace Tendermint step with sync_block
Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.
2023-04-13 18:18:29 -04:00
Luke Parker
9bea368d36 Plan scheduled payments whenever outputs are received
The scheduler prior waited for the next series of payments to be added.
2023-04-13 15:41:56 -04:00
Luke Parker
a509dbfad6 Embed the mempool into the Blockchain 2023-04-13 09:47:14 -04:00
Luke Parker
03a6470a5b Finish binding Tendermint, bar the P2P layer 2023-04-12 18:04:28 -04:00
Luke Parker
997dd611d5 Don't add blocks which aren't valid
Previously, Tendermint needed to be live more than it needed to be correct.
Under the original intention for it, correctness would fail if any coin
desynced, which would cause the node to fail entirely. By accepting a
supermajority's view of state, despite its own, a single coin's failure would
only lead to inability to participate with that single coin.

Now that Tendermint is solely for Tributary, nodes should halt a coin-specific
chain if their view of the chain differs. They are unable to meaningfully
participate regardless.

This also means a supermajority of validators can no longer fake messages from
other validators, allowing the Tributary chain to use uniform weights with much
less impact. There is still enough impact they can't be used (ability to cause
a fork), yet they should allow uniform block production (as that's solely a DoS
concern).

While we previously could've simply additionally checked signatures, add_block's
lack of a failure case would've meant it had to panic. This would've been a DoS
achievable by a minority of weight *which affected the entire coordinator* and
therefore *the entire validator for all coins*.
2023-04-12 16:18:42 -04:00
Luke Parker
86cbf6e02e Bind the signature scheme for tendermint-machine 2023-04-12 16:06:14 -04:00
Luke Parker
8c8232516d Only allow designated participants to send transactions 2023-04-12 12:42:23 -04:00
Luke Parker
be947ce152 Add a mempool 2023-04-12 12:15:38 -04:00
Luke Parker
7c7f17aac6 Test the blockchain 2023-04-12 11:13:48 -04:00
Luke Parker
ff5c240fcc Fix a bug in the merkle algorithm 2023-04-12 10:52:28 -04:00
Luke Parker
d5a12a9b97 Make TransactionKind have a reference to Signed
Broken commit due to partial staging of one file.
2023-04-12 09:38:20 -04:00
Luke Parker
354ac856a5 Extensively test transactions 2023-04-12 08:51:40 -04:00
Luke Parker
402a7be966 Block constructor and tests 2023-04-11 20:24:27 -04:00
Luke Parker
119d25be49 Clarify transaction length sizing 2023-04-11 19:18:26 -04:00
Luke Parker
2cfee536f6 Define all coordinator transaction types 2023-04-11 19:04:53 -04:00
Luke Parker
90f67b5e54 Slight merkle improvements 2023-04-11 19:04:27 -04:00
Luke Parker
4d17b922fe Sign the genesis when signing transactions
Prevents replaying across tributaries, which is a risk for BTC/ETH (regarding key gen).
2023-04-11 19:03:52 -04:00
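A minimal sketch of binding the genesis into the signed message; sign_fn stands
in for the actual Schnorr signing and all names are illustrative:

    // The genesis hash domain-separates signatures per tributary: a signature
    // produced for one tributary fails verification on any other.
    fn sign_transaction(
      genesis: [u8; 32],
      tx_hash: [u8; 32],
      sign_fn: impl FnOnce(&[u8]) -> Vec<u8>,
    ) -> Vec<u8> {
      let mut msg = Vec::with_capacity(64);
      msg.extend_from_slice(&genesis);
      msg.extend_from_slice(&tx_hash);
      sign_fn(&msg)
    }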
Luke Parker
7488d23e0d Add basic transaction/block code to Tributary 2023-04-11 13:42:18 -04:00
Luke Parker
a290b74805 Tweak processor's slice handling due to a CI failure
The prior code worked without issue for me locally, but apparently it didn't
always.
2023-04-11 10:37:50 -04:00
Luke Parker
61757d5e19 Remove the substrate feature from tendermint 2023-04-11 10:34:41 -04:00
Luke Parker
09f8ac37c4 Create a folder for tributary, the micro-blockchain
Moves tendermint again, this time under tributary.
2023-04-11 10:18:31 -04:00
Luke Parker
c46cf47736 Move tendermint under the coordinator
We're planning to use it in the micro-blockchain the coordinator will run.
2023-04-11 09:28:32 -04:00
Luke Parker
defce32ff1 Remove k256/p256 git revision patch
New releases of k256 and p256 make it no longer necessary.
2023-04-11 09:23:57 -04:00
Luke Parker
de52c4db7f Add empty coordinator 2023-04-11 09:21:35 -04:00
Luke Parker
d74cbe2cce Have the Scanner assign batch IDs 2023-04-11 08:47:15 -04:00
Luke Parker
caa695511b Improve log statements in processor 2023-04-11 06:06:17 -04:00
Luke Parker
7538c10159 Update processor README 2023-04-11 05:53:19 -04:00
Luke Parker
90f2b03595 Finish routing eventualities
Also corrects some misc TODOs and tidies up some log statements.
2023-04-11 05:49:27 -04:00
Luke Parker
9e78c8fc9e Test the processor's Substrate signer 2023-04-10 12:48:48 -04:00
Luke Parker
d323fc8b7b Handle signing batches in the processor
Duplicates the existing signer for one tailored to batch signing.
2023-04-10 11:11:46 -04:00
Luke Parker
82c34dcc76 Implement a FROST variant of Schnorrkel (#274)
* Minor lint

* Update frost-schnorrkel to the latest modular-frost

* Tidy up the schnorrkel library
2023-04-10 06:05:17 -04:00
Luke Parker
bc19975a8a Update Bitcoin confirmations from 3 to 6
While Bitcoin practically doesn't have long re-orgs, it is possible for a
single miner to build a long chain. Recently, a miner found 5 blocks in a row,
which would be enough to re-org a transaction Serai considered finalized.
2023-04-10 02:51:44 -04:00
Luke Parker
b9f38fb354 Update processor message flow around the new SignedBatch flow 2023-04-10 02:51:36 -04:00
Luke Parker
ccec529cee cargo update
Removes yanked crate.
2023-04-10 00:27:17 -04:00
Luke Parker
1c31ca7187 Correct deny.toml 2023-04-09 02:34:31 -04:00
Luke Parker
f6206b60ec Update to bitcoin 0.30
Also performs a general update with a variety of upgraded Substrate depends.
2023-04-09 02:31:13 -04:00
Luke Parker
96525330c2 cargo update 2023-04-08 04:44:28 -04:00
Luke Parker
7abc8f19cd Move substrate/serai/* to substrate/* 2023-04-08 03:01:14 -04:00
GitHub Actions
bd06b95c05 Update nightly 2023-04-01 05:44:42 -04:00
Luke Parker
648d237df5 Finish updating to the latest Rust/handle broken cargo update 2023-04-01 05:44:18 -04:00
Luke Parker
3f4bab7f7b cargo update
Resolves yanked crate.
2023-03-31 23:15:32 -04:00
Luke Parker
426346dd5a Have the processor DKG output a Ristretto key
This will be used to sign InInstructions.
2023-03-31 10:15:07 -04:00
Luke Parker
a4f64e2651 Expand work done in provide_batch 2023-03-31 08:13:45 -04:00
Luke Parker
6fa405a728 Update Monero README 2023-03-31 07:02:57 -04:00
Luke Parker
ae4e98c052 Verify Batch signatures
Starts further fleshing out the Serai client tests with common utils.
2023-03-31 06:34:09 -04:00
Luke Parker
30b8636641 Update to the latest Substrate commit
Enables building with only the stable toolchain. The nightly toolchain is still
used for clippy in order to access additional checks.
2023-03-31 02:34:52 -04:00
Luke Parker
1610383649 Test validator set's voting on a key
Needed for the in-instructions pallet to verify in-instructions are
appropriately signed and continue developing that.

Fixes a bug in the validator-sets pallet, moves several items from the pallet
to primitives.
2023-03-30 20:32:05 -04:00
Luke Parker
9615caf3bb Move validator-sets from Key to (RistrettoPublic, Key)
Part of #241.
2023-03-30 20:32:05 -04:00
Luke Parker
8a70416fd0 Attempt to fix the Bitcoin CI's cache statement 2023-03-28 07:33:11 -04:00
Luke Parker
4f28a38ce1 Implement #212 2023-03-28 05:34:21 -04:00
Luke Parker
47be373eb0 Resolve #268 by adding a Zeroize to DigestTranscript which writes a full block
This is a 'better-than-nothing' attempt to invalidate its state.

Also replaces black_box features with usage of the rustversion crate.
2023-03-28 04:43:10 -04:00
Luke Parker
79aff5d4c8 ff 0.13 (#269)
* Partial move to ff 0.13

It turns out the newly released k256 0.12 isn't on ff 0.13, preventing further
work at this time.

* Update all crates to work on ff 0.13

The provided curves still need to be expanded to fit the new API.

* Finish adding dalek-ff-group ff 0.13 constants

* Correct FieldElement::product definition

Also stops exporting macros.

* Test most new parts of ff 0.13

* Additionally test ff-group-tests with BLS12-381 and the pasta curves

We only tested curves from RustCrypto. Now we test a curve offered by zk-crypto,
the group behind ff/group, and the pasta curves, which are by Zcash (though
Zcash developers are also behind zk-crypto).

* Finish Ed448

Fully specifies all constants, passes all tests in ff-group-tests, and finishes moving to ff-0.13.

* Add RustCrypto/elliptic-curves to allowed git repos

Needed due to k256/p256 incorrectly defining product.

* Finish writing ff 0.13 tests

* Add additional comments to dalek

* Further comments

* Update ethereum-serai to ff 0.13
2023-03-28 04:38:01 -04:00
Luke Parker
a9f6300e86 Remove unused dependencies from runtime/node 2023-03-26 23:10:16 -04:00
Luke Parker
ff70cbb223 Remove the genesis volume added for Tendermint 2023-03-26 20:20:32 -04:00
Luke Parker
17818c2a02 Default to the wasm executor
https://github.com/paritytech/substrate/issues/10579 has the rationale for this.
2023-03-26 18:57:49 -04:00
Luke Parker
aea6ac104f Remove Tendermint for GRANDPA
Updates to polkadot-v0.9.40, with a variety of dependency updates accordingly.
Substrate thankfully now uses k256 0.13, paving the way for #256. We couldn't
upgrade to polkadot-v0.9.40 without this due to polkadot-v0.9.40 having
fundamental changes to syncing. While we could've updated tendermint, it's not
worth the continued development effort given its inability to work with
multiple validator sets.

Purges sc-tendermint. Keeps tendermint-machine for #163.

Closes #137, #148, #157, #171. #96 and #99 should be re-scoped/clarified. #134
and #159 also should be clarified. #169 is also no longer a priority since
we're only considering temporal deployments of tendermint. #170 also isn't
since we're looking at effectively sharded validator sets, so there should
be no singular large set needing high performance.
2023-03-26 16:49:18 -04:00
Luke Parker
534e1bb11d Fix Monero's Extra::fee_weight and handling of data limits 2023-03-26 03:43:51 -04:00
Luke Parker
c182b804bc Move in instructions from inherent transactions to unsigned transactions
The original intent was to use inherent transactions to prevent needing to vote
on-chain, which would spam the chain with worthless votes. Inherent
transactions, and our Tendermint library, would use the BFT process's voting
to also vote on all included transactions. This perfectly collapses integrity
voting, creating *no additional on-chain costs*.

Unfortunately, this led to issues such as #6, along with questions of validator
scalability when all validators are expected to participate in consensus (in
order to vote on if the included instructions are valid). This has been
summarized in #241.

With this change, we can remove Tendermint from Substrate. This greatly
decreases our complexity. While I'm unhappy with the amount of time spent on
it, just to reach this conclusion, thankfully tendermint-machine itself is
still usable for #163. This also has reached a tipping point recently as the
polkadot-v0.9.40 branch of substrate changed how syncing works, requiring
further changes to sc-tendermint. These have no value if we're just going to
get rid of it later, due to fundamental design issues, yet I would like to
keep Substrate updated.

This should be followed by moving back to GRANDPA, enabling closing most open
Tendermint issues.

Please note the current in-instructions-pallet does not actually verify the
included signature yet. It's marked TODO, despite this being critical.
2023-03-26 02:58:04 -04:00
Luke Parker
9157f8d0a0 Update processor/correct prior commit 2023-03-25 04:06:25 -04:00
Luke Parker
839734354a Update Getting Started 2023-03-25 01:44:07 -04:00
Luke Parker
d954e67238 Ensure InInstruction data is properly limited
Bitcoin didn't check, assuming data was <= 80 bytes thanks to being in
OP_RETURN. An additional global check has been added.
2023-03-25 01:36:28 -04:00
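A minimal sketch of such a global bound; MAX_DATA_LEN and its value are
assumptions for illustration, not the repository's actual constant:

    // Hypothetical global cap, enforced regardless of what the coin's own
    // encoding (e.g. Bitcoin's OP_RETURN, <= 80 bytes) happens to permit.
    const MAX_DATA_LEN: usize = 512; // assumed value

    fn validate_data(data: &[u8]) -> Result<(), &'static str> {
      if data.len() > MAX_DATA_LEN {
        return Err("InInstruction data exceeds the global limit");
      }
      Ok(())
    }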
Luke Parker
6a981dae6e Make Validator Set Network a first-class property
There already should only be one validator set operating per network. This
formalizes that. Previously, validator sets were able to operate over multiple
networks. That is no longer possible.

This formalization increases validator set flexibility while also allowing the
ability to formalize the definition of tokens (which is necessary to define a
gas asset).
2023-03-25 01:30:53 -04:00
Luke Parker
397d79040c Update monero-serai to limit the size of TX extra 2023-03-25 01:26:42 -04:00
Luke Parker
293731f739 cargo update 2023-03-25 00:45:33 -04:00
Luke Parker
8447021ba1 Add a way to check if blocks completed eventualities 2023-03-22 22:45:41 -04:00
Luke Parker
11a0803ea5 Make the bitcoin Algorithm test a unit test 2023-03-21 18:50:23 -04:00
Luke Parker
d58a7b0ebf cargo fmt 2023-03-20 20:43:52 -04:00
Luke Parker
952cf280c2 Bump crate versions 2023-03-20 20:34:41 -04:00
Luke Parker
8d4d630e0f Fully document crypto/ 2023-03-20 20:10:00 -04:00
Luke Parker
e1bb2c191b Correct audit file upload
Git had replaced the 'line endings'.
2023-03-20 17:35:45 -04:00
Luke Parker
df2bb79a53 Clarify further changes have not been audited 2023-03-20 16:24:04 -04:00
Luke Parker
515587406f Finish testing bitcoin-serai 2023-03-20 05:47:07 -04:00
Luke Parker
7fc8630d39 Test bitcoin-serai
Also resolves a few rough edges.
2023-03-20 04:46:27 -04:00
Luke Parker
6a2a353b91 cargo fmt 2023-03-20 01:12:09 -04:00
Luke Parker
66eaf6ab61 Remove the code for the CI to spawn a Serai node
The serai-client test runner controls the node on its end.

Also bumps the Monero version.
2023-03-20 01:07:43 -04:00
Luke Parker
597122b2e0 Add a Scanner to bitcoin-serai
Moves the processor to it. This ends up as a net-neutral LoC change to the
processor, unfortunately, yet this makes bitcoin-serai safer/easier to use, and
increases the processor's usage of bitcoin-serai.

Also re-organizes bitcoin-serai a bit.
2023-03-20 01:03:39 -04:00
Luke Parker
0aa6b561b7 Bitcoin SpendableOutput::new 2023-03-19 23:22:56 -04:00
Luke Parker
60ca3a9599 Have SeraiError::RpcError include the subxt Error
Part of debugging #262.
2023-03-19 22:02:30 -04:00
Luke Parker
59891594aa Fix processor's determination of protocol to support integration tests
I'm really unhappy with a cfg(test) within the codebase. The double checking of
it makes it tolerable though, especially when compared to dropping these tests.
2023-03-19 21:05:13 -04:00
Luke Parker
2fdf8f8285 Remove unused import 2023-03-17 23:59:46 -04:00
Luke Parker
55e0253225 Again tweak timeouts for #260 2023-03-17 23:45:46 -04:00
Luke Parker
918cce3494 Add a proper error to Bitcoin's SignableTransaction::new
Also adds documentation to various parts of bitcoin.
2023-03-17 23:43:32 -04:00
Luke Parker
6ac570365f Fix #260
The issue was the 10s timeouts were too fast for the CI runner, since the
Scanner only polls every five seconds (already cutting into the window).
2023-03-17 21:38:09 -04:00
Luke Parker
0525ba2f62 Document Bitcoin RPC and make it more robust 2023-03-17 21:25:38 -04:00
Luke Parker
9b47ad56bb Create a dedicated Algorithm for Bitcoin Schnorr 2023-03-17 20:47:42 -04:00
Luke Parker
5e771b1bea Run bitcoin as daemon 2023-03-17 15:32:05 -04:00
Luke Parker
9952c67d98 Update crypto-bigint to 0.5 2023-03-17 15:31:04 -04:00
Luke Parker
f2218b4d4e Attempt to fix Bitcoin node CI 2023-03-17 13:37:18 -04:00
Luke Parker
780b79c3d8 Properly run processor Monero tests
Since it wasn't being compiled with the Monero feature, it wasn't running the
Monero tests.
2023-03-17 13:33:50 -04:00
Luke Parker
ba82dac18c Processor (#259)
* Initial work on a message box

* Finish message-box (untested)

* Expand documentation

* Embed the recipient in the signature challenge

Prevents a message from A -> B from being read as from A -> C.

* Update documentation by bifurcating sender/receiver

* Panic on receiving an invalid signature

If we've received an invalid signature in an authenticated system, a 
service is malicious, critically faulty (equivalent to malicious), or 
the message layer has been compromised (or is otherwise critically 
faulty).

Please note a receiver who handles a message they shouldn't will trigger 
this. That falls under being critically faulty.

* Documentation and helper methods

SecureMessage::new and SecureMessage::serialize.

Secure Debug for MessageBox.

* Have SecureMessage not be serialized by default

Allows passing around in-memory, if desired, and moves the error from 
decrypt to new (which performs deserialization).

Decrypt no longer has an error since it panics if given an invalid 
signature, due to this being intranet code.

* Explain and improve nonce handling

Includes a missing zeroize call.

* Rebase to latest develop

Updates to transcript 0.2.0.

* Add a test for the MessageBox

* Export PrivateKey and PublicKey

* Also test serialization

* Add a key_gen binary to message_box

* Have SecureMessage support Serde

* Add encrypt_to_bytes and decrypt_from_bytes

* Support String ser via base64

* Rename encrypt/decrypt to encrypt_bytes/decrypt_to_bytes

* Directly operate with values supporting Borsh

* Use bincode instead of Borsh

By staying inside of serde, we'll support many more structs. While 
bincode isn't canonical, we don't need canonicity on an authenticated, 
internal system.

* Turn PrivateKey, PublicKey into structs

Uses Zeroizing for the PrivateKey per #150.

* from_string functions intended for loading from an env

* Use &str for PublicKey from_string (now from_str)

The PrivateKey takes the String to take ownership of its memory and 
zeroize it. That isn't needed with PublicKeys.

* Finish updating from develop

* Resolve warning

* Use ZeroizingAlloc on the key_gen binary

* Move message-box from crypto/ to common/

* Move key serialization functions to ser

* add/remove functions in MessageBox

* Implement Hash on dalek_ff_group Points

* Make MessageBox generic to its key

Exposes a &'static str variant for internal use and a RistrettoPoint 
variant for external use.

* Add Private to_string as deprecated

Stub before more competent tooling is deployed.

* Private to_public

* Test both Internal and External MessageBox, only use PublicKey in the pub API

* Remove panics on invalid signatures

Leftover from when this was solely internal which is now unsafe.

* Chicken scratch a Scanner task

* Add a write function to the DKG library

Enables writing directly to a file.

Also modifies serialize to return Zeroizing<Vec<u8>> instead of just Vec<u8>.

* Make dkg::encryption pub

* Remove encryption from MessageBox

* Use a 64-bit block number in Substrate

We use a 64-bit block number in general since a u32 only lasts roughly 136 years
(with a 1-second block time). As some chains even push below the 1-second
threshold, especially ones based on DAG consensus, this becomes potentially as
low as 60 years.

While that should still be plenty, it's not worth wondering/debating. Since
Serai uses 64-bit block numbers elsewhere, this ensures consistency.

* Misc crypto lints

* Get the scanner scratch to compile

* Initial scanner test

* First few lines of scheduler

* Further work on scheduler, solidify API

* Define Scheduler TX format

* Branch creation algorithm

* Document when the branch algorithm isn't perfect

* Only scan confirmed blocks

* Document Coin

* Remove Canonical/ChainNumber from processor

The processor should be abstracted from canonical numbers thanks to the
coordinator, making this unnecessary.

* Add README documenting processor flow

* Use Zeroize on substrate primitives

* Define messages from/to the processor

* Correct over-specified versioning

* Correct build re: in_instructions::primitives

* Debug/some serde in crypto/

* Use a struct for ValidatorSetInstance

* Add a processor key_gen task

Redos DB handling code.

* Replace trait + impl with wrapper struct

* Add a key confirmation flow to the key gen task

* Document concerns on key_gen

* Start on a signer task

* Add Send to FROST traits

* Move processor lib.rs to main.rs

Adds a dummy main to reduce clippy dead_code warnings.

* Further flesh out main.rs

* Move the DB trait to AsRef<[u8]>

* Signer task

* Remove a panic in bitcoin when there's insufficient funds

Unchecked underflow.

* Have Monero's mine_block mine one block, not 10

It was initially a nicety to deal with the 10 block lock. C::CONFIRMATIONS
should be used for that instead.

* Test signer

* Replace channel expects with log statements

The expects weren't problematic and had nicer code. They just clutter test
output.

* Remove the old wallet file

It predates the coordinator design and shouldn't be used.

* Rename tests/scan.rs to tests/scanner.rs

* Add a wallet test

Complements the recently removed wallet file by adding a test for the scanner,
scheduler, and signer together.

* Work on a run function

Triggers a clippy ICE.

* Resolve clippy ICE

The issue was the non-fully specified lambda in signer.

* Add KeyGenEvent and KeyGenOrder

Needed so we get KeyConfirmed messages from the key gen task.

While we could've read the CoordinatorMessage to see that, routing through the
key gen tasks ensures we only handle it once it's been successfully saved to
disk.

* Expand scanner test

* Clarify processor documentation

* Have the Scanner load keys on boot/save outputs to disk

* Use Vec<u8> for Block ID

Much more flexible.

* Panic if we see the same output multiple times

* Have the Scanner DB mark itself as corrupt when doing a multi-put

This REALLY should be a TX. Since we don't have a TX API right now, this at
least offers detection.

* Have DST'd DB keys accept AsRef<[u8]>

* Restore polling all signers

Writes a custom future to do so.

Also loads signers on boot using what the scanner claims are active keys.

* Schedule OutInstructions

Adds a data field to Payment.

Also cleans some dead code.

* Panic if we create an invalid transaction

Saves the TX once it's successfully signed so if we do panic, we have a copy.

* Route coordinator messages to their respective signer

Requires adding key to the SignId.

* Send SignTransaction orders for all plans

* Add a timer to retry sign_plans when prepare_send fails

* Minor fmt'ing

* Basic Fee API

* Move the change key into Plan

* Properly route activation_number

* Remove ScannerEvent::Block

It's not used under current designs.

* Nicen logs

* Add utilities to get a block's number

* Have main issue AckBlock

Also has a few misc lints.

* Parse instructions out of outputs

* Tweak TODOs and remove an unwrap

* Update Bitcoin max input/output quantity

* Only read one piece of data from Monero

Due to output randomization, it's infeasible.

* Embed plan IDs into the TXs they create

We need to stop attempting signing if we've already signed a protocol. Ideally,
any one of the participating signers should be able to provide a proof the TX
was successfully signed. We can't just run a second signing protocol though as
a single malicious signer could complete the TX signature, and publish it,
yet not complete the secondary signature.

The TX itself has to be sufficient to show that the TX matches the plan. This
is done by embedding the ID, so matching addresses/amounts plans are
distinguished, and by allowing verification a TX actually matches a set of
addresses/amounts.

For Monero, this will need augmenting with the ephemeral keys (or usage of a
static seed for them).

* Don't use OP_RETURN to encode the plan ID on Bitcoin

We can use the inputs to distinguish identical-output plans without issue.

* Update OP_RETURN data access

It's not required to be the last output.

* Add Eventualities to Monero (a sketch of the lookup flow follows this commit)

An Eventuality is an effective equivalent to a SignableTransaction. That is
declared not by the inputs it spends, yet the outputs it creates.
Eventualities are also bound to a 32-byte RNG seed, enabling usage of a
hash-based identifier in a SignableTransaction, allowing multiple
SignableTransactions with the same output set to have different Eventualities.

In order to prevent triggering the burning bug, the RNG seed is hashed with
the planned-to-be-used inputs' output keys. While this does bind to them, it's
only loosely bound. The TX actually created may use different inputs entirely
if a forgery is crafted (which requires no brute forcing).

Binding to the key images would provide a strong binding, yet would require
knowing the key images, which requires active communication with the spend
key.

The purpose of this is so a multisig can identify if a Transaction the entire
group planned has been executed by a subset of the group or not. Once a plan
is created, it can have an Eventuality made. The Eventuality's extra is able
to be inserted into a HashMap, so all new on-chain transactions can be
trivially checked as potential candidates. Once a potential candidate is found,
a check involving ECC ops can be performed.

While this is arguably a DoS vector, the underlying Monero blockchain would
need to be spammed with transactions to trigger it. Accordingly, it becomes
a Monero blockchain DoS vector, when this code is written on the premise
of the Monero blockchain functioning. Accordingly, it is considered handled.

If a forgery does match, it must have created the exact same outputs the
multisig would've. Accordingly, it's argued the multisig shouldn't mind.

This entire suite of code is only necessary due to the lack of outgoing
view keys, yet it's able to avoid an interactive protocol to communicate
key images on every single received output.

While this could be locked to the multisig feature, there's no practical
benefit to doing so.

* Add support for encoding Monero address to instructions

* Move Serai's Monero address encoding into serai-client

serai-client is meant to be a single library enabling using Serai. While it was
originally written as an RPC client for Serai, apps actually using Serai will
primarily be sending transactions on connected networks. Sending those
transactions require proper {In, Out}Instructions, including proper address
encoding.

Not only has address encoding been moved, yet the subxt client is now behind
a feature. coin integrations have their own features, which are on by default.
primitives are always exposed.

* Reorganize file layout a bit, add feature flags to processor

* Tidy up ETH Dockerfile

* Add Bitcoin address encoding

* Move Bitcoin::Address to serai-client's

* Comment where tweaking needs to happen

* Add an API to check if a plan was completed in a specific TX

This allows any participating signer to submit the TX ID to prevent further
signing attempts.

Also performs some API cleanup.

* Minimize FROST dependencies

* Use a seeded RNG for key gen

* Tweak keys from Key gen

* Test proper usage of Branch/Change addresses

Adds a more descriptive error to an error case in decoys, and pads Monero
payments as needed.

* Also test spending the change output

* Add queued_plans to the Scheduler

queued_plans is for payments to be issued when an amount appears, yet the
amount is currently pre-fee. Once the output is actually created, the
Scheduler should be notified of the amount it was created with, moving from
queued_plans to plans under the actual amount.

Also tightens debug_asserts to asserts for invariants whose violations are at
risk of being exclusive to prod.

* Add missing tweak_keys call

* Correct decoy selection height handling

* Add a few log statements to the scheduler

* Simplify test's get_block_number

* Simplify, while making more robust, branch address handling in Scheduler

* Have fees deducted from payments

Corrects Monero's handling of fees when there's no change address.

Adds a DUST variable, as needed due to 1_00_000_000 not being enough to pay
its fee on Monero.

* Add comment to Monero

* Consolidate BTC/XMR prepare_send code

These aren't fully consolidated. We'd need a SignableTransaction trait for
that. This is a lot cleaner though.

* Ban integrated addresses

The reasoning why is accordingly documented.

* Tidy TODOs/dust handling

* Update README TODO

* Use a deterministic protocol version in Monero

* Test rebuilt KeyGen machines function as expected

* Use a more robust KeyGen entropy system

* Add DB TXNs

Also load entropy from env

* Add a loop for processing messages from substrate

Allows detecting if we're behind, and if so, waiting to handle the message

* Set Monero MAX_INPUTS properly

The previous number was based on an old hard fork. With the ring size having
increased, transactions have since got larger.

* Distinguish TODOs into TODO and TODO2s

TODO2s are for after protonet

* Zeroize secret share repr in ThresholdCore write

* Work on Eventualities

Adds serialization and stops signing when an eventuality is proven.

* Use a more robust DB key schema

* Update to {k, p}256 0.12

* cargo +nightly clippy

* cargo update

* Slight message-box tweaks

* Update to recent Monero merge

* Add a Coordinator trait for communication with coordinator

* Remove KeyGenHandle for just KeyGen

While KeyGen previously accepted instructions over a channel, this breaks the
ack flow needed for coordinator communication. Now, KeyGen is the direct object
with a handle() function for messages.

Thankfully, this ended up being rather trivial for KeyGen as it has no
background tasks.

* Add a handle function to Signer

Enables determining when it's finished handling a CoordinatorMessage and
therefore creating an acknowledgement.

* Save transactions used to complete eventualities

* Use a more intelligent sleep in the signer

* Emit SignedTransaction with the first ID *we can still get from our node*

* Move Substrate message handling into the new coordinator recv loop

* Add handle function to Scanner

* Remove the plans timer

Enables ensuring the ordering of the handling of plans.

* Remove the outputs function which panicked if a precondition wasn't met

The new API only returns outputs upon satisfaction of the precondition.

* Convert SignerOrder::SignTransaction to a function

* Remove the key_gen object from sign_plans

* Refactor out get_fee/prepare_send into dedicated functions

* Save plans being signed to the DB

* Reload transactions being signed on boot

* Stop reloading TXs being signed (and report it to peers)

* Remove message-box from the processor branch

We don't use it here yet.

* cargo +nightly fmt

* Move back common/zalloc

* Update subxt to 0.27

* Zeroize ^1.5, not 1

* Update GitHub workflow

* Remove usage of SignId in completed
2023-03-16 22:59:40 -04:00
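For the "Add Eventualities to Monero" commit above, a minimal sketch of the
lookup-then-verify flow it describes, with every type and function name an
assumption for illustration rather than monero-serai's actual API:

    use std::collections::HashMap;

    // Illustrative stand-ins for the real types.
    struct Transaction;
    struct Eventuality { /* expected outputs, RNG seed, ... */ }

    // The expensive comparison involving ECC ops, only run on candidates.
    fn matches(_eventuality: &Eventuality, _tx: &Transaction) -> bool {
      unimplemented!("ECC-based output check")
    }

    struct EventualitiesTracker {
      // Keyed by the planned extra, so every new on-chain transaction is a
      // cheap HashMap probe before any ECC work is performed.
      eventualities: HashMap<Vec<u8>, Eventuality>,
    }

    impl EventualitiesTracker {
      fn check(&self, tx_extra: &[u8], tx: &Transaction) -> bool {
        self.eventualities.get(tx_extra).map_or(false, |e| matches(e, tx))
      }
    }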
Luke Parker
f374cd7398 Update to ethers 2 2023-03-16 20:16:57 -04:00
Luke Parker
ab1e5c372e Don't use a relative link to link to the audit 2023-03-16 19:49:36 -04:00
Luke Parker
67da08705e cargo update 2023-03-16 19:37:32 -04:00
Luke Parker
0d4b66dc2a Bump package versions 2023-03-16 19:29:22 -04:00
Luke Parker
4ed819fc7d Document crypto crates with audit notices 2023-03-16 19:25:01 -04:00
Luke Parker
74924095e1 Add Cypher Stack's audit of /crypto
The current /crypto folder, as of this commit, is identical, except for the
years in the copyright statements. GitHub's CI also passed for the previous
commit, ensuring the repo's integrity during that commit. This now establishes
trust for /crypto, which will be used to update documentation and create
releases.
2023-03-16 18:38:31 -04:00
Luke Parker
d2c1592c61 Resolve merging crypto-{audit, tweaks} and use the proper transcript in Bitcoin 2023-03-16 16:59:20 -04:00
Luke Parker
64924835ad Merge pull request #254 from serai-dex/nightly-2023-03
March 2023 - Rust Nightly Update
2023-03-16 16:45:19 -04:00
Luke Parker
37e4f2cc50 Merge pull request #255 from serai-dex/crypto-tweaks
Crypto audit/tweaks
2023-03-16 16:43:27 -04:00
Luke Parker
caf37527eb Merge branch 'develop' into crypto-tweaks 2023-03-16 16:43:04 -04:00
Luke Parker
669d2dbffc 3.10.2 Explicitly test RecommendedTranscript 2023-03-15 19:55:07 -04:00
Luke Parker
f4e2da2767 Move where we check the Monero node's protocol
The genesis block has a version of 1, so immediately checking (before new
blocks are added) will cause failures.
2023-03-14 01:54:08 -04:00
Luke Parker
48078d0b4b Remove Protocol::Unsupported 2023-03-13 08:03:13 -04:00
Luke Parker
0e0243639e Resolve clippy error
This was resolved on the processor branch yet not on develop.
2023-03-13 07:57:45 -04:00
Luke Parker
14203bbb46 Use an async Mutex for the Monero distribution
Enables safe async/thread-safe usage.
2023-03-12 04:13:43 -04:00
Luke Parker
f5fa6f020d Remove note about adding in a DB handle
It'd arguably be safer yet it isn't worth the API complexity.
2023-03-12 03:55:17 -04:00
Luke Parker
41a285ddfa Add a TX size check to Monero
This isn't perfect yet should ensure the eventual TX is less than 100k bytes.
2023-03-12 03:54:30 -04:00
Luke Parker
36034c2f72 Move ecdh derivation up to prevent Scalar::one() * ecdh 2023-03-11 10:51:40 -05:00
Luke Parker
5e62072a0f Fix #237 2023-03-11 10:31:58 -05:00
Luke Parker
e56495d624 Prefix arbitrary data with 127
Since we cannot expect/guarantee a payment ID will be included, the previous
position-based code for determining arbitrary data wasn't sufficient.
2023-03-11 05:47:25 -05:00
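A minimal sketch, assuming extra has already been parsed into raw fields; the
parsing and return shape are illustrative, while the 127 prefix is from the
commit:

    // Arbitrary data is now found by its 127 prefix rather than its position,
    // as a payment ID field can't be relied upon to exist.
    fn arbitrary_data(extra_fields: &[Vec<u8>]) -> Option<&[u8]> {
      extra_fields
        .iter()
        .find(|field| field.first() == Some(&127))
        .map(|field| &field[1 ..])
    }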
Luke Parker
71dbc798b5 Fix #251 2023-03-11 05:23:38 -05:00
Luke Parker
4335baa43f cargo update 2023-03-11 04:49:05 -05:00
akildemir
77de28f77a add monero seed support (#252)
* add monero seed support

* fix some of the pr comments

* remove languages module and unnecessary error returns

* Clean classic seed impl

Fixes a few issues regarding Zeroize usage/API safety. Mainly a cleanup.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-03-10 14:16:00 -05:00
Luke Parker
ad470bc969 #242 Expand usage of black_box/zeroize
This commit greatly expands the usage of black_box/zeroize on bits, as it
originally should have. It is likely overkill, leading to less efficient
code generation, yet does its best to be comprehensive where comprehensiveness
is extremely annoying to achieve.

In the future, this usage of black_box may be desirable to move to its own
crate.

Credit to @AaronFeickert for identifying the original commit was incomplete.
2023-03-10 06:27:44 -05:00
Luke Parker
62dfc63532 Fix Ethereum, again 2023-03-07 06:25:21 -05:00
Luke Parker
1e201562df Correct doc comments re: HTML tags 2023-03-07 05:34:29 -05:00
Luke Parker
11114dcb74 Further fix the clippy lint controls for Hash on dalek_ff_group::*Point 2023-03-07 05:31:02 -05:00
Luke Parker
837c776297 Make Schnorr modular to its transcript 2023-03-07 05:30:21 -05:00
Luke Parker
6bff3866ea Correct Ethereum 2023-03-07 05:25:25 -05:00
Luke Parker
b0730e3fdf Fix last commit again 2023-03-07 04:47:06 -05:00
Luke Parker
2e78d61752 Fix last commit 2023-03-07 04:39:15 -05:00
Luke Parker
0b8a4ab3d0 Use a backwards compatible clippy lint for impl Hash 2023-03-07 04:26:19 -05:00
Luke Parker
c358090f16 Use black_box to help obscure the dalek-ff-group bool -> Choice conversion
I have no idea if this will actually help, yet it can't hurt.

Feature gated due to MSRV requirements.

Fixes #242.
2023-03-07 04:23:41 -05:00
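A minimal sketch of the conversion, assuming the subtle crate's Choice (which
implements From<u8>) and core::hint::black_box:

    use core::hint::black_box;
    use subtle::Choice;

    // black_box discourages the compiler from branching on the bool while
    // converting it to a constant-time Choice; as the commit notes, this is
    // best-effort rather than a guarantee.
    fn bool_to_choice(b: bool) -> Choice {
      Choice::from(black_box(b as u8))
    }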
Luke Parker
adb5f34fda Merge branch 'crypto-audit' into crypto-tweaks 2023-03-07 04:08:34 -05:00
Luke Parker
ed056cceaf 3.5.2 Test non-canonical from_repr
Unfortunately, G::from_bytes doesn't require canonicity so that still can't
be properly tested for. While we could try to detect SEC1, and write tests
on that, there's not a suitably stable/wide enough solution to be worth it.
2023-03-07 04:05:56 -05:00
Luke Parker
2bad06e5d9 Fix #200 2023-03-07 03:55:58 -05:00
Luke Parker
5a9a42f025 Use variable time for verifying PoKs in the DKG 2023-03-07 03:48:16 -05:00
Luke Parker
7d12c785b7 Correct error comment in ff-group-tests 2023-03-07 03:46:55 -05:00
Luke Parker
e08adcc1ac Have Ciphersuite re-export Group 2023-03-07 03:46:16 -05:00
Luke Parker
af5702fccd Make encryption public
It's necessary in order to read encryption messages over the network.
2023-03-07 03:37:30 -05:00
Luke Parker
5037962d3c Rename dkg serialize/deserialize to write/read 2023-03-07 03:37:25 -05:00
Luke Parker
5b26115f81 Add Debug implementations to dkg 2023-03-07 03:26:39 -05:00
Luke Parker
1a99629a4a Add feature-gated serde support for Participant/ThresholdParams
These don't have secret data yet sometimes have value to be communicated.
2023-03-07 03:13:55 -05:00
Luke Parker
b1ea2dfba6 Add support for hashing (as in HashMap) dalek points 2023-03-07 03:10:55 -05:00
Luke Parker
0e8c55e050 Update and remove unused dependencies 2023-03-07 03:06:46 -05:00
Luke Parker
d36fc026dd Remove unused generic in frost 2023-03-07 02:40:09 -05:00
Luke Parker
0bbf511062 Add 'static/Send/Sync to specific traits in crypto
These were proven necessary by our real world usage.
2023-03-07 02:38:47 -05:00
Luke Parker
2729882d65 Update to {k, p}256 0.12 2023-03-07 02:34:10 -05:00
Luke Parker
c37cc0b4e2 Update Zeroize pin to ^1.5 from 1.5 2023-03-07 02:29:59 -05:00
Luke Parker
a053454ae4 3.9.4 Add tests to the transcript crate 2023-03-07 02:25:10 -05:00
Luke Parker
20a33079f8 3.9.3 Document Merlin domain_separate conflict potential and add an assert 2023-03-06 20:16:57 -05:00
Luke Parker
8307d4f6c8 cargo fmt 2023-03-06 08:23:14 -05:00
Luke Parker
db1fefe7c1 Update tendermint/node to latest substrate 2023-03-06 08:20:01 -05:00
Luke Parker
4a81640ab8 Update runtime to latest substrate 2023-03-06 08:14:22 -05:00
Luke Parker
943438628d cargo update 2023-03-06 07:39:43 -05:00
Luke Parker
7efedb9a91 3.9.1 Also correct invalid doc comment 2023-03-06 07:16:04 -05:00
Luke Parker
79124b9a33 3.9.2 Better document rng_seed is allowed to conflict with challenge 2023-03-02 11:19:26 -05:00
Luke Parker
6fec95b1a7 3.7.2 Remove code randomizing which side odd elements end up on
This could still be gamed. For [1, 2, 3], the options were ([1], [2, 3]) or
([1, 2], [3]). This means 2 would always have the maximum round count, and
thus this is still game-able. There's no point to keeping its complexity
accordingly when the algorithm is as efficient as it is.

While a proper random could be used to satisfy 3.7.2, it'd break the
expected determinism.
2023-03-02 11:16:00 -05:00
Luke Parker
2f4f1de488 3.9.1 Fix SecureDigest trait bound 2023-03-02 10:57:22 -05:00
Luke Parker
97374a3e24 3.8.6 Correct transcript to scalar derivation
Replaces the externally passed in Digest with C::H since C is available.
2023-03-02 10:04:18 -05:00
Luke Parker
530671795a 3.8.5 Let the caller pass in a DST for the aggregation hash function
Also moves the aggregator over to Digest. While a bit verbose for this context,
as all appended items were fixed length, its length prefixing is solid and
the API is pleasant. The downside is the additional dependency which is
in tree and quite compact.
2023-03-02 09:29:37 -05:00
Luke Parker
8b7e7b1a1c 3.8.4 Don't additionally transcript keys with challenges 2023-03-02 09:14:36 -05:00
Luke Parker
053f07a281 3.8.3 Document challenge requirements 2023-03-02 09:08:53 -05:00
Luke Parker
08f9287107 3.8.2 Add Ed25519 RFC 8032 test vectors 2023-03-02 09:06:03 -05:00
Luke Parker
35043d2889 3.8.1 Document RFC 8032 compatibility 2023-03-02 08:45:09 -05:00
Luke Parker
1d2ebdca62 3.7.6, 3.7.7 Optimize multiexp implementations 2023-03-02 06:12:02 -05:00
Luke Parker
e5329b42e6 3.7.5 Further document multiexp functions 2023-03-02 05:49:45 -05:00
Luke Parker
8144956f8a Further document get_unlocked_outputs 2023-03-02 05:31:22 -05:00
Luke Parker
408494f8de 3.7.4 Always zeroize multiexp data
This was handled as part of handling 3.7.1
2023-03-02 03:59:49 -05:00
Luke Parker
8661111fc6 3.7.3 Add multiexp tests 2023-03-02 03:58:48 -05:00
Luke Parker
93d5f41917 3.7.2 Randomize which side odd elements end up on during blame 2023-03-02 01:55:08 -05:00
Luke Parker
15d6be1678 3.7.1 Deduplicate flattening/zeroize code
While the prior intent was to avoid zeroizing for vartime verification, which
is assumed to not have any private data, this simplifies the code and promotes
safety.
2023-03-02 01:13:07 -05:00
Luke Parker
2fd5cd8161 3.6.9 Add several tests to the FROST library
Offset signing is now tested. Multi-nonce algorithms are now tested.
Multi-generator nonce algorithms are now tested. More fault cases are now tested
as well.
2023-03-01 08:02:45 -05:00
Luke Parker
c6284b85a4 3.6.8 Simplify offset splitting
This wasn't done previously in order to be 'leaderless'; now, the participant
with the lowest ID has an extra step, yet this is still trivial. There are also
notable performance benefits to not taking the previous dividing approach,
which performed an exp.
2023-03-01 01:06:13 -05:00
Luke Parker
a42a84e1e8 3.6.7 Seal IetfTranscript 2023-03-01 00:42:01 -05:00
Luke Parker
5a3406bb5f 3.6.6 Further document nonces
This was already a largely documented file. While the terminology is
potentially ambiguous, there's not a clearer path perceived at this time.
2023-03-01 00:35:37 -05:00
Luke Parker
62b3036cbd 3.6.5 Document origin of vectors 2023-02-28 23:23:22 -05:00
Luke Parker
6a15b21949 3.6.4 Document inclusion of Ed448 HRAM vector 2023-02-28 23:14:59 -05:00
Luke Parker
39b3452da1 3.6.3 Check commitment encoding
Also extends tests of 3.6.2 and so on.
2023-02-28 21:57:18 -05:00
Luke Parker
7a05466049 3.6.2 Test nonce generation
There's two ways which this could be tested.

1) Preprocess not taking in an arbitrary RNG item, yet the relevant bytes

This would be an unsafe level of refactoring, in my opinion.

2) Test random_nonce and test the passed in RNG eventually ends up at
random_nonce.

This takes the latter route, both verifying random_nonce meets the vectors
and that the FROST machine calls random_nonce properly.
2023-02-28 21:02:12 -05:00
GitHub Actions
6b5097f0c3 Update nightly 2023-03-01 01:50:10 +00:00
Luke Parker
c1435a2045 3.4.a Panic if generators.len() != scalars.len() for MultiDLEqProof 2023-02-28 00:00:29 -05:00
Luke Parker
969a5d94f2 3.6.1 Document rejection of zero nonces 2023-02-24 06:16:22 -05:00
Luke Parker
93f7afec8b 3.5.2 Add more tests to ff-group-tests
The audit recommends checking failure cases for from_bytes,
from_bytes_unchecked, and from_repr. This isn't feasible.

from_bytes is allowed to have non-canonical values. [0xff; 32] may accordingly
be a valid point for non-SEC1-encoded curves.

from_bytes_unchecked doesn't have a defined failure mode, and by name,
unchecked, shouldn't necessarily fail. The audit acknowledges the tests should
test for whatever result is 'appropriate', yet any result which isn't a failure
on a valid element is appropriate.

from_repr must be canonical, yet for a binary field of 2^n where n % 8 == 0, a
[0xff; n / 8] repr would be valid.
2023-02-24 06:03:56 -05:00
Luke Parker
32c18cac84 3.5.1 Document presence of k256/p256 2023-02-24 05:28:31 -05:00
Luke Parker
65376e93e5 3.4.3 Merge the nonce calculation from DLEqProof and MultiDLEqProof into a
single function

3.4.3 actually describes getting rid of DLEqProof for a thin wrapper around
MultiDLEqProof. That can't be done due to DLEqProof not requiring the std
features, enabling Vecs, which MultiDLEqProof relies on.

Merging the verification statement does simplify the code a bit. While merging
the proof creation could also be done, it has much less value due to the
simplicity of proving (nonce * G, scalar * G).
2023-02-24 05:11:01 -05:00
Luke Parker
6104d606be 3.4.2 Document the DLEq lib 2023-02-24 04:37:20 -05:00
Luke Parker
1a6497f37a 3.3.5 Clarify GeneratorPromotion is only for generators, not curves 2023-02-23 07:21:47 -05:00
Luke Parker
4d6a0bbd7d 3.3.4 Use FROST context throughout Encryption 2023-02-23 07:19:55 -05:00
Luke Parker
2d56d24d9c 3.3.3 (cont) Add a dedicated Participant type 2023-02-23 06:50:45 -05:00
Luke Parker
87dea5e455 3.3.3 Add an assert if polynomial is called with 0
This will only be called with 0 if the code fails to do proper screening of its
arguments. If such a flaw is present, the DKG lib is critically broken (as this
function isn't public). If it was allowed to continue executing, it'd reveal
the secret share.
2023-02-23 04:56:05 -05:00
Luke Parker
8bee62609c 3.3.2 Use a static IV and clarify cipher documentation 2023-02-23 04:44:20 -05:00
Luke Parker
d72c4ca4f7 3.3.1 replace try_from with from 2023-02-23 04:29:38 -05:00
Luke Parker
d929a8d96e 3.2.2 Use a hash to point for random points in dfg 2023-02-23 04:29:17 -05:00
Luke Parker
74647b1b52 3.2.3 Don't yield identity in Group::random 2023-02-23 04:14:07 -05:00
Luke Parker
40a6672547 3.2.1, 3.2.4, 3.2.5. Documentation and tests 2023-02-23 04:05:47 -05:00
Luke Parker
686a5ee364 3.1.4 Further document hash_to_F which may collide 2023-02-23 01:09:22 -05:00
Luke Parker
cb4ce5e354 3.1.3 Use a checked_add for the modulus in secp256k1/P-256 2023-02-23 00:57:41 -05:00
Luke Parker
ac0f5e9b2d 3.1.2 Remove oversize DST handling for code present in elliptic-curve already
Adds a test to ensure that elliptic-curve does in fact handle this properly.
2023-02-23 00:52:13 -05:00
Luke Parker
18ac80671f 3.1.1 Document secp256k1/P-256 hash_to_F 2023-02-23 00:37:19 -05:00
Luke Parker
8260ec1a9e Add another TODO 2023-02-15 20:44:53 -05:00
Luke Parker
07f424b484 cargo fmt 2023-02-15 20:39:22 -05:00
Luke Parker
5de8bf3295 Add additional checks/documentation to monero 2023-02-15 01:56:36 -05:00
Luke Parker
82a096e90e Scanner assert on is_torsion_free 2023-02-14 15:49:16 -05:00
github-actions[bot]
c540f52dda Update nightly (#248)
Co-authored-by: GitHub Actions <>
2023-02-01 22:46:53 -05:00
Luke Parker
264174644f Further workaround #247 2023-01-31 10:48:19 -05:00
Luke Parker
df75782e54 Re-license bitcoin-serai to MIT
It's pretty basic code, yet would still be quite pleasant to the larger
community.

Also adds documentation.
2023-01-31 09:45:25 -05:00
Luke Parker
86ad947261 Add missing semicolon 2023-01-31 08:15:00 -05:00
Luke Parker
a3267034b6 Bitcoin External/Branch/Change addresses
Adds support for offset inputs to the Bitcoin lib
2023-01-31 08:10:28 -05:00
VRx
c6bd00e778 Bitcoin processor (#232)
* serai Dockerfile & Makefile fixed

* added new bitcoin mod & bitcoinhram

* couple changes

* added odd&even check for bitcoin signing

* sign message updated

* print_keys commented out

* fixed signing process

* Added new bitcoin library & added most of bitcoin processor logic

* added new crate and refactored the bitcoin coin library

* added signing test function

* moved signature.rs

* publish set to false

* tests moved back to the root

* added new functions to rpc

* added utxo test

* added new rpc methods and refactored bitcoin processor

* added spendable output & fixed errors & added new logic for sighash & opened port 18443 for bitcoin docker

* changed tweak keys

* added tweak_keys & publish transaction and refactored bitcoin processor

* added new structs and fixed problems for testing purposes

* reverted dockerfile back its original

* reverted block generation of bitcoin to 5 seconds

* deleted unnecessary test function

* added new sighash & added new dbg messages & fixed couple errors

* fixed couple issue & removed unused functions

* fix for signing process

* crypto file for bitcoin refactored

* disabled test_send & removed some of the debug logs

* signing implemented & transaction weight calculation added & change address logic added

* refactored tweak_keys

* refactored mine_block & fixed change_address logic

* implemented new traits to bitcoin processor& refactored bitcoin processor

* added new line to tests file

* added new line to bitcoin's wallet.rs

* deleted Cargo.toml from coins folder

* edited bitcoin's Cargo.toml and added LICENSE

* added new line to bitcoin's Cargo.toml

* added spaces

* added spaces

* deleted unnecessary object

* added spaces

* deleted patch numbers

* updated sha256 parameter for message

* updated tag as const

* deleted unnecessary brackets and imports

* updated rpc.rs to 2 space indent

* deleted unnecessary brackets

* deleted unnecessary brackets

* changed it to explicit

* updated to explicit

* deleted unnecessary parsing

* added ? for easy return

* updated imports

* updated height to number

* deleted unnecessary brackets

* updated clsag to sig & to_vec to as_ref

* updated _sig to schnorr_signature

* deleted unnecessary variable

* updated Cargo.toml of processor and bitcoin

* updated imports of bitcoin processor

* updated MBlock to BBlock

* updated MSignable to BSignable

* updated imports

* deleted mask from Fee

* updated get_block function return

* updated comparison logic for scripts

* updated assert to debug_assert

* updated height to number

* updated txid logic

* updated tweak_keys definition

* updated imports

* deleted new line

* delete HashMap from monero

* deleted old test code parts

* updated test amount to a round number

* changed the test code part back to its original

* updated imports of rpc.rs

* deleted unnecessary return assignments

* deleted get_fee_per_byte

* deleted create_raw_transaction

* deleted fund_raw_transaction

* deleted sign transaction rpc

* delete verify_message rpc

* deleted get_balance

* deleted decode_raw_transaction rpc

* deleted list_transactions rpc

* changed test_send to p2wpkh

* updated imports of test_send

* fixed imports of test_send

* updated bitcoin's mine_block function

* updated bitcoin's test_send

* updated bitcoin's hram and test_signing

* deleted 2 rpc function (is_confirmed & get_transaction_block_number)

* deleted get_raw_transaction_hex

* deleted get_raw_transaction_info

* deleted new_address

* deleted test_mempool_accept

* updated remove(0) to remove(index)

* deleted ger_raw_transaction

* deleted RawTx trait and converted type to Transaction

* reverted raw_hex feature back

* added NotEnoughFunds to CoinError

* changed Sighash to all

* removed lifetime of RpcParams

* changed pub to pub(crate) & changed sig_hash line

* changed taproot_key_spend_signature_hash to internal

* added Clone to RpcError & deleted get_utxo_for

* changed to_hex to as_bytes for weight calculation

* updated SpendableOutput

* deleted unnecessary parentheses

* updated serialize of Output's id field

* deleted unused crate & added lazy_static

* updated RPC init function

* added lazy_static for TAG_HASH & updated imported crates

* changed get_block_index to get_block_number

* deleted get_block_info

* updated get_height to get_latest_block_number

* removed GetBlockWithDetailResult and get_block_with_transactions

* deleted unnecessary imports from rpc_helper

* removed lock and unlock_unspent

* deleted get_transactions and get_transaction and renamed get_raw_transaction to get_transaction

* updated opt_into_json

* changed payment_address and amount to output_script and amount for transcript

* refactored error logic for rpc & deleted anyhow crate

* added a dedicated file for json helper functions

* refactored imports and deleted unused code

* added clippy::non_snake_case

* removed unused Error items

* added new line to Cargo

* removed Block and used bitcoin::Block directly

* removed previously added println and futures.len check

* removed HashMap from coin mod.rs

* updated Testnet to Regtest

* removed unnecessary variable

* updated as_str to &

* removed RawTx trait

* added newline

* changed test transaction to p2pkh

* updated test_send

* updated test_send

* updated test_send

* reformatted bitcoin processor

* moved sighash logic into signmachine

* removed generate_to_address

* added test_address function to bitcoin processor

* updated RpcResponse to enum and added Clone trait

* removed old RpcResponse

* updated shared_key to internal_key

* updated fee part

* updated test_send block logic

* added a test function for getting spendables

* updated tweaking keys logic

* updated calculate_weight logic

* added todo for BitcoinSchnorr Algorithm

* updated calculate_weight

* updated calculate_weight

* updated calculate_weight

* added a TODO for bitcoin's signing process

* removed unused code

* Finish merging develop

* cargo fmt

* cargo machete

* Handle most clippy lints on bitcoin

Doesn't handle the unused transcript due to pending cryptographic considerations.

* Rearrange imports and clippy tests

* Misc processor lint

* Update deny.toml

* Remove unnecessary RPC code

* updated test_send

* added bitcoin ci & updated test-dependencies yml

* fixed bitcoin ci

* updated bitcoin ci yml

* Remove mining from the bitcoin/monero docker files

The tests should control block production in order to test various
circumstances. The automatic mining disrupts assumptions made in testing. Since
we're now using the Bitcoin docker container for testing...
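
A hedged sketch of what test-controlled block production looks like, assuming
the Rpc client added under coins/bitcoin/src/rpc.rs in this PR and an
illustrative miner address; generatetoaddress is a standard regtest RPC,
reached here through the arbitrary rpc_call method:

use bitcoin_serai::rpc::{Rpc, RpcError};

// Tests mint blocks on demand instead of relying on daemon-side auto-mining.
async fn mine_blocks(rpc: &Rpc, address: &str, count: usize) -> Result<(), RpcError> {
  // generatetoaddress returns the hashes of the newly mined blocks
  let _hashes: Vec<String> =
    rpc.rpc_call("generatetoaddress", serde_json::json!([count, address])).await?;
  Ok(())
}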

* Multiple fixes to the Bitcoin processor

Doesn't unwrap on RPC errors. Returns the expected connection error.

Fee calculation had a random - 1. This has been removed.

Supports the change address being an Option, as it is. This should not have
been blindly unwrapped.
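
A hedged sketch of what Option-aware change handling can look like; the names
are illustrative rather than the processor's actual API, and bitcoin 0.31's
Amount-valued TxOut is assumed:

use bitcoin::{Amount, ScriptBuf, TxOut};

// If no change address was provided, the leftover value is implicitly left to
// the fee instead of unwrapping a None.
fn push_change(outputs: &mut Vec<TxOut>, change: Option<ScriptBuf>, leftover: Amount) {
  if let Some(script_pubkey) = change {
    outputs.push(TxOut { value: leftover, script_pubkey });
  }
}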

* Remove unnecessary RPC code

* Further RPC simplifications

* Simplify Bitcoin action

It should not be mining.

* cargo fmt

* Finish RPC simplifications

* Run bitcoind as a daemon

* Remove the requirement on txindex

Saves tens of GB.

Also has attempt_send no longer return a list of outputs. That's incompatible
with this and only relevant to old scheduling designs.

* Remove number from Bitcoin SignableTransaction

Monero requires the current block number for decoy selection. Bitcoin doesn't
have a use.

* Ban coinbase transactions

These are burdened by maturity, so it's critically flawed to support them.

This causes the test_send function to fail, as its workings were premised on
a coinbase output. While it does make an actual output, it had insufficient
funds for the test's expectations due to regtest halving the subsidy every 150 blocks.

In order to workaround this, the test will invalidate any existing chain,
offering a fresh start.

Also removes test_get_spendables and simplifies test_send.
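
A hedged sketch of the resulting guard while scanning, assuming bitcoin 0.31;
scan_transaction is illustrative:

fn scan_block(block: &bitcoin::Block) {
  for tx in &block.txdata {
    // Coinbase outputs only become spendable after 100 blocks (maturity), so
    // they can't be treated as promptly spendable and are rejected outright.
    if tx.is_coinbase() {
      continue;
    }
    // scan_transaction(tx);
  }
}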

* Various simplifications

Modifies SpendableOutput further to not require RPC calls at time of sign.

Removes the need to have get_transaction in the RPC.

* Clean prepare_send

* Update the Bitcoin TransactionMachine to output a Transaction

* Bitcoin TransactionMachine simplifications

* Update XOnly key handling

* Use a single sighash cache

* Move tweak_keys

* Remove unnecessary PSBT sets

* Restore removed newlines

* Other newlines

* Replace calculate_weight's custom math with a dummy TX serialize
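
A hedged sketch of the idea, assuming bitcoin 0.31 and Taproot key-spend
inputs (one 64-byte Schnorr signature per witness):

use bitcoin::{Transaction, Witness};

// Serialize a copy of the TX with dummy 64-byte signatures in place of real
// ones; its weight then matches the eventually-signed transaction's.
fn estimated_weight(mut tx: Transaction) -> u64 {
  for input in &mut tx.input {
    input.witness = Witness::from_slice(&[[0u8; 64]]);
  }
  tx.weight().to_wu()
}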

* Move BTC TX construction code from processor to bitcoin

* Rename transactions.rs to wallet.rs

* Remove unused crate

* Note TODO

* Clean bitcoin signature test

* Make unit test out of BTC FROST signing test

* Final lint

* Remove usage of PartiallySignedTransaction

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-01-31 07:48:14 -05:00
Luke Parker
fba5b7fed4 Attempt workaround for #247 2023-01-31 07:07:27 -05:00
Luke Parker
affe4300e8 Replace clippy --tests with --all-targets 2023-01-30 07:30:02 -05:00
Luke Parker
b6f9a1f8b6 Rename file with dashes to having underscores 2023-01-30 04:27:39 -05:00
akildemir
9e01588b11 add test send to wallet-rpc with arb data (#246)
* add test send to wallet-rpc with arb data

* convert literals to const
2023-01-30 04:25:46 -05:00
Luke Parker
b0e0fc44cf Add secondary while loop to serai/client's test runner
There's a failing CI run on a node which booted yet hadn't created a genesis
block. Apparently, the RPC is potentially accessible before the chain is in place.

This attempts to resolve that.
2023-01-28 03:32:21 -05:00
Luke Parker
a4fdff3e3b Make progress on #235
I'm still not exactly sure where the trap handler in Monero for this is...
until then, this remains potentially fingerprintable.
2023-01-28 03:18:41 -05:00
Luke Parker
9241bdc3b5 Fix #183 2023-01-28 02:35:32 -05:00
Luke Parker
b253529413 cargo fmt 2023-01-28 01:59:42 -05:00
Luke Parker
2ace339975 Tokens pallet (#243)
* Use Monero-compatible additional TX keys

This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.

* Add tokens_primitives

Moves OutInstruction from in-instructions.

Turns Destination into OutInstruction.

* Correct in-instructions DispatchClass

* Add initial tokens pallet

* Don't allow pallet addresses to equal identity

* Add support for InInstruction::transfer

Requires a cargo update due to modifications made to serai-dex/substrate.

Successfully mints a token to a SeraiAddress.

* Bind InInstructions to an amount

* Add a call filter to the runtime

Prevents worrying about calls to the assets pallet/generally tightens things
up.

* Restore Destination

It was merged into OutInstruction, yet it didn't make sense for OutInstruction
to contain a SeraiAddress.

Also deletes the excessively dated Scenarios doc.

* Split PublicKey/SeraiAddress

Lets us define a custom Display/ToString for SeraiAddress.

Also resolves an oddity where PublicKey would be encoded as String, not
[u8; 32].

* Test burning tokens/retrieving OutInstructions

Modularizes processor_coinUpdates into a shared testing utility.

* Misc lint

* Don't use PolkadotExtrinsicParams
2023-01-28 01:47:13 -05:00
akildemir
f12cc2cca6 Add more tests (#240)
* add wallet-rpc-compatibility tests

* fmt + clippy

* add wallet-rpc receive tests

* add (0,0) subaddress check to standard address
2023-01-24 15:22:07 -05:00
Luke Parker
19664967ed Use Monero-compatible additional TX keys
This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.
2023-01-21 01:29:02 -05:00
Luke Parker
27f5881553 Ensure Amount uses checked ops 2023-01-20 11:04:21 -05:00
Luke Parker
8ca90e7905 Initial In Instructions pallet and Serai client lib (#233)
* Initial work on an In Inherents pallet

* Add an event for when a batch is executed

* Add a dummy provider for InInstructions

* Add in-instructions to the node

* Add the Serai runtime API to the processor

* Move processor tests around

* Build a subxt Client around Serai

* Successfully get Batch events from Serai

Renamed processor/substrate to processor/serai.

* Much more robust InInstruction pallet

* Implement the workaround from https://github.com/paritytech/subxt/issues/602

* Initial prototype of processor generated InInstructions

* Correct PendingCoins data flow for InInstructions

* Minor lint to in-instructions

* Remove the global Serai connection for a partial re-impl

* Correct ID handling of the processor test

* Workaround the delay in the subscription

* Make an unwrap an if let Some, remove old comments

* Lint the processor toml

* Rebase and update

* Move substrate/in-instructions to substrate/in-instructions/pallet

* Start an in-instructions primitives lib

* Properly update processor to subxt 0.24

Also corrects failures from the rebase.

* in-instructions cargo update

* Implement IsFatalError

* is_inherent -> true

* Rename in-instructions crates and misc cleanup

* Update documentation

* cargo update

* Misc update fixes

* Replace height with block_number

* Update processor src to latest subxt

* Correct pipeline for InInstructions testing

* Remove runtime::AccountId for serai_primitives::NativeAddress

* Rewrite the in-instructions pallet

Complete with respect to the currently written docs.

Drops the custom serializer for just using SCALE.

Makes slight tweaks as relevant.

* Move instructions' InherentDataProvider to a client crate

* Correct doc gen

* Add serde to in-instructions-primitives

* Add in-instructions-primitives to pallet

* Heights -> BlockNumbers

* Get batch pub test loop working

* Update in instructions pallet terminology

Removes the ambiguous Coin for Update.

Removes pending/artificial latency for future client work.

Also moves to using serai_primitives::Coin.

* Add a BlockNumber primitive

* Belated cargo fmt

* Further document why DifferentBatch isn't fatal

* Correct processor sleeps

* Remove metadata at compile time, add test framework for Serai nodes

* Remove manual RPC client

* Simplify update test

* Improve re-exporting behavior of serai-runtime

It now re-exports all pallets underneath it.

* Add a function to get storage values to the Serai RPC

* Update substrate/ to latest substrate

* Create a dedicated crate for the Serai RPC

* Remove unused dependencies in substrate/

* Remove unused dependencies in coins/

Out of scope for this branch, just minor and path of least resistance.

* Use substrate/serai/client for the Serai RPC lib

It's a bit out of place, since these client folders are intended for the node to
access pallets and so on. This is for end-users to access Serai as a whole.

In that sense, it made more sense as a top level folder, yet that also felt
out of place.

* Move InInstructions test to serai-client for now

* Final cleanup

* Update deny.toml

* Cargo.lock update from merging develop

* Update nightly

Attempt to work around the current CI failure, which is a Rust ICE.

We previously didn't upgrade due to clippy 10134, yet that's been reverted.

* clippy

* clippy

* fmt

* NativeAddress -> SeraiAddress

* Sec fix on non-provided updates and doc fixes

* Add Serai as a Coin

Necessary in order to swap to Serai.

* Add a BlockHash type, used for batch IDs

* Remove origin from InInstruction

Makes InInstructionTarget. Adds RefundableInInstruction with origin.

* Document storage items in in-instructions

* Rename serai/client/tests/serai.rs to updates.rs

It only tested publishing updates and their successful acceptance.
2023-01-20 11:00:18 -05:00
Luke Parker
e13cf52c49 Update Cargo.lock
It appears to have changed with the recent Monero tests yet not to have been
included in that PR.
2023-01-17 02:35:22 -05:00
Luke Parker
0346aa6964 Make serai-primitives and vs-primitives MIT 2023-01-17 02:26:32 -05:00
Luke Parker
7f8bb1aa9f Make validators archive nodes per #157 2023-01-17 02:17:45 -05:00
akildemir
3b920ad471 Monerolib Improvements (#224)
* convert AddressSpec subbaddress to tuple

* add wallet-rpc tests

* fix payment id decryption bug

* run fmt

* fix CI

* use monero-rs wallet-rpc for tests

* update the subaddress index type

* fix wallet-rpc CI

* fix monero-wallet-rpc CI actions

* pull latest monero for CI

* fix pr issues

* detach monero wallet rpc

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-01-16 16:17:54 -05:00
Luke Parker
757adda2e0 cargo fmt 2023-01-16 16:16:55 -05:00
Luke Parker
a1ecdbf2f3 Update substrate/ to latest substrate 2023-01-16 15:53:01 -05:00
VRx
19ab49cb8c delete compromised key from dockerfile (#231) 2023-01-16 10:51:50 -05:00
Luke Parker
ced89332d2 cargo update
Necessary due to https://github.com/RustCrypto/signatures/issues/615.

Opportunity taken to update Substrate.
2023-01-16 10:42:31 -05:00
575 changed files with 99764 additions and 11857 deletions

2
.gitattributes vendored
View File

@@ -1,3 +1,5 @@
# Auto detect text files and perform LF normalization
* text=auto
* text eol=lf
*.pdf binary

47
.github/actions/bitcoin/action.yml vendored Normal file
View File

@@ -0,0 +1,47 @@
name: bitcoin-regtest
description: Spawns a regtest Bitcoin daemon
inputs:
version:
description: "Version to download and run"
required: false
default: 24.0.1
runs:
using: "composite"
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@704facf57e6136b1bc63b828d79edcd491f0ee84
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Bitcoin Daemon
if: steps.cache-bitcoind.outputs.cache-hit != 'true'
shell: bash
run: |
RUNNER_OS=linux
RUNNER_ARCH=x86_64
FILE=bitcoin-${{ inputs.version }}-$RUNNER_ARCH-$RUNNER_OS-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-${{ inputs.version }}/$FILE
mv $FILE bitcoin.tar.gz
- name: Extract the Bitcoin Daemon
shell: bash
run: |
tar xzvf bitcoin.tar.gz
cd bitcoin-${{ inputs.version }}
sudo mv bin/* /bin && sudo mv lib/* /lib
- name: Bitcoin Regtest Daemon
shell: bash
run: |
RPC_USER=serai
RPC_PASS=seraidex
bitcoind -txindex -regtest \
-rpcuser=$RPC_USER -rpcpassword=$RPC_PASS \
-rpcbind=127.0.0.1 -rpcbind=$(hostname) -rpcallowip=0.0.0.0/0 \
-daemon

View File

@@ -7,44 +7,35 @@ inputs:
required: true
default:
rust-toolchain:
description: "Rust toolchain to install"
required: false
default: stable
rust-components:
description: "Rust components to install"
required: false
default:
runs:
using: "composite"
steps:
- name: Remove unused packages
shell: bash
run: |
sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
sudo apt clean
docker system prune -a --volumes
- name: Install apt dependencies
shell: bash
run: sudo apt install -y ca-certificates
- name: Install Protobuf
uses: arduino/setup-protoc@master
uses: arduino/setup-protoc@a8b67ba40b37d35169e222f3bb352603327985b6
with:
repo-token: ${{ inputs.github-token }}
- name: Install solc
shell: bash
run: |
pip3 install solc-select==0.2.1
solc-select install 0.8.16
solc-select use 0.8.16
cargo install svm-rs
svm install 0.8.16
svm use 0.8.16
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ inputs.rust-toolchain }}
components: ${{ inputs.rust-components }}
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install WASM toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ steps.nightly.outputs.version }}
targets: wasm32-unknown-unknown
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43

View File

@@ -0,0 +1,44 @@
name: monero-wallet-rpc
description: Spawns a Monero Wallet-RPC.
inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.2.0
runs:
using: "composite"
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@704facf57e6136b1bc63b828d79edcd491f0ee84
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Monero Wallet RPC
if: steps.cache-monero-wallet-rpc.outputs.cache-hit != 'true'
# Calculates OS/ARCH to demonstrate it, yet then locks to linux-x64 due
# to the contained folder not following the same naming scheme and
# requiring further expansion not worth doing right now
shell: bash
run: |
RUNNER_OS=${{ runner.os }}
RUNNER_ARCH=${{ runner.arch }}
RUNNER_OS=${RUNNER_OS,,}
RUNNER_ARCH=${RUNNER_ARCH,,}
RUNNER_OS=linux
RUNNER_ARCH=x64
FILE=monero-$RUNNER_OS-$RUNNER_ARCH-${{ inputs.version }}.tar.bz2
wget https://downloads.getmonero.org/cli/$FILE
tar -xvf $FILE
mv monero-x86_64-linux-gnu-${{ inputs.version }}/monero-wallet-rpc monero-wallet-rpc
- name: Monero Wallet RPC
shell: bash
run: ./monero-wallet-rpc --disable-rpc-login --rpc-bind-port 6061 --allow-mismatched-daemon-version --wallet-dir ./ --detach

View File

@@ -5,14 +5,14 @@ inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.0.0
default: v0.18.2.0
runs:
using: "composite"
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@v3
uses: actions/cache@704facf57e6136b1bc63b828d79edcd491f0ee84
with:
path: monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -10,7 +10,12 @@ inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.0.0
default: v0.18.2.0
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: 24.0.1
runs:
using: "composite"
@@ -21,11 +26,20 @@ runs:
github-token: ${{ inputs.github-token }}
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@v1
uses: foundry-rs/foundry-toolchain@cb603ca0abb544f301eaed59ac0baf579aa6aecf
with:
version: nightly
version: nightly-09fe3e041369a816365a020f715ad6f94dbce9f2
cache: false
- name: Run a Monero Regtest Node
uses: ./.github/actions/monero
with:
version: ${{ inputs.monero-version }}
- name: Run a Bitcoin Regtest Node
uses: ./.github/actions/bitcoin
with:
version: ${{ inputs.bitcoin-version }}
- name: Run a Monero Wallet-RPC
uses: ./.github/actions/monero-wallet-rpc

View File

@@ -1 +1 @@
nightly-2022-12-01
nightly-2023-11-01

37
.github/workflows/coins-tests.yml vendored Normal file
View File

@@ -0,0 +1,37 @@
name: coins/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
workflow_dispatch:
jobs:
test-coins:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \
-p ethereum-serai \
-p monero-generators \
-p monero-serai

33
.github/workflows/common-tests.yml vendored Normal file
View File

@@ -0,0 +1,33 @@
name: common/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
pull_request:
paths:
- "common/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p std-shims \
-p zalloc \
-p serai-db \
-p serai-env

44
.github/workflows/coordinator-tests.yml vendored Normal file
View File

@@ -0,0 +1,44 @@
name: Coordinator Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "coordinator/**"
- "orchestration/coordinator/**"
- "tests/docker/**"
- "tests/coordinator/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "coordinator/**"
- "orchestration/coordinator/**"
- "tests/docker/**"
- "tests/coordinator/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Run coordinator Docker tests
run: cd tests/coordinator && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

42
.github/workflows/crypto-tests.yml vendored Normal file
View File

@@ -0,0 +1,42 @@
name: crypto/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
workflow_dispatch:
jobs:
test-crypto:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p flexible-transcript \
-p ff-group-tests \
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p dkg \
-p modular-frost \
-p frost-schnorrkel

View File

@@ -9,17 +9,14 @@ jobs:
name: Run cargo deny
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@v3
uses: actions/cache@704facf57e6136b1bc63b828d79edcd491f0ee84
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo
uses: dtolnay/rust-toolchain@stable
- name: Install cargo deny
run: cargo install --locked cargo-deny

24
.github/workflows/full-stack-tests.yml vendored Normal file
View File

@@ -0,0 +1,24 @@
name: Full Stack Tests
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Run Full Stack Docker tests
run: cd tests/full-stack && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

69
.github/workflows/lint.yml vendored Normal file
View File

@@ -0,0 +1,69 @@
name: Lint
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
clippy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
deny:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@704facf57e6136b1bc63b828d79edcd491f0ee84
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
- name: Run cargo deny
run: cargo deny -L error --all-features check
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -c rustfmt
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
dockerfiles:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify Dockerfiles are up to date
# Runs the file which generates them and checks the diff has no lines
run: cd orchestration && ./dockerfiles.sh && git diff | wc -l | grep -x "0"

View File

@@ -0,0 +1,38 @@
name: Message Queue Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "tests/docker/**"
- "tests/message-queue/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "tests/docker/**"
- "tests/message-queue/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Run message-queue Docker tests
run: cd tests/message-queue && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

28
.github/workflows/mini-tests.yml vendored Normal file
View File

@@ -0,0 +1,28 @@
name: mini/ Tests
on:
push:
branches:
- develop
paths:
- "mini/**"
pull_request:
paths:
- "mini/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p mini-serai

View File

@@ -6,17 +6,21 @@ on:
- develop
paths:
- "coins/monero/**"
- "processor/**"
pull_request:
paths:
- "coins/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
@@ -24,7 +28,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Unit Tests Without Features
run: cargo test --package monero-serai --lib
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
# Doesn't run unit tests with features as the tests workflow will
@@ -33,10 +37,10 @@ jobs:
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.0.0]
version: [v0.17.3.2, v0.18.2.0]
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
@@ -45,12 +49,11 @@ jobs:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
# Runs with the binaries feature so the binaries build
# https://github.com/rust-lang/cargo/issues/8396
run: cargo test --package monero-serai --test '*'
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --features binaries --test '*'
- name: Run Integration Tests
# Don't run if the the tests workflow also will
if: ${{ matrix.version != 'v0.18.0.0' }}
run: |
cargo test --package monero-serai --all-features --test '*'
cargo test --package serai-processor monero
if: ${{ matrix.version != 'v0.18.2.0' }}
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'

View File

@@ -9,7 +9,7 @@ jobs:
name: Update nightly
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
with:
submodules: "recursive"
@@ -28,7 +28,7 @@ jobs:
git push -u origin $(date +"nightly-%Y-%m")
- name: Pull Request
uses: actions/github-script@v6
uses: actions/github-script@d7906e4ad0b1822421a7e6a35d5ca353c962f410
with:
script: |
const { repo, owner } = context.repo;

37
.github/workflows/no-std.yml vendored Normal file
View File

@@ -0,0 +1,37 @@
name: no-std build
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "tests/no-std/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "tests/no-std/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Install RISC-V Toolchain
run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
- name: Verify no-std builds
run: cd tests/no-std && CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf

44
.github/workflows/processor-tests.yml vendored Normal file
View File

@@ -0,0 +1,44 @@
name: Processor Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "processor/**"
- "orchestration/processor/**"
- "tests/docker/**"
- "tests/processor/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "orchestration/message-queue/**"
- "processor/**"
- "orchestration/processor/**"
- "tests/docker/**"
- "tests/processor/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Run processor Docker tests
run: cd tests/processor && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

View File

@@ -0,0 +1,38 @@
name: Reproducible Runtime
on:
push:
branches:
- develop
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
pull_request:
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ inputs.github-token }}
- name: Run Reproducible Runtime tests
run: cd tests/reproducible-runtime && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

View File

@@ -4,77 +4,83 @@ on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
- "substrate/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
- "substrate/**"
workflow_dispatch:
jobs:
clippy:
test-infra:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Get nightly version to use
id: nightly
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
# Clippy requires nightly due to serai-runtime requiring it
rust-toolchain: ${{ steps.nightly.outputs.version }}
rust-components: clippy
- name: Run Clippy
run: cargo clippy --all-features --tests -- -D warnings -A dead_code
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-message-queue \
-p serai-processor-messages \
-p serai-processor \
-p tendermint-machine \
-p tributary-chain \
-p serai-coordinator \
-p serai-docker-tests
deny:
test-substrate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@v3
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo
uses: dtolnay/rust-toolchain@stable
- name: Install cargo deny
run: cargo install --locked cargo-deny
- name: Run cargo deny
run: cargo deny -L error --all-features check
test:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run Tests
run: cargo test --all-features
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-primitives \
-p serai-coins-primitives \
-p serai-coins-pallet \
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-pallet \
-p serai-runtime \
-p serai-node
fmt:
test-serai-client:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install rustfmt
uses: dtolnay/rust-toolchain@master
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
with:
toolchain: ${{ steps.nightly.outputs.version }}
components: rustfmt
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

1
.gitignore vendored
View File

@@ -1,2 +1,3 @@
target
.vscode
.test-logs

View File

@@ -1,3 +1,4 @@
edition = "2021"
tab_spaces = 2
max_width = 100

7646
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -1,6 +1,11 @@
[workspace]
resolver = "2"
members = [
"common/std-shims",
"common/zalloc",
"common/db",
"common/env",
"common/request",
"crypto/transcript",
@@ -15,29 +20,54 @@ members = [
"crypto/dleq",
"crypto/dkg",
"crypto/frost",
"crypto/schnorrkel",
"coins/bitcoin",
"coins/ethereum",
"coins/monero/generators",
"coins/monero",
"message-queue",
"processor/messages",
"processor",
"substrate/serai/primitives",
"coordinator/tributary/tendermint",
"coordinator/tributary",
"coordinator",
"substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/tendermint/machine",
"substrate/tendermint/primitives",
"substrate/tendermint/client",
"substrate/tendermint/pallet",
"substrate/signals/pallet",
"substrate/runtime",
"substrate/node",
"substrate/client",
"mini",
"tests/no-std",
"tests/docker",
"tests/message-queue",
"tests/processor",
"tests/coordinator",
"tests/full-stack",
"tests/reproducible-runtime",
]
# Always compile Monero (and a variety of dependencies) with optimizations due
# to the unoptimized performance of Bulletproofs
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
@@ -55,3 +85,11 @@ monero-serai = { opt-level = 3 }
[profile.release]
panic = "unwind"
[patch.crates-io]
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# subxt *can* pull these off crates.io yet there's no benefit to this
sp-core-hashing = { git = "https://github.com/serai-dex/substrate" }
sp-std = { git = "https://github.com/serai-dex/substrate" }

View File

@@ -9,13 +9,15 @@ wallet.
### Layout
- `audits`: Audits for various parts of Serai.
- `docs`: Documentation on the Serai protocol.
- `common`: Crates containing utilities common to a variety of areas under
Serai, none neatly fitting under another category.
- `crypto`: A series of composable cryptographic libraries built around the
`ff`/`group` APIs achieving a variety of tasks. These range from generic
`ff`/`group` APIs, achieving a variety of tasks. These range from generic
infrastructure, to our IETF-compliant FROST implementation, to a DLEq proof as
needed for Bitcoin-Monero atomic swaps.
@@ -23,17 +25,39 @@ wallet.
wider community. This means they will always support the functionality Serai
needs, yet won't disadvantage other use cases when possible.
- `message-queue`: An ordered message server so services can talk to each other,
even when the other is offline.
- `processor`: A generic chain processor to process data for Serai and process
events from Serai, executing transactions as expected and needed.
- `coordinator`: A service to manage processors and communicate over a P2P
network with other validators.
- `substrate`: Substrate crates used to instantiate the Serai network.
- `deploy`: Scripts to deploy a Serai node/test environment.
- `orchestration`: Dockerfiles and scripts to deploy a Serai node/test
environment.
- `tests`: Tests for various crates. Generally, `crate/src/tests` is used, or
`crate/tests`, yet any tests requiring crates' binaries are placed here.
### Security
Serai hosts a bug bounty program via
[Immunefi](https://immunefi.com/bounty/serai/). For in-scope critical
vulnerabilities, we will reward whitehats with up to $30,000.
Anything not in-scope should still be submitted through Immunefi, with rewards
issued at the discretion of the Immunefi program managers.
### Links
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org):
https://matrix.to/#/#serai:matrix.org
- [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/
- [Telegram](https://t.me/SeraiDEX): https://t.me/SeraiDEX

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Cypher Stack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,6 @@
# Cypher Stack /coins/bitcoin Audit, August 2023
This audit was over the /coins/bitcoin folder. It is encompassing up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.

Binary file not shown.

View File

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Cypher Stack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -0,0 +1,7 @@
# Cypher Stack /crypto Audit, March 2023
This audit was over the /crypto folder, excluding the ed448 crate, the `Ed448`
ciphersuite in the ciphersuite crate, and the `dleq/experimental` feature. It is
encompassing up to commit 669d2dbffc1dafb82a09d9419ea182667115df06.
Please see https://github.com/cypherstack/serai-audit for provenance.

61
coins/bitcoin/Cargo.toml Normal file
View File

@@ -0,0 +1,61 @@
[package]
name = "bitcoin-serai"
version = "0.3.0"
description = "A Bitcoin library for FROST-signing transactions"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
rust-version = "1.74"
[dependencies]
std-shims = { version = "0.1.1", path = "../../common/std-shims", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false }
rand_core = { version = "0.6", default-features = false }
bitcoin = { version = "0.31", default-features = false, features = ["no-std"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["secp256k1"], optional = true }
hex = { version = "0.4", default-features = false, optional = true }
serde = { version = "1", default-features = false, features = ["derive"], optional = true }
serde_json = { version = "1", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls", "basic-auth"], optional = true }
[dev-dependencies]
secp256k1 = { version = "0.28", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
tokio = { version = "1", features = ["macros"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"rand_core/std",
"bitcoin/std",
"bitcoin/serde",
"k256/std",
"transcript/std",
"frost",
"hex/std",
"serde/std",
"serde_json/std",
"simple-request",
]
hazmat = []
default = ["std"]

4
coins/bitcoin/README.md Normal file
View File

@@ -0,0 +1,4 @@
# bitcoin-serai
An application of [modular-frost](https://docs.rs/modular-frost) to Bitcoin
transactions, enabling extremely-efficient multisigs.

166
coins/bitcoin/src/crypto.rs Normal file
View File

@@ -0,0 +1,166 @@
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
ProjectivePoint,
};
use bitcoin::key::XOnlyPublicKey;
/// Get the x coordinate of a non-infinity, even point. Panics on invalid input.
pub fn x(key: &ProjectivePoint) -> [u8; 32] {
let encoded = key.to_encoded_point(true);
assert_eq!(encoded.tag(), Tag::CompressedEvenY, "x coordinate of odd key");
(*encoded.x().expect("point at infinity")).into()
}
/// Convert a non-infinity even point to a XOnlyPublicKey. Panics on invalid input.
pub fn x_only(key: &ProjectivePoint) -> XOnlyPublicKey {
XOnlyPublicKey::from_slice(&x(key)).expect("x_only was passed a point which was infinity or odd")
}
/// Make a point even by adding the generator until it is even.
///
/// Returns the even point and the amount of additions required.
#[cfg(any(feature = "std", feature = "hazmat"))]
pub fn make_even(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
let mut c = 0;
while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
key += ProjectivePoint::GENERATOR;
c += 1;
}
(key, c)
}
#[cfg(feature = "std")]
mod frost_crypto {
use core::fmt::Debug;
use std_shims::{vec::Vec, io};
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use bitcoin::hashes::{HashEngine, Hash, sha256::Hash as Sha256};
use transcript::Transcript;
use k256::{elliptic_curve::ops::Reduce, U256, Scalar};
use frost::{
curve::{Ciphersuite, Secp256k1},
Participant, ThresholdKeys, ThresholdView, FrostError,
algorithm::{Hram as HramTrait, Algorithm, Schnorr as FrostSchnorr},
};
use super::*;
/// A BIP-340 compatible HRAm for use with the modular-frost Schnorr Algorithm.
///
/// If passed an odd nonce, it will have the generator added until it is even.
///
/// If the key is odd, this will panic.
#[derive(Clone, Copy, Debug)]
pub struct Hram;
#[allow(non_snake_case)]
impl HramTrait<Secp256k1> for Hram {
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
// Convert the nonce to be even
let (R, _) = make_even(*R);
const TAG_HASH: Sha256 = Sha256::const_hash(b"BIP0340/challenge");
let mut data = Sha256::engine();
data.input(TAG_HASH.as_ref());
data.input(TAG_HASH.as_ref());
data.input(&x(&R));
data.input(&x(A));
data.input(m);
Scalar::reduce(U256::from_be_slice(Sha256::from_engine(data).as_ref()))
}
}
/// BIP-340 Schnorr signature algorithm.
///
/// This must be used with a ThresholdKeys whose group key is even. If it is odd, this will panic.
#[derive(Clone)]
pub struct Schnorr<T: Sync + Clone + Debug + Transcript>(FrostSchnorr<Secp256k1, T, Hram>);
impl<T: Sync + Clone + Debug + Transcript> Schnorr<T> {
/// Construct a Schnorr algorithm continuing the specified transcript.
pub fn new(transcript: T) -> Schnorr<T> {
Schnorr(FrostSchnorr::new(transcript))
}
}
impl<T: Sync + Clone + Debug + Transcript> Algorithm<Secp256k1> for Schnorr<T> {
type Transcript = T;
type Addendum = ();
type Signature = [u8; 64];
fn transcript(&mut self) -> &mut Self::Transcript {
self.0.transcript()
}
fn nonces(&self) -> Vec<Vec<ProjectivePoint>> {
self.0.nonces()
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Secp256k1>,
) {
self.0.preprocess_addendum(rng, keys)
}
fn read_addendum<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Addendum> {
self.0.read_addendum(reader)
}
fn process_addendum(
&mut self,
view: &ThresholdView<Secp256k1>,
i: Participant,
addendum: (),
) -> Result<(), FrostError> {
self.0.process_addendum(view, i, addendum)
}
fn sign_share(
&mut self,
params: &ThresholdView<Secp256k1>,
nonce_sums: &[Vec<<Secp256k1 as Ciphersuite>::G>],
nonces: Vec<Zeroizing<<Secp256k1 as Ciphersuite>::F>>,
msg: &[u8],
) -> <Secp256k1 as Ciphersuite>::F {
self.0.sign_share(params, nonce_sums, nonces, msg)
}
#[must_use]
fn verify(
&self,
group_key: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
sum: Scalar,
) -> Option<Self::Signature> {
self.0.verify(group_key, nonces, sum).map(|mut sig| {
// Make the R of the final signature even
let offset;
(sig.R, offset) = make_even(sig.R);
// s = r + cx. Since we added to the r, add to s
sig.s += Scalar::from(offset);
// Convert to a Bitcoin signature by dropping the byte for the point's sign bit
sig.serialize()[1 ..].try_into().unwrap()
})
}
fn verify_share(
&self,
verification_share: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
share: Scalar,
) -> Result<Vec<(Scalar, ProjectivePoint)>, ()> {
self.0.verify_share(verification_share, nonces, share)
}
}
}
#[cfg(feature = "std")]
pub use frost_crypto::*;
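
A small worked sketch of make_even's contract, following directly from its
definition: the returned point is the input plus the generator, offset times.

use k256::{Scalar, ProjectivePoint};
use bitcoin_serai::crypto::make_even; // public under the "hazmat" feature

fn even_key_example() {
  let key = ProjectivePoint::GENERATOR * Scalar::from(7u64);
  let (even, offset) = make_even(key);
  // The private key must be offset identically, as wallet.rs's tweak_keys does.
  assert_eq!(even, key + (ProjectivePoint::GENERATOR * Scalar::from(offset)));
}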

24
coins/bitcoin/src/lib.rs Normal file
View File

@@ -0,0 +1,24 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
extern crate alloc;
/// The bitcoin Rust library.
pub use bitcoin;
/// Cryptographic helpers.
#[cfg(feature = "hazmat")]
pub mod crypto;
#[cfg(not(feature = "hazmat"))]
pub(crate) mod crypto;
/// Wallet functionality to create transactions.
pub mod wallet;
/// A minimal asynchronous Bitcoin RPC client.
#[cfg(feature = "std")]
pub mod rpc;
#[cfg(test)]
mod tests;

213
coins/bitcoin/src/rpc.rs Normal file
View File

@@ -0,0 +1,213 @@
use core::fmt::Debug;
use std::collections::HashSet;
use thiserror::Error;
use serde::{Deserialize, de::DeserializeOwned};
use serde_json::json;
use simple_request::{hyper, Request, Client};
use bitcoin::{
hashes::{Hash, hex::FromHex},
consensus::encode,
Txid, Transaction, BlockHash, Block,
};
#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
pub struct Error {
code: isize,
message: String,
}
#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
enum RpcResponse<T> {
Ok { result: T },
Err { error: Error },
}
/// A minimal asynchronous Bitcoin RPC client.
#[derive(Clone, Debug)]
pub struct Rpc {
client: Client,
url: String,
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("couldn't connect to node")]
ConnectionError,
#[error("request had an error: {0:?}")]
RequestError(Error),
#[error("node replied with invalid JSON")]
InvalidJson(serde_json::error::Category),
#[error("node sent an invalid response ({0})")]
InvalidResponse(&'static str),
#[error("node was missing expected methods")]
MissingMethods(HashSet<&'static str>),
}
impl Rpc {
/// Create a new connection to a Bitcoin RPC.
///
/// An RPC call is performed to ensure the node is reachable (and that an invalid URL wasn't
/// provided).
///
/// Additionally, a set of expected methods is checked to be offered by the Bitcoin RPC. If these
/// methods aren't provided, an error with the missing methods is returned. This ensures all RPC
/// routes explicitly provided by this library are at least possible.
///
/// Each individual RPC route may still fail at time-of-call, regardless of the arguments
/// provided to this library, if the RPC has an incompatible argument layout. That is not checked
/// at time of RPC creation.
pub async fn new(url: String) -> Result<Rpc, RpcError> {
let rpc = Rpc { client: Client::with_connection_pool(), url };
// Make an RPC request to verify the node is reachable and sane
let res: String = rpc.rpc_call("help", json!([])).await?;
// Verify all methods we expect are present
// If we had a more expanded RPC, due to differences in RPC versions, it wouldn't make sense to
// error if all methods weren't present
// We only provide a very minimal set of methods which have been largely consistent, hence why
// this is sane
let mut expected_methods = HashSet::from([
"help",
"getblockcount",
"getblockhash",
"getblockheader",
"getblock",
"sendrawtransaction",
"getrawtransaction",
]);
for line in res.split('\n') {
// This doesn't check if the arguments are as expected
// This is due to Bitcoin supporting a large amount of optional arguments, which
// occasionally change, with their own mechanism of text documentation, making matching off
// it a quite involved task
// Instead, once we've confirmed the methods are present, we assume our arguments are aligned
// Else we'll error at time of call
if expected_methods.remove(line.split(' ').next().unwrap_or("")) &&
expected_methods.is_empty()
{
break;
}
}
if !expected_methods.is_empty() {
Err(RpcError::MissingMethods(expected_methods))?;
};
Ok(rpc)
}
/// Perform an arbitrary RPC call.
pub async fn rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: serde_json::Value,
) -> Result<Response, RpcError> {
let mut request = Request::from(
hyper::Request::post(&self.url)
.header("Content-Type", "application/json")
.body(
serde_json::to_vec(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
.unwrap()
.into(),
)
.unwrap(),
);
request.with_basic_auth();
let mut res = self
.client
.request(request)
.await
.map_err(|_| RpcError::ConnectionError)?
.body()
.await
.map_err(|_| RpcError::ConnectionError)?;
let res: RpcResponse<Response> =
serde_json::from_reader(&mut res).map_err(|e| RpcError::InvalidJson(e.classify()))?;
match res {
RpcResponse::Ok { result } => Ok(result),
RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
}
}
/// Get the latest block's number.
///
/// The genesis block's 'number' is zero. They increment from there.
pub async fn get_latest_block_number(&self) -> Result<usize, RpcError> {
// getblockcount doesn't return the amount of blocks on the current chain, yet the "height"
// of the current chain. The "height" of the current chain is defined as the "height" of the
// tip block of the current chain. The "height" of a block is defined as the amount of blocks
// present when the block was created. Accordingly, the genesis block has height 0, and
// getblockcount will return 0 when the genesis block is the only block, despite there being
// one block.
self.rpc_call("getblockcount", json!([])).await
}
/// Get the hash of a block by the block's number.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
let mut hash = *self
.rpc_call::<BlockHash>("getblockhash", json!([number]))
.await?
.as_raw_hash()
.as_byte_array();
// bitcoin stores the inner bytes in reverse order.
hash.reverse();
Ok(hash)
}
/// Get a block's number by its hash.
pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct Number {
height: usize,
}
Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
}
/// Get a block by its hash.
pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the block"))?;
let block: Block = encode::deserialize(&bytes)
.map_err(|_| RpcError::InvalidResponse("node sent an improperly serialized block"))?;
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
if hash != &block_hash {
Err(RpcError::InvalidResponse("node replied with a different block"))?;
}
Ok(block)
}
/// Publish a transaction.
pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
let txid = self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await?;
if txid != tx.txid() {
Err(RpcError::InvalidResponse("returned TX ID inequals calculated TX ID"))?;
}
Ok(txid)
}
/// Get a transaction by its hash.
pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the transaction"))?;
let tx: Transaction = encode::deserialize(&bytes)
.map_err(|_| RpcError::InvalidResponse("node sent an improperly serialized transaction"))?;
let mut tx_hash = *tx.txid().as_raw_hash().as_byte_array();
tx_hash.reverse();
if hash != &tx_hash {
Err(RpcError::InvalidResponse("node replied with a different transaction"))?;
}
Ok(tx)
}
}
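
A minimal usage sketch of this client, assuming a local regtest node with the
serai/seraidex credentials from the CI action embedded in the URL, and tokio
as the async runtime:

#[tokio::main]
async fn main() -> Result<(), RpcError> {
  // new() verifies connectivity and that the expected methods are offered
  let rpc = Rpc::new("http://serai:seraidex@127.0.0.1:18443".to_string()).await?;
  let number = rpc.get_latest_block_number().await?;
  let hash = rpc.get_block_hash(number).await?;
  // get_block re-hashes the response to confirm it's the requested block
  let block = rpc.get_block(&hash).await?;
  println!("block {number} contained {} transactions", block.txdata.len());
  Ok(())
}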

View File

@@ -0,0 +1,46 @@
use rand_core::OsRng;
use secp256k1::{Secp256k1 as BContext, Message, schnorr::Signature};
use k256::Scalar;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Secp256k1,
Participant,
tests::{algorithm_machines, key_gen, sign},
};
use crate::{
bitcoin::hashes::{Hash as HashTrait, sha256::Hash},
crypto::{x_only, make_even, Schnorr},
};
#[test]
fn test_algorithm() {
let mut keys = key_gen::<_, Secp256k1>(&mut OsRng);
const MESSAGE: &[u8] = b"Hello, World!";
for (_, keys) in keys.iter_mut() {
let (_, offset) = make_even(keys.group_key());
*keys = keys.offset(Scalar::from(offset));
}
let algo =
Schnorr::<RecommendedTranscript>::new(RecommendedTranscript::new(b"bitcoin-serai sign test"));
let sig = sign(
&mut OsRng,
algo.clone(),
keys.clone(),
algorithm_machines(&mut OsRng, algo, &keys),
Hash::hash(MESSAGE).as_ref(),
);
BContext::new()
.verify_schnorr(
&Signature::from_slice(&sig)
.expect("couldn't convert produced signature to secp256k1::Signature"),
&Message::from(Hash::hash(MESSAGE)),
&x_only(&keys[&Participant::new(1).unwrap()].group_key()),
)
.unwrap()
}

View File

@@ -0,0 +1 @@
mod crypto;

View File

@@ -0,0 +1,188 @@
use std_shims::{
vec::Vec,
collections::HashMap,
io::{self, Write},
};
#[cfg(feature = "std")]
use std_shims::io::Read;
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
Scalar, ProjectivePoint,
};
#[cfg(feature = "std")]
use frost::{
curve::{Ciphersuite, Secp256k1},
ThresholdKeys,
};
use bitcoin::{
consensus::encode::serialize, key::TweakedPublicKey, address::Payload, OutPoint, ScriptBuf,
TxOut, Transaction, Block,
};
#[cfg(feature = "std")]
use bitcoin::consensus::encode::Decodable;
use crate::crypto::x_only;
#[cfg(feature = "std")]
use crate::crypto::make_even;
#[cfg(feature = "std")]
mod send;
#[cfg(feature = "std")]
pub use send::*;
/// Tweak keys to ensure they're usable with Bitcoin.
///
/// Taproot keys, which these keys are used as, must be even. This offsets the keys until they're
/// even.
#[cfg(feature = "std")]
pub fn tweak_keys(keys: &ThresholdKeys<Secp256k1>) -> ThresholdKeys<Secp256k1> {
let (_, offset) = make_even(keys.group_key());
keys.offset(Scalar::from(offset))
}
/// Return the Taproot address payload for a public key.
///
/// If the key is odd, this will return None.
pub fn address_payload(key: ProjectivePoint) -> Option<Payload> {
if key.to_encoded_point(true).tag() != Tag::CompressedEvenY {
return None;
}
Some(Payload::p2tr_tweaked(TweakedPublicKey::dangerous_assume_tweaked(x_only(&key))))
}
/// A spendable output.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct ReceivedOutput {
// The scalar offset to obtain the key usable to spend this output.
offset: Scalar,
// The output to spend.
output: TxOut,
// The TX ID and vout of the output to spend.
outpoint: OutPoint,
}
impl ReceivedOutput {
/// The offset for this output.
pub fn offset(&self) -> Scalar {
self.offset
}
/// The Bitcoin output for this output.
pub fn output(&self) -> &TxOut {
&self.output
}
/// The outpoint for this output.
pub fn outpoint(&self) -> &OutPoint {
&self.outpoint
}
/// The value of this output.
pub fn value(&self) -> u64 {
self.output.value.to_sat()
}
/// Read a ReceivedOutput from a generic satisfying Read.
#[cfg(feature = "std")]
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
offset: Secp256k1::read_F(r)?,
output: TxOut::consensus_decode(r).map_err(|_| io::Error::other("invalid TxOut"))?,
outpoint: OutPoint::consensus_decode(r).map_err(|_| io::Error::other("invalid OutPoint"))?,
})
}
/// Write a ReceivedOutput to a generic satisfying Write.
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.offset.to_bytes())?;
w.write_all(&serialize(&self.output))?;
w.write_all(&serialize(&self.outpoint))
}
/// Serialize a ReceivedOutput to a `Vec<u8>`.
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::new();
self.write(&mut res).unwrap();
res
}
}
/// A transaction scanner capable of being used with HDKD schemes.
#[derive(Clone, Debug)]
pub struct Scanner {
key: ProjectivePoint,
scripts: HashMap<ScriptBuf, Scalar>,
}
impl Scanner {
/// Construct a Scanner for a key.
///
/// Returns None if this key can't be scanned for.
pub fn new(key: ProjectivePoint) -> Option<Scanner> {
let mut scripts = HashMap::new();
scripts.insert(address_payload(key)?.script_pubkey(), Scalar::ZERO);
Some(Scanner { key, scripts })
}
/// Register an offset to scan for.
///
/// Due to Bitcoin's requirement that points are even, not every offset may be used.
/// If an offset isn't usable, it will be incremented until it is. If the resulting offset is
/// already present, None is returned. Else, Some(offset) is returned with the offset actually
/// used.
///
/// This means the mapping from requested offsets to keys is many-to-one, and the order offsets
/// are registered in may determine the validity of future offsets.
pub fn register_offset(&mut self, mut offset: Scalar) -> Option<Scalar> {
// This loop will terminate as soon as an even point is found, with any point having a ~50%
// chance of being even
// That means this should terminate within a very small number of iterations
loop {
match address_payload(self.key + (ProjectivePoint::GENERATOR * offset)) {
Some(address) => {
let script = address.script_pubkey();
if self.scripts.contains_key(&script) {
None?;
}
self.scripts.insert(script, offset);
return Some(offset);
}
None => offset += Scalar::ONE,
}
}
}
/// Scan a transaction.
pub fn scan_transaction(&self, tx: &Transaction) -> Vec<ReceivedOutput> {
let mut res = Vec::new();
for (vout, output) in tx.output.iter().enumerate() {
// If the vout index doesn't fit in a u32, stop scanning outputs
let Ok(vout) = u32::try_from(vout) else { break };
if let Some(offset) = self.scripts.get(&output.script_pubkey) {
res.push(ReceivedOutput {
offset: *offset,
output: output.clone(),
outpoint: OutPoint::new(tx.txid(), vout),
});
}
}
res
}
/// Scan a block.
///
/// This will also scan the coinbase transaction, whose outputs are bound by maturity. If received
/// must be immediately spendable, a post-processing pass is needed to remove those outputs.
/// Alternatively, scan_transaction can be called on `block.txdata[1 ..]`.
pub fn scan_block(&self, block: &Block) -> Vec<ReceivedOutput> {
let mut res = Vec::new();
for tx in &block.txdata {
res.extend(self.scan_transaction(tx));
}
res
}
}
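
Since register_offset bumps an unusable offset until the derived key is even, two distinct requests can resolve to the same script. A minimal sketch of the behavior documented above, assuming an even `key` (the function name is hypothetical):

use k256::{ProjectivePoint, Scalar};
use bitcoin_serai::wallet::Scanner;

fn demo_register(key: ProjectivePoint) {
  let mut scanner = Scanner::new(key).unwrap();
  // The returned offset is the one actually used: the request, incremented
  // zero or more times until key + (offset * G) is even
  let used = scanner.register_offset(Scalar::from(5u64)).unwrap();
  // Registering the used offset again maps to an already-tracked script, so
  // it returns None, as would any other request which increments to it
  assert!(scanner.register_offset(used).is_none());
}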

View File

@@ -0,0 +1,438 @@
use std_shims::{
io::{self, Read},
collections::HashMap,
};
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{elliptic_curve::sec1::ToEncodedPoint, Scalar};
use frost::{curve::Secp256k1, Participant, ThresholdKeys, FrostError, sign::*};
use bitcoin::{
sighash::{TapSighashType, SighashCache, Prevouts},
absolute::LockTime,
script::{PushBytesBuf, ScriptBuf},
transaction::{Version, Transaction},
OutPoint, Sequence, Witness, TxIn, Amount, TxOut, Address,
};
use crate::{
crypto::Schnorr,
wallet::{ReceivedOutput, address_payload},
};
#[rustfmt::skip]
// https://github.com/bitcoin/bitcoin/blob/306ccd4927a2efe325c8d84be1bdb79edeb29b04/src/policy/policy.cpp#L26-L63
// As the above notes, a lower amount may not be considered dust if contained in a SegWit output
// This doesn't bother with that delineation due to how marginal these values are, and because it
// isn't worth the complexity to implement the differentiation
pub const DUST: u64 = 546;
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum TransactionError {
#[error("no inputs were specified")]
NoInputs,
#[error("no outputs were created")]
NoOutputs,
#[error("a specified payment's amount was less than bitcoin's required minimum")]
DustPayment,
#[error("too much data was specified")]
TooMuchData,
#[error("fee was too low to pass the default minimum fee rate")]
TooLowFee,
#[error("not enough funds for these payments")]
NotEnoughFunds,
#[error("transaction was too large")]
TooLargeTransaction,
}
/// A signable transaction, clone-able across attempts.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct SignableTransaction {
tx: Transaction,
offsets: Vec<Scalar>,
prevouts: Vec<TxOut>,
needed_fee: u64,
}
impl SignableTransaction {
fn calculate_weight(inputs: usize, payments: &[(Address, u64)], change: Option<&Address>) -> u64 {
// Expand this into a full transaction in order to use the bitcoin library's weight function
let mut tx = Transaction {
version: Version(2),
lock_time: LockTime::ZERO,
input: vec![
TxIn {
// This is a fixed size
// See https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format
previous_output: OutPoint::default(),
// This is empty for a Taproot spend
script_sig: ScriptBuf::new(),
// This is a fixed size, and we do use Sequence::MAX
sequence: Sequence::MAX,
// Our witness contains a single 64-byte signature
witness: Witness::from_slice(&[vec![0; 64]])
};
inputs
],
output: payments
.iter()
// The payment amount is a fixed size, so we don't have to use its actual value here
// The script pubkey is not a fixed size and does have to be used here
.map(|payment| TxOut {
value: Amount::from_sat(payment.1),
script_pubkey: payment.0.script_pubkey(),
})
.collect(),
};
if let Some(change) = change {
// Use a 0 value since we're currently unsure what the change amount will be, and since
// the value is fixed size (so any value could be used here)
tx.output.push(TxOut { value: Amount::ZERO, script_pubkey: change.script_pubkey() });
}
u64::from(tx.weight())
}
/// Returns the fee necessary for this transaction to achieve the fee rate specified at
/// construction.
///
/// The actual fee this transaction will use is `sum(inputs) - sum(outputs)`.
pub fn needed_fee(&self) -> u64 {
self.needed_fee
}
/// Returns the fee this transaction will use.
pub fn fee(&self) -> u64 {
self.prevouts.iter().map(|prevout| prevout.value.to_sat()).sum::<u64>() -
self.tx.output.iter().map(|prevout| prevout.value.to_sat()).sum::<u64>()
}
/// Create a new SignableTransaction.
///
/// If a change address is specified, any leftover funds will be sent to it if the leftover funds
/// exceed the minimum output amount. If a change address isn't specified, all leftover funds
/// will become part of the paid fee.
///
/// If data is specified, an OP_RETURN output will be added with it.
pub fn new(
mut inputs: Vec<ReceivedOutput>,
payments: &[(Address, u64)],
change: Option<Address>,
data: Option<Vec<u8>>,
fee_per_weight: u64,
) -> Result<SignableTransaction, TransactionError> {
if inputs.is_empty() {
Err(TransactionError::NoInputs)?;
}
if payments.is_empty() && change.is_none() && data.is_none() {
Err(TransactionError::NoOutputs)?;
}
for (_, amount) in payments {
if *amount < DUST {
Err(TransactionError::DustPayment)?;
}
}
if data.as_ref().map(|data| data.len()).unwrap_or(0) > 80 {
Err(TransactionError::TooMuchData)?;
}
let input_sat = inputs.iter().map(|input| input.output.value.to_sat()).sum::<u64>();
let offsets = inputs.iter().map(|input| input.offset).collect();
let tx_ins = inputs
.iter()
.map(|input| TxIn {
previous_output: input.outpoint,
script_sig: ScriptBuf::new(),
sequence: Sequence::MAX,
witness: Witness::new(),
})
.collect::<Vec<_>>();
let payment_sat = payments.iter().map(|payment| payment.1).sum::<u64>();
let mut tx_outs = payments
.iter()
.map(|payment| TxOut {
value: Amount::from_sat(payment.1),
script_pubkey: payment.0.script_pubkey(),
})
.collect::<Vec<_>>();
// Add the OP_RETURN output
if let Some(data) = data {
tx_outs.push(TxOut {
value: Amount::ZERO,
script_pubkey: ScriptBuf::new_op_return(
PushBytesBuf::try_from(data)
.expect("data didn't fit into PushBytes depsite being checked"),
),
})
}
let mut weight = Self::calculate_weight(tx_ins.len(), payments, None);
let mut needed_fee = fee_per_weight * weight;
// "Virtual transaction size" is weight ceildiv 4 per
// https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki
// https://github.com/bitcoin/bitcoin/blob/306ccd4927a2efe325c8d84be1bdb79edeb29b04/
// src/policy/policy.cpp#L295-L298
// implements this as expected
// Technically, it takes whichever is greater: the weight or the amount of signature operations
// multiplied by DEFAULT_BYTES_PER_SIGOP (20)
// We only use 1 signature per input, and our inputs have a weight exceeding 20
// Accordingly, our inputs' weight will always be greater than the cost of the signature ops
let vsize = weight.div_ceil(4);
debug_assert_eq!(
u64::try_from(bitcoin::policy::get_virtual_tx_size(
weight.try_into().unwrap(),
tx_ins.len().try_into().unwrap()
))
.unwrap(),
vsize
);
// Technically, if there isn't change, this TX may still pay enough of a fee to pass the
// minimum fee. Such edge cases aren't worth handling when they go against intent, as the
// specified fee rate is too low to be valid
// bitcoin::policy::DEFAULT_MIN_RELAY_TX_FEE is in sats/kilo-vbyte
if needed_fee < ((u64::from(bitcoin::policy::DEFAULT_MIN_RELAY_TX_FEE) * vsize) / 1000) {
Err(TransactionError::TooLowFee)?;
}
if input_sat < (payment_sat + needed_fee) {
Err(TransactionError::NotEnoughFunds)?;
}
// If there's a change address, check if there's change to give it
if let Some(change) = change.as_ref() {
let weight_with_change = Self::calculate_weight(tx_ins.len(), payments, Some(change));
let fee_with_change = fee_per_weight * weight_with_change;
if let Some(value) = input_sat.checked_sub(payment_sat + fee_with_change) {
if value >= DUST {
tx_outs
.push(TxOut { value: Amount::from_sat(value), script_pubkey: change.script_pubkey() });
weight = weight_with_change;
needed_fee = fee_with_change;
}
}
}
if tx_outs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if weight > u64::from(bitcoin::policy::MAX_STANDARD_TX_WEIGHT) {
Err(TransactionError::TooLargeTransaction)?;
}
Ok(SignableTransaction {
tx: Transaction {
version: Version(2),
lock_time: LockTime::ZERO,
input: tx_ins,
output: tx_outs,
},
offsets,
prevouts: inputs.drain(..).map(|input| input.output).collect(),
needed_fee,
})
}
/// Returns the outputs this transaction will create.
pub fn outputs(&self) -> &[TxOut] {
&self.tx.output
}
/// Create a multisig machine for this transaction.
///
/// Returns None if the wrong keys are used.
pub fn multisig(
self,
keys: ThresholdKeys<Secp256k1>,
mut transcript: RecommendedTranscript,
) -> Option<TransactionMachine> {
transcript.domain_separate(b"bitcoin_transaction");
transcript.append_message(b"root_key", keys.group_key().to_encoded_point(true).as_bytes());
// Transcript the inputs and outputs
let tx = &self.tx;
for input in &tx.input {
transcript.append_message(b"input_hash", input.previous_output.txid);
transcript.append_message(b"input_output_index", input.previous_output.vout.to_le_bytes());
}
for payment in &tx.output {
transcript.append_message(b"output_script", payment.script_pubkey.as_bytes());
transcript.append_message(b"output_amount", payment.value.to_sat().to_le_bytes());
}
let mut sigs = vec![];
for i in 0 .. tx.input.len() {
let mut transcript = transcript.clone();
// This unwrap is safe since any transaction with this many inputs would violate the maximum
// size allowed under standardness rules, which this lib errors on at creation
transcript.append_message(b"signing_input", u32::try_from(i).unwrap().to_le_bytes());
let offset = keys.clone().offset(self.offsets[i]);
if address_payload(offset.group_key())?.script_pubkey() != self.prevouts[i].script_pubkey {
None?;
}
sigs.push(AlgorithmMachine::new(
Schnorr::new(transcript),
keys.clone().offset(self.offsets[i]),
));
}
Some(TransactionMachine { tx: self, sigs })
}
}
/// A FROST signing machine to produce a Bitcoin transaction.
///
/// This does not support caching its preprocess. When sign is called, the message must be empty.
/// This will panic if either `cache` is called or the message isn't empty.
pub struct TransactionMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl PreprocessMachine for TransactionMachine {
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type Signature = Transaction;
type SignMachine = TransactionSignMachine;
fn preprocess<R: RngCore + CryptoRng>(
mut self,
rng: &mut R,
) -> (Self::SignMachine, Self::Preprocess) {
let mut preprocesses = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.map(|sig| {
let (sig, preprocess) = sig.preprocess(rng);
preprocesses.push(preprocess);
sig
})
.collect();
(TransactionSignMachine { tx: self.tx, sigs }, preprocesses)
}
}
pub struct TransactionSignMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmSignMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignMachine<Transaction> for TransactionSignMachine {
type Params = ();
type Keys = ThresholdKeys<Secp256k1>;
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
type SignatureMachine = TransactionSignatureMachine;
fn cache(self) -> CachedPreprocess {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn from_cache(
_: (),
_: ThresholdKeys<Secp256k1>,
_: CachedPreprocess,
) -> Result<Self, FrostError> {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
self.sigs.iter().map(|sig| sig.read_preprocess(reader)).collect()
}
fn sign(
mut self,
commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
if !msg.is_empty() {
panic!("message was passed to the TransactionMachine when it generates its own");
}
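// Transpose the preprocesses, a per-participant list of per-input preprocesses, into a
// per-input list of per-participant preprocesses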
let commitments = (0 .. self.sigs.len())
.map(|c| {
commitments
.iter()
.map(|(l, commitments)| (*l, commitments[c].clone()))
.collect::<HashMap<_, _>>()
})
.collect::<Vec<_>>();
let mut cache = SighashCache::new(&self.tx.tx);
// Sign committing to all inputs
let prevouts = Prevouts::All(&self.tx.prevouts);
let mut shares = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.enumerate()
.map(|(i, sig)| {
let (sig, share) = sig.sign(
commitments[i].clone(),
cache
.taproot_key_spend_signature_hash(i, &prevouts, TapSighashType::Default)
// This should never happen since the inputs align with the TX the cache was
// constructed with, and because i is always < prevouts.len()
.expect("taproot_key_spend_signature_hash failed to return a hash")
.as_ref(),
)?;
shares.push(share);
Ok(sig)
})
.collect::<Result<_, _>>()?;
Ok((TransactionSignatureMachine { tx: self.tx.tx, sigs }, shares))
}
}
pub struct TransactionSignatureMachine {
tx: Transaction,
sigs: Vec<AlgorithmSignatureMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignatureMachine<Transaction> for TransactionSignatureMachine {
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
self.sigs.iter().map(|sig| sig.read_share(reader)).collect()
}
fn complete(
mut self,
mut shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
for (input, schnorr) in self.tx.input.iter_mut().zip(self.sigs.drain(..)) {
let sig = schnorr.complete(
shares.iter_mut().map(|(l, shares)| (*l, shares.remove(0))).collect::<HashMap<_, _>>(),
)?;
let mut witness = Witness::new();
witness.push(sig);
input.witness = witness;
}
Ok(self.tx)
}
}
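
To make the fee checks above concrete, a worked sketch of the vsize and minimum-fee math `new` performs (all numbers are illustrative, not drawn from a real transaction):

fn main() {
  // Suppose calculate_weight returned 609 weight units for this transaction
  // and the caller specified a fee rate of 2 sats per weight unit
  let weight: u64 = 609;
  let fee_per_weight: u64 = 2;
  let needed_fee = fee_per_weight * weight; // 1218 sats

  // Virtual size is weight ceil-divided by 4, per BIP-141
  let vsize = weight.div_ceil(4); // 153 vbytes

  // DEFAULT_MIN_RELAY_TX_FEE is 1000 sats per kilo-vbyte, so the floor is
  let min_fee = (1000 * vsize) / 1000; // 153 sats

  // This transaction clears the TooLowFee check
  assert!(needed_fee >= min_fee);
}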

View File

@@ -0,0 +1,25 @@
use bitcoin_serai::{bitcoin::hashes::Hash as HashTrait, rpc::RpcError};
mod runner;
use runner::rpc;
async_sequential! {
async fn test_rpc() {
let rpc = rpc().await;
// Test get_latest_block_number and get_block_hash by round tripping them
let latest = rpc.get_latest_block_number().await.unwrap();
let hash = rpc.get_block_hash(latest).await.unwrap();
assert_eq!(rpc.get_block_number(&hash).await.unwrap(), latest);
// Test this actually is the latest block number by checking that asking for the next block errors
assert!(matches!(rpc.get_block_hash(latest + 1).await, Err(RpcError::RequestError(_))));
// Test get_block by checking the received block's hash matches the request
let block = rpc.get_block(&hash).await.unwrap();
// Hashes are stored in reverse. It's bs from Satoshi
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
assert_eq!(hash, block_hash);
}
}

View File

@@ -0,0 +1,48 @@
use std::sync::OnceLock;
use bitcoin_serai::rpc::Rpc;
use tokio::sync::Mutex;
static SEQUENTIAL_CELL: OnceLock<Mutex<()>> = OnceLock::new();
#[allow(non_snake_case)]
pub fn SEQUENTIAL() -> &'static Mutex<()> {
SEQUENTIAL_CELL.get_or_init(|| Mutex::new(()))
}
#[allow(dead_code)]
pub(crate) async fn rpc() -> Rpc {
let rpc = Rpc::new("http://serai:seraidex@127.0.0.1:18443".to_string()).await.unwrap();
// If this node has already been interacted with, clear its chain
if rpc.get_latest_block_number().await.unwrap() > 0 {
rpc
.rpc_call(
"invalidateblock",
serde_json::json!([hex::encode(rpc.get_block_hash(1).await.unwrap())]),
)
.await
.unwrap()
}
rpc
}
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL().lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
drop(guard);
Err(err).unwrap()
}
}).await;
}
)*
}
}

View File

@@ -0,0 +1,363 @@
use std::collections::HashMap;
use rand_core::{RngCore, OsRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{
elliptic_curve::{
group::{ff::Field, Group},
sec1::{Tag, ToEncodedPoint},
},
Scalar, ProjectivePoint,
};
use frost::{
curve::Secp256k1,
Participant, ThresholdKeys,
tests::{THRESHOLD, key_gen, sign_without_caching},
};
use bitcoin_serai::{
bitcoin::{
hashes::Hash as HashTrait,
blockdata::opcodes::all::OP_RETURN,
script::{PushBytesBuf, Instruction, Instructions, Script},
address::NetworkChecked,
OutPoint, Amount, TxOut, Transaction, Network, Address,
},
wallet::{
tweak_keys, address_payload, ReceivedOutput, Scanner, TransactionError, SignableTransaction,
},
rpc::Rpc,
};
mod runner;
use runner::rpc;
const FEE: u64 = 20;
fn is_even(key: ProjectivePoint) -> bool {
key.to_encoded_point(true).tag() == Tag::CompressedEvenY
}
async fn send_and_get_output(rpc: &Rpc, scanner: &Scanner, key: ProjectivePoint) -> ReceivedOutput {
let block_number = rpc.get_latest_block_number().await.unwrap() + 1;
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([
1,
Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap())
]),
)
.await
.unwrap();
// Mine until maturity
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([100, Address::p2sh(Script::new(), Network::Regtest).unwrap()]),
)
.await
.unwrap();
let block = rpc.get_block(&rpc.get_block_hash(block_number).await.unwrap()).await.unwrap();
let mut outputs = scanner.scan_block(&block);
assert_eq!(outputs, scanner.scan_transaction(&block.txdata[0]));
assert_eq!(outputs.len(), 1);
assert_eq!(outputs[0].outpoint(), &OutPoint::new(block.txdata[0].txid(), 0));
assert_eq!(outputs[0].value(), block.txdata[0].output[0].value.to_sat());
assert_eq!(
ReceivedOutput::read::<&[u8]>(&mut outputs[0].serialize().as_ref()).unwrap(),
outputs[0]
);
outputs.swap_remove(0)
}
fn keys() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, ProjectivePoint) {
let mut keys = key_gen(&mut OsRng);
for (_, keys) in keys.iter_mut() {
*keys = tweak_keys(keys);
}
let key = keys.values().next().unwrap().group_key();
(keys, key)
}
fn sign(
keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
tx: SignableTransaction,
) -> Transaction {
let mut machines = HashMap::new();
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx.clone()
.multisig(keys[&i].clone(), RecommendedTranscript::new(b"bitcoin-serai Test Transaction"))
.unwrap(),
);
}
sign_without_caching(&mut OsRng, machines, &[])
}
#[test]
fn test_tweak_keys() {
let mut even = false;
let mut odd = false;
// Generate keys until we get an even set and an odd set
while !(even && odd) {
let mut keys = key_gen(&mut OsRng).drain().next().unwrap().1;
if is_even(keys.group_key()) {
// Tweaking should do nothing
assert_eq!(tweak_keys(&keys).group_key(), keys.group_key());
even = true;
} else {
let tweaked = tweak_keys(&keys).group_key();
assert_ne!(tweaked, keys.group_key());
// Tweaking should produce an even key
assert!(is_even(tweaked));
// Verify it uses the smallest possible offset
while keys.group_key().to_encoded_point(true).tag() == Tag::CompressedOddY {
keys = keys.offset(Scalar::ONE);
}
assert_eq!(tweaked, keys.group_key());
odd = true;
}
}
}
async_sequential! {
async fn test_scanner() {
// Test Scanners are creatable for even keys.
for _ in 0 .. 128 {
let key = ProjectivePoint::random(&mut OsRng);
assert_eq!(Scanner::new(key).is_some(), is_even(key));
}
let mut key = ProjectivePoint::random(&mut OsRng);
while !is_even(key) {
key += ProjectivePoint::GENERATOR;
}
{
let mut scanner = Scanner::new(key).unwrap();
for _ in 0 .. 128 {
let mut offset = Scalar::random(&mut OsRng);
let registered = scanner.register_offset(offset).unwrap();
// Registering this again should return None
assert!(scanner.register_offset(offset).is_none());
// We can only register offsets resulting in even keys
// Make this even
while !is_even(key + (ProjectivePoint::GENERATOR * offset)) {
offset += Scalar::ONE;
}
// Ensure it matches the registered offset
assert_eq!(registered, offset);
// Assert registering this again fails
assert!(scanner.register_offset(offset).is_none());
}
}
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
assert_eq!(send_and_get_output(&rpc, &scanner, key).await.offset(), Scalar::ZERO);
// Register an offset and test receiving to it
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
assert_eq!(
send_and_get_output(&rpc, &scanner, key + (ProjectivePoint::GENERATOR * offset))
.await
.offset(),
offset
);
}
async fn test_transaction_errors() {
let (_, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let inputs = vec![output];
let addr = || Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap());
let payments = vec![(addr(), 1000)];
assert!(SignableTransaction::new(inputs.clone(), &payments, None, None, FEE).is_ok());
assert_eq!(
SignableTransaction::new(vec![], &payments, None, None, FEE),
Err(TransactionError::NoInputs)
);
// No change
assert!(SignableTransaction::new(inputs.clone(), &[(addr(), 1000)], None, None, FEE).is_ok());
// Consolidation TX
assert!(SignableTransaction::new(inputs.clone(), &[], Some(addr()), None, FEE).is_ok());
// Data
assert!(SignableTransaction::new(inputs.clone(), &[], None, Some(vec![]), FEE).is_ok());
// No outputs
assert_eq!(
SignableTransaction::new(inputs.clone(), &[], None, None, FEE),
Err(TransactionError::NoOutputs),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), 1)], None, None, FEE),
Err(TransactionError::DustPayment),
);
assert!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 80]), FEE).is_ok()
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 81]), FEE),
Err(TransactionError::TooMuchData),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[], Some(addr()), None, 0),
Err(TransactionError::TooLowFee),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), inputs[0].value() * 2)], None, None, FEE),
Err(TransactionError::NotEnoughFunds),
);
assert_eq!(
SignableTransaction::new(inputs, &vec![(addr(), 1000); 10000], None, None, FEE),
Err(TransactionError::TooLargeTransaction),
);
}
async fn test_send() {
let (keys, key) = keys();
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
// Get inputs, one not offset and one offset
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let offset_key = key + (ProjectivePoint::GENERATOR * offset);
let offset_output = send_and_get_output(&rpc, &scanner, offset_key).await;
assert_eq!(offset_output.offset(), offset);
// Declare payments, change, fee
let payments = [
(Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap()), 1005),
(Address::<NetworkChecked>::new(Network::Regtest, address_payload(offset_key).unwrap()), 1007)
];
let change_offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let change_key = key + (ProjectivePoint::GENERATOR * change_offset);
let change_addr =
Address::<NetworkChecked>::new(Network::Regtest, address_payload(change_key).unwrap());
// Create and sign the TX
let tx = SignableTransaction::new(
vec![output.clone(), offset_output.clone()],
&payments,
Some(change_addr.clone()),
None,
FEE
).unwrap();
let needed_fee = tx.needed_fee();
let tx = sign(&keys, tx);
assert_eq!(tx.output.len(), 3);
// Ensure we can scan it
let outputs = scanner.scan_transaction(&tx);
for (o, output) in outputs.iter().enumerate() {
assert_eq!(output.outpoint(), &OutPoint::new(tx.txid(), u32::try_from(o).unwrap()));
assert_eq!(&ReceivedOutput::read::<&[u8]>(&mut output.serialize().as_ref()).unwrap(), output);
}
assert_eq!(outputs[0].offset(), Scalar::ZERO);
assert_eq!(outputs[1].offset(), offset);
assert_eq!(outputs[2].offset(), change_offset);
// Make sure the payments were properly created
for ((output, scanned), payment) in tx.output.iter().zip(outputs.iter()).zip(payments.iter()) {
assert_eq!(
output,
&TxOut { script_pubkey: payment.0.script_pubkey(), value: Amount::from_sat(payment.1) },
);
assert_eq!(scanned.value(), payment.1);
}
// Make sure the change is correct
assert_eq!(needed_fee, u64::from(tx.weight()) * FEE);
let input_value = output.value() + offset_output.value();
let output_value = tx.output.iter().map(|output| output.value.to_sat()).sum::<u64>();
assert_eq!(input_value - output_value, needed_fee);
let change_amount =
input_value - payments.iter().map(|payment| payment.1).sum::<u64>() - needed_fee;
assert_eq!(
tx.output[2],
TxOut { script_pubkey: change_addr.script_pubkey(), value: Amount::from_sat(change_amount) },
);
// This also tests send_raw_transaction and get_transaction, which the RPC test can't
// effectively test
rpc.send_raw_transaction(&tx).await.unwrap();
let mut hash = *tx.txid().as_raw_hash().as_byte_array();
hash.reverse();
assert_eq!(tx, rpc.get_transaction(&hash).await.unwrap());
}
async fn test_data() {
let (keys, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let data_len = 60 + usize::try_from(OsRng.next_u64() % 21).unwrap();
let mut data = vec![0; data_len];
OsRng.fill_bytes(&mut data);
let tx = sign(
&keys,
SignableTransaction::new(
vec![output],
&[],
Some(Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap())),
Some(data.clone()),
FEE
).unwrap()
);
assert!(tx.output[0].script_pubkey.is_op_return());
let check = |mut instructions: Instructions| {
assert_eq!(instructions.next().unwrap().unwrap(), Instruction::Op(OP_RETURN));
assert_eq!(
instructions.next().unwrap().unwrap(),
Instruction::PushBytes(&PushBytesBuf::try_from(data.clone()).unwrap()),
);
assert!(instructions.next().is_none());
};
check(tx.output[0].script_pubkey.instructions());
check(tx.output[0].script_pubkey.instructions_minimal());
}
}

View File

@@ -7,31 +7,33 @@ repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
edition = "2021"
publish = false
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
hex-literal = "0.3"
thiserror = "1"
rand_core = "0.6"
thiserror = { version = "1", default-features = false }
eyre = { version = "0.6", default-features = false }
serde_json = "1.0"
serde = "1.0"
sha3 = { version = "0.10", default-features = false, features = ["std"] }
sha3 = "0.10"
group = "0.12"
k256 = { version = "0.11", features = ["arithmetic", "keccak256", "ecdsa"] }
group = { version = "0.13", default-features = false }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
eyre = "0.6"
ethers = { version = "1", features = ["abigen", "ethers-solc"] }
[build-dependencies]
ethers-solc = "1"
ethers-core = { version = "2", default-features = false }
ethers-providers = { version = "2", default-features = false }
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
serde = { version = "1", default-features = false, features = ["std"] }
serde_json = { version = "1", default-features = false, features = ["std"] }
sha2 = { version = "0.10", default-features = false, features = ["std"] }
tokio = { version = "1", features = ["macros"] }

View File

@@ -1,16 +1,15 @@
use ethers_solc::{Project, ProjectPathsConfig};
fn main() {
println!("cargo:rerun-if-changed=contracts");
println!("cargo:rerun-if-changed=artifacts");
// configure the project with all its paths, solc, cache etc.
let project = Project::builder()
.paths(ProjectPathsConfig::hardhat(env!("CARGO_MANIFEST_DIR")).unwrap())
.build()
.unwrap();
project.compile().unwrap();
#[rustfmt::skip]
let args = [
"--base-path", ".",
"-o", "./artifacts", "--overwrite",
"--bin", "--abi",
"--optimize",
"./contracts/Schnorr.sol"
];
// Tell Cargo that if a source file changes, to rerun this build script.
project.rerun_if_sources_changed();
assert!(std::process::Command::new("solc").args(args).status().unwrap().success());
}

View File

@@ -1,9 +1,10 @@
use crate::crypto::ProcessedSignature;
use ethers::{contract::ContractFactory, prelude::*, solc::artifacts::contract::ContractBytecode};
use eyre::{eyre, Result};
use std::fs::File;
use std::sync::Arc;
use thiserror::Error;
use eyre::{eyre, Result};
use ethers_providers::{Provider, Http};
use ethers_contract::abigen;
use crate::crypto::ProcessedSignature;
#[derive(Error, Debug)]
pub enum EthereumError {
@@ -11,27 +12,10 @@ pub enum EthereumError {
VerificationError,
}
abigen!(
Schnorr,
"./artifacts/Schnorr.sol/Schnorr.json",
event_derives(serde::Deserialize, serde::Serialize),
);
pub async fn deploy_schnorr_verifier_contract(
client: Arc<SignerMiddleware<Provider<Http>, LocalWallet>>,
) -> Result<Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>> {
let path = "./artifacts/Schnorr.sol/Schnorr.json";
let artifact: ContractBytecode = serde_json::from_reader(File::open(path).unwrap()).unwrap();
let abi = artifact.abi.unwrap();
let bin = artifact.bytecode.unwrap().object;
let factory = ContractFactory::new(abi, bin.into_bytes().unwrap(), client.clone());
let contract = factory.deploy(())?.send().await?;
let contract = Schnorr::new(contract.address(), client);
Ok(contract)
}
abigen!(Schnorr, "./artifacts/Schnorr.abi");
pub async fn call_verify(
contract: &Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>,
contract: &Schnorr<Provider<Http>>,
params: &ProcessedSignature,
) -> Result<()> {
if contract

View File

@@ -2,18 +2,20 @@ use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint, DecompressPoint},
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint, sec1::ToEncodedPoint,
},
AffinePoint, ProjectivePoint, Scalar, U256,
};
use frost::{algorithm::Hram, curve::Secp256k1};
pub fn keccak256(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).try_into().unwrap()
Keccak256::digest(data).into()
}
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::from_uint_reduced(U256::from_be_slice(&keccak256(data)))
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub fn address(point: &ProjectivePoint) -> [u8; 20] {
@@ -56,7 +58,7 @@ impl Hram<Secp256k1> for EthereumHram {
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.append(&mut m.to_vec());
Scalar::from_uint_reduced(U256::from_be_slice(&keccak256(&data)))
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
@@ -92,7 +94,7 @@ pub fn process_signature_for_contract(
) -> ProcessedSignature {
let encoded_pk = A.to_encoded_point(true);
let px = &encoded_pk.as_ref()[1 .. 33];
let px_scalar = Scalar::from_uint_reduced(U256::from_be_slice(px));
let px_scalar = Scalar::reduce(U256::from_be_slice(px));
let e = EthereumHram::hram(R, A, &[chain_id.to_be_byte_array().as_slice(), &m].concat());
ProcessedSignature {
s,

View File

@@ -1,36 +1,94 @@
use std::{convert::TryFrom, sync::Arc, time::Duration};
use std::{convert::TryFrom, sync::Arc, time::Duration, fs::File};
use rand_core::OsRng;
use k256::{elliptic_curve::bigint::ArrayEncoding, U256};
use ::k256::{
elliptic_curve::{bigint::ArrayEncoding, PrimeField},
U256,
};
use ethers::{
prelude::*,
use ethers_core::{
types::Signature,
abi::Abi,
utils::{keccak256, Anvil, AnvilInstance},
};
use ethers_contract::ContractFactory;
use ethers_providers::{Middleware, Provider, Http};
use frost::{
curve::Secp256k1,
algorithm::Schnorr as Algo,
Participant,
algorithm::IetfSchnorr,
tests::{key_gen, algorithm_machines, sign},
};
use ethereum_serai::{
crypto,
contract::{Schnorr, call_verify, deploy_schnorr_verifier_contract},
contract::{Schnorr, call_verify},
};
async fn deploy_test_contract(
) -> (u32, AnvilInstance, Schnorr<SignerMiddleware<Provider<Http>, LocalWallet>>) {
// TODO: Replace with a contract deployment from an unknown account, so the environment solely has
// to fund the deployer, not create/pass a wallet
pub async fn deploy_schnorr_verifier_contract(
chain_id: u32,
client: Arc<Provider<Http>>,
wallet: &k256::ecdsa::SigningKey,
) -> eyre::Result<Schnorr<Provider<Http>>> {
let abi: Abi = serde_json::from_reader(File::open("./artifacts/Schnorr.abi").unwrap()).unwrap();
let hex_bin_buf = std::fs::read_to_string("./artifacts/Schnorr.bin").unwrap();
let hex_bin =
if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf };
let bin = hex::decode(hex_bin).unwrap();
let factory = ContractFactory::new(abi, bin.into(), client.clone());
let mut deployment_tx = factory.deploy(())?.tx;
deployment_tx.set_chain_id(chain_id);
deployment_tx.set_gas(500_000);
let (max_fee_per_gas, max_priority_fee_per_gas) = client.estimate_eip1559_fees(None).await?;
deployment_tx.as_eip1559_mut().unwrap().max_fee_per_gas = Some(max_fee_per_gas);
deployment_tx.as_eip1559_mut().unwrap().max_priority_fee_per_gas = Some(max_priority_fee_per_gas);
let sig_hash = deployment_tx.sighash();
let (sig, rid) = wallet.sign_prehash_recoverable(sig_hash.as_ref()).unwrap();
// EIP-155 v
let mut v = u64::from(rid.to_byte());
assert!((v == 0) || (v == 1));
v += u64::from((chain_id * 2) + 35);
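// Worked example (illustrative): with Anvil's default chain ID of 31337 and a recovery ID of 0,
// v = 0 + (31337 * 2) + 35 = 62709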
let r = sig.r().to_repr();
let r_ref: &[u8] = r.as_ref();
let s = sig.s().to_repr();
let s_ref: &[u8] = s.as_ref();
let deployment_tx = deployment_tx.rlp_signed(&Signature { r: r_ref.into(), s: s_ref.into(), v });
let pending_tx = client.send_raw_transaction(deployment_tx).await?;
let mut receipt;
while {
receipt = client.get_transaction_receipt(pending_tx.tx_hash()).await?;
receipt.is_none()
} {
tokio::time::sleep(Duration::from_secs(6)).await;
}
let receipt = receipt.unwrap();
assert!(receipt.status == Some(1.into()));
let contract = Schnorr::new(receipt.contract_address.unwrap(), client.clone());
Ok(contract)
}
async fn deploy_test_contract() -> (u32, AnvilInstance, Schnorr<Provider<Http>>) {
let anvil = Anvil::new().spawn();
let wallet: LocalWallet = anvil.keys()[0].clone().into();
let provider =
Provider::<Http>::try_from(anvil.endpoint()).unwrap().interval(Duration::from_millis(10u64));
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let client = Arc::new(SignerMiddleware::new_with_provider_chain(provider, wallet).await.unwrap());
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
(chain_id, anvil, deploy_schnorr_verifier_contract(client).await.unwrap())
(chain_id, anvil, deploy_schnorr_verifier_contract(chain_id, client, &wallet).await.unwrap())
}
#[tokio::test]
@@ -44,14 +102,14 @@ async fn test_ecrecover_hack() {
let chain_id = U256::from(chain_id);
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&1].group_key();
let group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = Algo::<Secp256k1, crypto::EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, crypto::EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),

View File

@@ -1,51 +1,57 @@
use ethereum_serai::crypto::*;
use frost::curve::Secp256k1;
use k256::{
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, sec1::ToEncodedPoint},
ProjectivePoint, Scalar, U256,
};
use frost::{curve::Secp256k1, Participant};
use ethereum_serai::crypto::*;
#[test]
fn test_ecrecover() {
use k256::ecdsa::{
recoverable::Signature,
signature::{Signer, Verifier},
SigningKey, VerifyingKey,
};
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use k256::ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey};
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
const MESSAGE: &[u8] = b"Hello, World!";
let sig: Signature = private.sign(MESSAGE);
public.verify(MESSAGE, &sig).unwrap();
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
assert_eq!(
ecrecover(hash_to_scalar(MESSAGE), sig.as_ref()[64], *sig.r(), *sig.s()).unwrap(),
address(&ProjectivePoint::from(public))
ecrecover(hash_to_scalar(MESSAGE), recovery_id.unwrap().is_y_odd().into(), *sig.r(), *sig.s())
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
#[test]
fn test_signing() {
use frost::{
algorithm::Schnorr,
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let _group_key = keys[&1].group_key();
let _group_key = keys[&Participant::new(1).unwrap()].group_key();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = Schnorr::<Secp256k1, EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig = sign(
&mut OsRng,
algo,
keys.clone(),
algorithm_machines(&mut OsRng, Schnorr::<Secp256k1, EthereumHram>::new(), &keys),
algorithm_machines(&mut OsRng, IetfSchnorr::<Secp256k1, EthereumHram>::ietf(), &keys),
MESSAGE,
);
}
@@ -53,16 +59,16 @@ fn test_signing() {
#[test]
fn test_ecrecover_hack() {
use frost::{
algorithm::Schnorr,
algorithm::IetfSchnorr,
tests::{algorithm_machines, key_gen, sign},
};
use rand_core::OsRng;
let keys = key_gen::<_, Secp256k1>(&mut OsRng);
let group_key = keys[&1].group_key();
let group_key = keys[&Participant::new(1).unwrap()].group_key();
let group_key_encoded = group_key.to_encoded_point(true);
let group_key_compressed = group_key_encoded.as_ref();
let group_key_x = Scalar::from_uint_reduced(U256::from_be_slice(&group_key_compressed[1 .. 33]));
let group_key_x = Scalar::reduce(U256::from_be_slice(&group_key_compressed[1 .. 33]));
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
@@ -70,7 +76,7 @@ fn test_ecrecover_hack() {
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = Schnorr::<Secp256k1, EthereumHram>::new();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
algo.clone(),

View File

@@ -1,62 +1,110 @@
[package]
name = "monero-serai"
version = "0.1.2-alpha"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
hex-literal = "0.3"
lazy_static = "1"
thiserror = "1"
std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false }
rand_core = "0.6"
rand_chacha = { version = "0.3", optional = true }
rand = "0.8"
rand_distr = "0.4"
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "1.5", features = ["zeroize_derive"] }
subtle = "2.4"
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
sha3 = "0.10"
blake2 = { version = "0.10", optional = true }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
curve25519-dalek = { version = "3", features = ["std"] }
sha3 = { version = "0.10", default-features = false }
pbkdf2 = { version = "0.12", features = ["simple"], default-features = false }
group = { version = "0.12" }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.1" }
multiexp = { path = "../../crypto/multiexp", version = "0.2", features = ["batch"] }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.2", features = ["recommended"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.5", features = ["ed25519"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.2", features = ["serialize"], optional = true }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.4", default-features = false, features = ["batch"] }
monero-generators = { path = "generators", version = "0.1" }
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.4", default-features = false, features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["ed25519"], optional = true }
hex = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
monero-generators = { path = "generators", version = "0.4", default-features = false }
base58-monero = "1"
monero-epee-bin-serde = "1.0"
futures = { version = "0.3", default-features = false, features = ["alloc"], optional = true }
digest_auth = "0.3"
reqwest = { version = "0.11", features = ["json"] }
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive", "alloc"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "2", default-features = false, features = ["check"] }
# Used for the provided HTTP RPC
digest_auth = { version = "0.3", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls"], optional = true }
tokio = { version = "1", default-features = false, optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.1" }
monero-generators = { path = "generators", version = "0.1" }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
monero-generators = { path = "generators", version = "0.4", default-features = false }
[dev-dependencies]
tokio = { version = "1", features = ["full"] }
tokio = { version = "1", features = ["sync", "macros"] }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.5", features = ["ed25519", "tests"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
[features]
multisig = ["rand_chacha", "blake2", "transcript", "frost", "dleq"]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand/std",
"rand_chacha/std",
"rand_distr/std",
"sha3/std",
"pbkdf2/std",
"multiexp/std",
"transcript/std",
"dleq/std",
"monero-generators/std",
"futures?/std",
"hex/std",
"serde/std",
"serde_json/std",
"base58-monero/std",
]
cache-distribution = ["futures"]
http-rpc = ["digest_auth", "simple-request", "tokio"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio/rt-multi-thread", "tokio/macros", "http-rpc"]
experimental = []
default = ["std", "http-rpc"]

View File

@@ -4,16 +4,46 @@ A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
monero-serai contains safety features, such as first-class acknowledgement of
the burning bug, yet also a high level API around creating transactions.
monero-serai also offers a FROST-based multisig, which is orders of magnitude
more performant than Monero's.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet will not deprive functionality from other users, and may potentially leave
Serai's umbrella at some point.
yet will not deprive functionality from other users.
Various legacy transaction formats are not currently implemented, yet
monero-serai is still increasing its support for various transaction types.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There aren't active development efforts around
them however.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable under a targeted attack attempting to detect whether the
receiving wallet is monero-serai or wallet2. It also should be generally safe
for usage with remote nodes.
It won't hide that it's monero-serai from remote nodes however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope to
monero-serai.

View File

@@ -41,15 +41,16 @@ fn generators(prefix: &'static str, path: &str) {
.write_all(
format!(
"
lazy_static! {{
pub static ref GENERATORS: Generators = Generators {{
G: [
pub(crate) static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: vec![
{G_str}
],
H: [
H: vec![
{H_str}
],
}};
}})
}}
",
)

View File

@@ -1,6 +1,6 @@
[package]
name = "monero-generators"
version = "0.1.1"
version = "0.4.0"
description = "Monero's hash_to_point and generators"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero/generators"
@@ -12,13 +12,17 @@ all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[dependencies]
lazy_static = "1"
std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false }
subtle = "2.4"
subtle = { version = "^2.4", default-features = false }
sha3 = "0.10"
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "3", features = ["std"] }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
group = "0.12"
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.1.4" }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.4", default-features = false }
[features]
std = ["std-shims/std", "subtle/std", "sha3/std", "dalek-ff-group/std"]
default = ["std"]

View File

@@ -3,3 +3,5 @@
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no_std when the `alloc` feature is enabled.

View File

@@ -3,7 +3,7 @@ use subtle::ConditionallySelectable;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use group::ff::{Field, PrimeField};
use dalek_ff_group::field::FieldElement;
use dalek_ff_group::FieldElement;
use crate::hash;
@@ -13,7 +13,7 @@ pub fn hash_to_point(bytes: [u8; 32]) -> EdwardsPoint {
let A = FieldElement::from(486662u64);
let v = FieldElement::from_square(hash(&bytes)).double();
let w = v + FieldElement::one();
let w = v + FieldElement::ONE;
let x = w.square() + (-A.square() * v);
// This isn't the complete X, yet its initial value

View File

@@ -1,17 +1,17 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
use lazy_static::lazy_static;
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::{sync::OnceLock, vec::Vec};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_POINT,
edwards::{EdwardsPoint as DalekPoint, CompressedEdwardsY},
};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint, CompressedEdwardsY};
use group::Group;
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
@@ -24,13 +24,29 @@ fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
lazy_static! {
/// Monero alternate generator `H`, used for amounts in Pedersen commitments.
pub static ref H: DalekPoint =
CompressedEdwardsY(hash(&ED25519_BASEPOINT_POINT.compress().to_bytes()))
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
CompressedEdwardsY(hash(&EdwardsPoint::generator().to_bytes()))
.decompress()
.unwrap()
.mul_by_cofactor();
.mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 0 .. 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
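// Note: H_pow_2()[0] == H() and each entry doubles the prior, so H_pow_2()[i] == H() * 2^i,
// covering the 64 bits of a Monero amount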
const MAX_M: usize = 16;
@@ -40,25 +56,24 @@ const MAX_MN: usize = MAX_M * N;
/// Container struct for Bulletproofs(+) generators.
#[allow(non_snake_case)]
pub struct Generators {
pub G: [EdwardsPoint; MAX_MN],
pub H: [EdwardsPoint; MAX_MN],
pub G: Vec<EdwardsPoint>,
pub H: Vec<EdwardsPoint>,
}
/// Generate generators as needed for Bulletproofs(+), as Monero does.
pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
let mut res =
Generators { G: [EdwardsPoint::identity(); MAX_MN], H: [EdwardsPoint::identity(); MAX_MN] };
let mut res = Generators { G: Vec::with_capacity(MAX_MN), H: Vec::with_capacity(MAX_MN) };
for i in 0 .. MAX_MN {
let i = 2 * i;
let mut even = H.compress().to_bytes().to_vec();
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();
write_varint(&i.try_into().unwrap(), &mut even).unwrap();
write_varint(&(i + 1).try_into().unwrap(), &mut odd).unwrap();
res.H[i / 2] = EdwardsPoint(hash_to_point(hash(&even)));
res.G[i / 2] = EdwardsPoint(hash_to_point(hash(&odd)));
res.H.push(EdwardsPoint(hash_to_point(hash(&even))));
res.G.push(EdwardsPoint(hash_to_point(hash(&odd))));
}
res
}

View File

@@ -1,4 +1,4 @@
use std::io::{self, Write};
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
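// The hunk above elides the body; a minimal sketch of the 7-bit little-endian VarInt
// encoding it implies (not necessarily the crate's exact code):
#[allow(dead_code)]
fn write_varint_sketch<W: Write>(mut varint: u64, w: &mut W) -> io::Result<()> {
  loop {
    // Take the low 7 bits, setting the continuation bit if more bits remain
    let mut byte = u8::try_from(varint & 0x7f).unwrap();
    varint >>= 7;
    if varint != 0 {
      byte |= VARINT_CONTINUATION_MASK;
    }
    w.write_all(&[byte])?;
    if varint == 0 {
      break Ok(());
    }
  }
}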


@@ -0,0 +1,323 @@
#[cfg(feature = "binaries")]
mod binaries {
pub(crate) use std::sync::Arc;
pub(crate) use curve25519_dalek::{
scalar::Scalar,
edwards::{CompressedEdwardsY, EdwardsPoint},
};
pub(crate) use multiexp::BatchVerifier;
pub(crate) use serde::Deserialize;
pub(crate) use serde_json::json;
pub(crate) use monero_serai::{
Commitment,
ringct::RctPrunable,
transaction::{Input, Transaction},
block::Block,
rpc::{RpcError, Rpc, HttpRpc},
};
pub(crate) use tokio::task::JoinHandle;
pub(crate) async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = loop {
match rpc.get_block_hash(block_i).await {
Ok(hash) => break hash,
Err(RpcError::ConnectionError(e)) => {
println!("get_block_hash ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i}'s hash: {e:?}"),
}
};
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = loop {
match rpc.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await {
Ok(res) => break res,
Err(RpcError::ConnectionError(e)) => {
println!("get_block ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i} via block.hash(): {e:?}"),
}
};
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice())
.unwrap_or_else(|e| panic!("couldn't deserialize block {block_i}: {e}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = loop {
match rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
{
Ok(txs) => break txs,
Err(RpcError::ConnectionError(e)) => {
println!("get_transactions ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't call get_transactions: {e:?}"),
}
};
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
let mut batch = BatchVerifier::new(block.txs.len());
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
if matches!(tx.rct_signatures.prunable, RctPrunable::Null) {
assert_eq!(tx.prefix.version, 1);
assert!(!tx.signatures.is_empty());
continue;
}
let sig_hash = tx.signature_hash();
// Verify all proofs we support proving for
// This is due to having debug_asserts calling verify within their proving, and CLSAG
// multisig explicitly calling verify as part of its signing process
// Accordingly, making sure our signature_hash algorithm is correct is great, and further
// making sure the verification functions are valid is appreciated
match tx.rct_signatures.prunable {
RctPrunable::Null |
RctPrunable::AggregateMlsagBorromean { .. } |
RctPrunable::MlsagBorromean { .. } => {}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
for (i, clsag) in clsags.into_iter().enumerate() {
let (amount, key_offsets, image) = match &tx.prefix.inputs[i] {
Input::Gen(_) => panic!("Input::Gen"),
Input::ToKey { amount, key_offsets, key_image } => (amount, key_offsets, key_image),
};
let mut running_sum = 0;
let mut actual_indexes = vec![];
for offset in key_offsets {
running_sum += offset;
actual_indexes.push(running_sum);
}
async fn get_outs(
rpc: &Rpc<HttpRpc>,
amount: u64,
indexes: &[u64],
) -> Vec<[EdwardsPoint; 2]> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = loop {
match rpc
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": amount,
"index": o
})).collect::<Vec<_>>()
})),
)
.await
{
Ok(outs) => break outs,
Err(RpcError::ConnectionError(e)) => {
println!("get_outs ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't connect to RPC to get outs: {e:?}"),
}
};
let rpc_point = |point: &str| {
CompressedEdwardsY(
hex::decode(point)
.expect("invalid hex for ring member")
.try_into()
.expect("invalid point len for ring member"),
)
.decompress()
.expect("invalid point for ring member")
};
outs
.outs
.iter()
.map(|out| {
let mask = rpc_point(&out.mask);
if amount != 0 {
assert_eq!(mask, Commitment::new(Scalar::from(1u8), amount).calculate());
}
[rpc_point(&out.key), mask]
})
.collect()
}
clsag
.verify(
&get_outs(&rpc, amount.unwrap_or(0), &actual_indexes).await,
image,
&pseudo_outs[i],
&sig_hash,
)
.unwrap();
}
}
}
}
assert!(batch.verify_vartime());
}
println!("Deserialized, hashed, and reserialized {block_i} with {} TXs", txs_len);
}
}
#[cfg(feature = "binaries")]
#[tokio::main]
async fn main() {
use binaries::*;
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| async move {
HttpRpc::new(url.clone())
.await
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone()).await;
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone()).await));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}
#[cfg(not(feature = "binaries"))]
fn main() {
panic!("To run binaries, please build with `--feature binaries`.");
}


@@ -1,11 +1,24 @@
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use crate::{serialize::*, transaction::Transaction};
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
pub major_version: u64,
pub minor_version: u64,
pub major_version: u8,
pub minor_version: u8,
pub timestamp: u64,
pub previous: [u8; 32],
pub nonce: u32,
@@ -45,16 +58,55 @@ pub struct Block {
}
impl Block {
pub fn number(&self) -> usize {
match self.miner_tx.prefix.inputs.first() {
Some(Input::Gen(number)) => (*number).try_into().unwrap(),
_ => panic!("invalid block, miner TX didn't have a Input::Gen"),
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
write_varint(&self.txs.len().try_into().unwrap(), w)?;
write_varint(&self.txs.len(), w)?;
for tx in &self.txs {
w.write_all(tx)?;
}
Ok(())
}
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
/// Serialize the block as required for the proof of work hash.
///
/// This is distinct from the serialization required for the block hash. To get the block hash,
/// use the [`Block::hash`] function.
pub fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
blob
}
pub fn hash(&self) -> [u8; 32] {
let mut hashable = self.serialize_hashable();
// Monero prepends a VarInt of the block hashing blob's length before getting the block hash
// but doesn't do this when getting the proof of work hash :)
let mut hashing_blob = Vec::with_capacity(8 + hashable.len());
write_varint(&u64::try_from(hashable.len()).unwrap(), &mut hashing_blob).unwrap();
hashing_blob.append(&mut hashable);
let hash = hash(&hashing_blob);
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
@@ -65,7 +117,7 @@ impl Block {
Ok(Block {
header: BlockHeader::read(r)?,
miner_tx: Transaction::read(r)?,
txs: (0 .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
txs: (0_usize .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
})
}
}
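// Round-trip sketch (`block_blob` is a hypothetical serialized block): reading then
// serializing is lossless, as the chain-scanning binary above asserts.
#[allow(dead_code)]
fn example_block_round_trip(block_blob: &[u8]) -> io::Result<[u8; 32]> {
  let block = Block::read(&mut &*block_blob)?;
  debug_assert_eq!(block.serialize(), block_blob);
  Ok(block.hash())
}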


@@ -1,42 +1,37 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
//! A modern Monero transaction library intended for usage in wallets. It prides
//! itself on accuracy, correctness, and removing common pitfalls developers may
//! face.
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
//! monero-serai contains safety features, such as first-class acknowledgement of
//! the burning bug, yet also a high-level API around creating transactions.
//! monero-serai also offers a FROST-based multisig, which is orders of magnitude
//! more performant than Monero's.
use std_shims::{sync::OnceLock, io};
//! monero-serai was written for Serai, a decentralized exchange aiming to support
//! Monero. Despite this, monero-serai is intended to be a widely usable library,
//! accurate to Monero. monero-serai guarantees the functionality needed for Serai,
//! yet will not withhold functionality from other users, and may potentially leave
//! Serai's umbrella at some point.
//! Various legacy transaction formats are not currently implemented, yet
//! monero-serai is still increasing its support for older transaction types.
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
edwards::{EdwardsPoint, EdwardsBasepointTable},
};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub use monero_generators::H;
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// UnreducedScalar struct with functionality for recovering incorrectly reduced scalars.
mod unreduced_scalar;
/// Ring Signature structs and functionality.
pub mod ring_signatures;
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
@@ -51,22 +46,34 @@ pub mod wallet;
#[cfg(test)]
mod tests;
/// Monero protocol version. v15 is omitted as v15 was simply v14 and v16 being active at the same
/// time, with regards to the transactions supported. Accordingly, v16 should be used during v15.
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
/// Monero protocol version.
///
/// v15 is omitted as v15 was simply v14 and v16 being active at the same time, with regards to the
/// transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
Unsupported(usize),
v14,
v16,
Custom { ring_len: usize, bp_plus: bool },
Custom {
ring_len: usize,
bp_plus: bool,
optimal_rct_type: RctType,
view_tags: bool,
v16_fee: bool,
},
}
impl Protocol {
/// Amount of ring members under this protocol version.
pub fn ring_len(&self) -> usize {
match self {
Protocol::Unsupported(_) => panic!("Unsupported protocol version"),
Protocol::v14 => 11,
Protocol::v16 => 16,
Protocol::Custom { ring_len, .. } => *ring_len,
@@ -74,33 +81,115 @@ impl Protocol {
}
/// Whether or not the specified version uses Bulletproofs or Bulletproofs+.
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub fn bp_plus(&self) -> bool {
match self {
Protocol::Unsupported(_) => panic!("Unsupported protocol version"),
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { bp_plus, .. } => *bp_plus,
}
}
}
lazy_static! {
static ref H_TABLE: EdwardsBasepointTable = EdwardsBasepointTable::create(&H);
// TODO: Make this an Option when we support pre-RCT protocols
pub fn optimal_rct_type(&self) -> RctType {
match self {
Protocol::v14 => RctType::Clsag,
Protocol::v16 => RctType::BulletproofsPlus,
Protocol::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
/// Whether or not the specified version uses view tags.
pub fn view_tags(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { view_tags, .. } => *view_tags,
}
}
/// Whether or not the specified version uses the fee algorithm from Monero
/// hard fork version 16 (released in v18 binaries).
pub fn v16_fee(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { v16_fee, .. } => *v16_fee,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Protocol::v14 => w.write_all(&[0, 14]),
Protocol::v16 => w.write_all(&[0, 16]),
Protocol::Custom { ring_len, bp_plus, optimal_rct_type, view_tags, v16_fee } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])?;
w.write_all(&[u8::from(*view_tags)])?;
w.write_all(&[u8::from(*v16_fee)])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Protocol> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Protocol::v14,
16 => Protocol::v16,
_ => Err(io::Error::other("unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Protocol::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::other("invalid RctType serialization"))?,
view_tags: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
v16_fee: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
},
_ => Err(io::Error::other("unrecognized custom protocol serialization"))?,
},
_ => Err(io::Error::other("unrecognized protocol serialization"))?,
})
}
}
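// Round-trip sketch of the encoding above (field values hypothetical): [1, 0] tags a
// version-0 Custom protocol, then a little-endian u16 ring length, a bp_plus bool, the
// RctType byte, a view_tags bool, and a v16_fee bool.
#[allow(dead_code)]
fn example_protocol_round_trip() {
  let protocol = Protocol::Custom {
    ring_len: 16,
    bp_plus: true,
    optimal_rct_type: RctType::BulletproofsPlus,
    view_tags: true,
    v16_fee: true,
  };
  let mut buf = vec![];
  protocol.write(&mut buf).unwrap();
  assert_eq!(Protocol::read(&mut buf.as_slice()).unwrap(), protocol);
}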
/// Transparent structure representing a Pedersen commitment's contents.
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct Commitment {
pub mask: Scalar,
pub amount: u64,
}
impl core::fmt::Debug for Commitment {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt.debug_struct("Commitment").field("amount", &self.amount).finish_non_exhaustive()
}
}
impl Commitment {
/// The zero commitment, defined as a mask of 1 (as to not be the identity) and a 0 amount.
/// A commitment to zero, defined with a mask of 1 (so as to not be the identity).
pub fn zero() -> Commitment {
Commitment { mask: Scalar::one(), amount: 0 }
Commitment { mask: Scalar::ONE, amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Commitment {
@@ -109,7 +198,7 @@ impl Commitment {
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * &ED25519_BASEPOINT_TABLE) + (&Scalar::from(self.amount) * &*H_TABLE)
(&self.mask * ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
@@ -131,6 +220,6 @@ pub fn hash_to_scalar(data: &[u8]) -> Scalar {
// This library acknowledges the practical impossibility of it occurring, and doesn't bother to
// code in logic to handle it. That said, if it ever occurs, something must happen in order to
// not generate/verify a proof we believe to be valid when it isn't
assert!(scalar != Scalar::zero(), "ZERO HASH: {data:?}");
assert!(scalar != Scalar::ZERO, "ZERO HASH: {data:?}");
scalar
}

View File

@@ -0,0 +1,55 @@
use std_shims::vec::Vec;
use crate::hash;
pub(crate) fn merkle_root(root: [u8; 32], leafs: &[[u8; 32]]) -> [u8; 32] {
match leafs.len() {
0 => root,
1 => hash(&[root, leafs[0]].concat()),
_ => {
let mut hashes = Vec::with_capacity(1 + leafs.len());
hashes.push(root);
hashes.extend(leafs);
// Monero preprocesses this so the length is a power of 2
let mut high_pow_2 = 4; // 4 is the lowest value this can be
while high_pow_2 < hashes.len() {
high_pow_2 *= 2;
}
let low_pow_2 = high_pow_2 / 2;
// Merge right-most hashes until we're at the low_pow_2
{
let overage = hashes.len() - low_pow_2;
let mut rightmost = hashes.drain((low_pow_2 - overage) ..);
// This drain is of even length, since we took overage elements from both beneath and above
// low_pow_2, twice as many elements as overage
debug_assert_eq!(rightmost.len() % 2, 0);
let mut paired_hashes = Vec::with_capacity(overage);
while let Some(left) = rightmost.next() {
let right = rightmost.next().unwrap();
paired_hashes.push(hash(&[left.as_ref(), &right].concat()));
}
drop(rightmost);
hashes.extend(paired_hashes);
assert_eq!(hashes.len(), low_pow_2);
}
// Do a traditional pairing off
let mut new_hashes = Vec::with_capacity(hashes.len() / 2);
while hashes.len() > 1 {
let mut i = 0;
while i < hashes.len() {
new_hashes.push(hash(&[hashes[i], hashes[i + 1]].concat()));
i += 2;
}
hashes = new_hashes;
new_hashes = Vec::with_capacity(hashes.len() / 2);
}
hashes[0]
}
}
}
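// Worked sketch (hypothetical hashes): a miner TX hash plus four TX hashes is five hashes.
// low_pow_2 is 4, so the rightmost two hashes are merged (5 -> 4) before the traditional
// pairing off reduces 4 -> 2 -> 1:
//
// let root = merkle_root(miner_tx_hash, &[tx_0, tx_1, tx_2, tx_3]);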


@@ -0,0 +1,72 @@
use std_shims::{
io::{self, *},
vec::Vec,
};
use zeroize::Zeroize;
use curve25519_dalek::{EdwardsPoint, Scalar};
use monero_generators::hash_to_point;
use crate::{serialize::*, hash_to_scalar};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Signature {
c: Scalar,
r: Scalar,
}
impl Signature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_scalar(&self.c, w)?;
write_scalar(&self.r, w)?;
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Signature> {
Ok(Signature { c: read_scalar(r)?, r: read_scalar(r)? })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingSignature {
sigs: Vec<Signature>,
}
impl RingSignature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for sig in &self.sigs {
sig.write(w)?;
}
Ok(())
}
pub fn read<R: Read>(members: usize, r: &mut R) -> io::Result<RingSignature> {
Ok(RingSignature { sigs: read_raw_vec(Signature::read, members, r)? })
}
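// Verification (below) recomputes each ring member's L_i = (c_i * P_i) + (r_i * G) and
// R_i = (r_i * Hp(P_i)) + (c_i * I), then checks sum(c_i) == H(msg || L_1 || R_1 || ..),
// per the original CryptoNote ring signature scheme.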
pub fn verify(&self, msg: &[u8; 32], ring: &[EdwardsPoint], key_image: &EdwardsPoint) -> bool {
if ring.len() != self.sigs.len() {
return false;
}
let mut buf = Vec::with_capacity(32 + (32 * 2 * ring.len()));
buf.extend_from_slice(msg);
let mut sum = Scalar::ZERO;
for (ring_member, sig) in ring.iter().zip(&self.sigs) {
#[allow(non_snake_case)]
let Li = EdwardsPoint::vartime_double_scalar_mul_basepoint(&sig.c, ring_member, &sig.r);
buf.extend_from_slice(Li.compress().as_bytes());
#[allow(non_snake_case)]
let Ri = (sig.r * hash_to_point(ring_member.compress().to_bytes())) + (sig.c * key_image);
buf.extend_from_slice(Ri.compress().as_bytes());
sum += sig.c;
}
sum == hash_to_scalar(&buf)
}
}


@@ -0,0 +1,97 @@
use core::fmt::Debug;
use std_shims::io::{self, Read, Write};
use curve25519_dalek::{traits::Identity, Scalar, EdwardsPoint};
use monero_generators::H_pow_2;
use crate::{hash_to_scalar, unreduced_scalar::UnreducedScalar, serialize::*};
/// 64 Borromean ring signatures.
///
/// s0 and s1 are stored as `UnreducedScalar`s due to Monero not requiring them to be reduced.
/// `UnreducedScalar` preserves their original byte encoding and implements the custom reduction
/// algorithm which was actually in use.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanSignatures {
pub s0: [UnreducedScalar; 64],
pub s1: [UnreducedScalar; 64],
pub ee: Scalar,
}
impl BorromeanSignatures {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanSignatures> {
Ok(BorromeanSignatures {
s0: read_array(UnreducedScalar::read, r)?,
s1: read_array(UnreducedScalar::read, r)?,
ee: read_scalar(r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for s0 in &self.s0 {
s0.write(w)?;
}
for s1 in &self.s1 {
s1.write(w)?;
}
write_scalar(&self.ee, w)
}
fn verify(&self, keys_a: &[EdwardsPoint], keys_b: &[EdwardsPoint]) -> bool {
let mut transcript = [0; 2048];
for i in 0 .. 64 {
#[allow(non_snake_case)]
let LL = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&self.ee,
&keys_a[i],
&self.s0[i].recover_monero_slide_scalar(),
);
#[allow(non_snake_case)]
let LV = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&hash_to_scalar(LL.compress().as_bytes()),
&keys_b[i],
&self.s1[i].recover_monero_slide_scalar(),
);
transcript[(i * 32) .. ((i + 1) * 32)].copy_from_slice(LV.compress().as_bytes());
}
hash_to_scalar(&transcript) == self.ee
}
}
/// A range proof premised on Borromean ring signatures.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanRange {
pub sigs: BorromeanSignatures,
pub bit_commitments: [EdwardsPoint; 64],
}
impl BorromeanRange {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanRange> {
Ok(BorromeanRange {
sigs: BorromeanSignatures::read(r)?,
bit_commitments: read_array(read_point, r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.sigs.write(w)?;
write_raw_vec(write_point, &self.bit_commitments, w)
}
pub fn verify(&self, commitment: &EdwardsPoint) -> bool {
if &self.bit_commitments.iter().sum::<EdwardsPoint>() != commitment {
return false;
}
#[allow(non_snake_case)]
let H_pow_2 = H_pow_2();
let mut commitments_sub_one = [EdwardsPoint::identity(); 64];
for i in 0 .. 64 {
commitments_sub_one[i] = self.bit_commitments[i] - H_pow_2[i];
}
self.sigs.verify(&self.bit_commitments, &commitments_sub_one)
}
}


@@ -1,7 +1,5 @@
// Required to be for this entire file, which isn't an issue, as it wouldn't bind to the static
#![allow(non_upper_case_globals)]
use std_shims::{vec::Vec, sync::OnceLock};
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
@@ -15,13 +13,17 @@ use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
// Bring things into ff/group
lazy_static! {
pub(crate) static ref INV_EIGHT: Scalar = Scalar::from(8u8).invert().unwrap();
pub(crate) static ref H: EdwardsPoint = EdwardsPoint(*DALEK_H);
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
@@ -34,7 +36,7 @@ pub(crate) const LOG_N: usize = 6; // 2 << 6 == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * *INV_EIGHT
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
@@ -48,7 +50,7 @@ pub(crate) fn vector_exponent(
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().cloned().flatten().collect::<Vec<_>>().as_ref()]
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
@@ -80,8 +82,8 @@ pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, Scalar
if j < sv.len() {
bit = Choice::from((sv[j][i / 8] >> (i % 8)) & 1);
}
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::zero(), &Scalar::one(), bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::one(), &Scalar::zero(), bit);
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
@@ -91,7 +93,7 @@ pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, Scalar
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * *INV_EIGHT).collect::<Vec<_>>();
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
@@ -102,7 +104,7 @@ pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * *INV_EIGHT)
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
@@ -116,20 +118,21 @@ pub(crate) fn LR_statements(
let mut res = a
.0
.iter()
.cloned()
.zip(G_i.iter().cloned())
.chain(b.0.iter().cloned().zip(H_i.iter().cloned()))
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
lazy_static! {
pub(crate) static ref TWO_N: ScalarVector = ScalarVector::powers(Scalar::from(2u8), N);
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::zero(); 1 << w.len()];
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {

View File

@@ -1,6 +1,9 @@
#![allow(non_snake_case)]
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
@@ -16,12 +19,10 @@ pub(crate) mod core;
use self::core::LOG_N;
pub(crate) mod original;
pub use original::GENERATORS as BULLETPROOFS_GENERATORS;
pub(crate) mod plus;
pub use plus::GENERATORS as BULLETPROOFS_PLUS_GENERATORS;
use self::original::OriginalStruct;
pub(crate) use self::original::OriginalStruct;
pub(crate) use self::plus::PlusStruct;
pub(crate) mod plus;
use self::plus::*;
pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
@@ -30,27 +31,45 @@ pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Bulletproofs {
Original(OriginalStruct),
Plus(PlusStruct),
Plus(AggregateRangeProof),
}
impl Bulletproofs {
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
let fields = if plus { 6 } else { 9 };
fn bp_fields(plus: bool) -> usize {
if plus {
6
} else {
9
}
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L106-L124
pub(crate) fn calculate_bp_clawback(plus: bool, n_outputs: usize) -> (usize, usize) {
#[allow(non_snake_case)]
let mut LR_len = usize::try_from(usize::BITS - (outputs - 1).leading_zeros()).unwrap();
let padded_outputs = 1 << LR_len;
let mut LR_len = 0;
let mut n_padded_outputs = 1;
while n_padded_outputs < n_outputs {
LR_len += 1;
n_padded_outputs = 1 << LR_len;
}
LR_len += LOG_N;
let len = (fields + (2 * LR_len)) * 32;
len +
if padded_outputs <= 2 {
0
} else {
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
((base * padded_outputs) - size) * 4 / 5
}
let mut bp_clawback = 0;
if n_padded_outputs > 2 {
let fields = Bulletproofs::bp_fields(plus);
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
bp_clawback = ((base * n_padded_outputs) - size) * 4 / 5;
}
(bp_clawback, LR_len)
}
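// Worked sketch: for plus = true (6 fields) and 3 outputs, the loop yields LR_len = 2
// (n_padded_outputs = 4), then LR_len = 2 + LOG_N = 8. base = ((6 + 14) * 32) / 2 = 320 and
// size = (6 + 16) * 32 = 704, so bp_clawback = ((320 * 4) - 704) * 4 / 5 = 460.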
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
#[allow(non_snake_case)]
let (bp_clawback, LR_len) = Bulletproofs::calculate_bp_clawback(plus, outputs);
32 * (Bulletproofs::bp_fields(plus) + (2 * LR_len)) + 2 + bp_clawback
}
/// Prove the list of commitments are within [0 .. 2^64).
@@ -59,13 +78,22 @@ impl Bulletproofs {
outputs: &[Commitment],
plus: bool,
) -> Result<Bulletproofs, TransactionError> {
if outputs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if outputs.len() > MAX_OUTPUTS {
return Err(TransactionError::TooManyOutputs)?;
Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Bulletproofs::Original(OriginalStruct::prove(rng, outputs))
} else {
Bulletproofs::Plus(PlusStruct::prove(rng, outputs))
use dalek_ff_group::EdwardsPoint as DfgPoint;
Bulletproofs::Plus(
AggregateRangeStatement::new(outputs.iter().map(|com| DfgPoint(com.calculate())).collect())
.unwrap()
.prove(rng, AggregateRangeWitness::new(outputs).unwrap())
.unwrap(),
)
})
}
@@ -74,7 +102,22 @@ impl Bulletproofs {
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Bulletproofs::Original(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => {
let mut verifier = BatchVerifier::new(1);
// If this commitment is torsioned (which is allowed), this won't be a well-formed
// dfg::EdwardsPoint (expected to be of prime-order)
// The actual BP+ impl will perform a torsion clear though, making this safe
// TODO: Have AggregateRangeStatement take in dalek EdwardsPoint for clarity on this
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
if !statement.verify(rng, &mut verifier, (), bp.clone()) {
return false;
}
verifier.verify_vartime()
}
}
}
@@ -91,7 +134,14 @@ impl Bulletproofs {
) -> bool {
match self {
Bulletproofs::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => {
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
statement.verify(rng, verifier, id, bp.clone())
}
}
}
@@ -116,14 +166,14 @@ impl Bulletproofs {
}
Bulletproofs::Plus(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.A1, w)?;
write_point(&bp.B, w)?;
write_scalar(&bp.r1, w)?;
write_scalar(&bp.s1, w)?;
write_scalar(&bp.d1, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)
write_point(&bp.A.0, w)?;
write_point(&bp.wip.A.0, w)?;
write_point(&bp.wip.B.0, w)?;
write_scalar(&bp.wip.r_answer.0, w)?;
write_scalar(&bp.wip.s_answer.0, w)?;
write_scalar(&bp.wip.delta_answer.0, w)?;
specific_write_vec(&bp.wip.L.iter().cloned().map(|L| L.0).collect::<Vec<_>>(), w)?;
specific_write_vec(&bp.wip.R.iter().cloned().map(|R| R.0).collect::<Vec<_>>(), w)
}
}
}
@@ -161,15 +211,19 @@ impl Bulletproofs {
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Plus(PlusStruct {
A: read_point(r)?,
A1: read_point(r)?,
B: read_point(r)?,
r1: read_scalar(r)?,
s1: read_scalar(r)?,
d1: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
use dalek_ff_group::{Scalar as DfgScalar, EdwardsPoint as DfgPoint};
Ok(Bulletproofs::Plus(AggregateRangeProof {
A: DfgPoint(read_point(r)?),
wip: WipProof {
A: DfgPoint(read_point(r)?),
B: DfgPoint(read_point(r)?),
r_answer: DfgScalar(read_scalar(r)?),
s_answer: DfgScalar(read_scalar(r)?),
delta_answer: DfgScalar(read_scalar(r)?),
L: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
R: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
},
}))
}
}


@@ -1,4 +1,5 @@
use lazy_static::lazy_static;
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
@@ -14,9 +15,9 @@ use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
lazy_static! {
static ref ONE_N: ScalarVector = ScalarVector(vec![Scalar::one(); N]);
static ref IP12: Scalar = inner_product(&ONE_N, &TWO_N);
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| inner_product(&ScalarVector(vec![Scalar::ONE; N]), TWO_N()))
}
#[derive(Clone, PartialEq, Eq, Debug)]
@@ -48,8 +49,9 @@ impl OriginalStruct {
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let (mut alpha, A) = alpha_rho(&mut *rng, &GENERATORS, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, &GENERATORS, &sL, &sR);
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
@@ -62,7 +64,7 @@ impl OriginalStruct {
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N[i]);
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
@@ -77,8 +79,8 @@ impl OriginalStruct {
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, *H), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, *H), (tau2, EdwardsPoint::generator())]);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
@@ -112,10 +114,10 @@ impl OriginalStruct {
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = GENERATORS.G[.. a.len()].to_vec();
let mut H_proof = GENERATORS.H[.. a.len()].to_vec();
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = *H * x_ip;
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
@@ -188,7 +190,7 @@ impl OriginalStruct {
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().cloned());
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
@@ -230,10 +232,10 @@ impl OriginalStruct {
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * *IP12;
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, *H));
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
@@ -247,10 +249,10 @@ impl OriginalStruct {
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, *H));
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::one(), A));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
@@ -260,13 +262,14 @@ impl OriginalStruct {
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, GENERATORS.G[i]));
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, GENERATORS.H[i]));
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}

View File

@@ -1,306 +0,0 @@
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::ff::Field;
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::BatchVerifier;
use crate::{
Commitment, hash,
ringct::{hash_to_point::raw_hash_to_point, bulletproofs::core::*},
};
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
lazy_static! {
static ref TRANSCRIPT: [u8; 32] =
EdwardsPoint(raw_hash_to_point(hash(b"bulletproof_plus_transcript"))).compress().to_bytes();
}
// TRANSCRIPT isn't a Scalar, so we need this alternative for the first hash
fn hash_plus<C: IntoIterator<Item = DalekPoint>>(commitments: C) -> (Scalar, Vec<EdwardsPoint>) {
let (cache, commitments) = hash_commitments(commitments);
(hash_to_scalar(&[&*TRANSCRIPT as &[u8], &cache.to_bytes()].concat()), commitments)
}
// d[j*N+i] = z**(2*(j+1)) * 2**i
fn d(z: Scalar, M: usize, MN: usize) -> (ScalarVector, ScalarVector) {
let zpow = ScalarVector::even_powers(z, 2 * M);
let mut d = vec![Scalar::zero(); MN];
for j in 0 .. M {
for i in 0 .. N {
d[(j * N) + i] = zpow[j] * TWO_N[i];
}
}
(zpow, ScalarVector(d))
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct PlusStruct {
pub(crate) A: DalekPoint,
pub(crate) A1: DalekPoint,
pub(crate) B: DalekPoint,
pub(crate) r1: DalekScalar,
pub(crate) s1: DalekScalar,
pub(crate) d1: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
}
impl PlusStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> PlusStruct {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_plus(commitments_points.clone());
let (mut alpha1, A) = alpha_rho(&mut *rng, &GENERATORS, &aL, &aR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let (zpow, d) = d(z, M, MN);
let aL1 = aL - z;
let ypow = ScalarVector::powers(y, MN + 2);
let mut y_for_d = ScalarVector(ypow.0[1 ..= MN].to_vec());
y_for_d.0.reverse();
let aR1 = (aR + z) + (y_for_d * d);
for (j, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
alpha1 += zpow[j] * ypow[MN + 1] * gamma;
}
let mut a = aL1;
let mut b = aR1;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = GENERATORS.G[.. a.len()].to_vec();
let mut H_proof = GENERATORS.H[.. a.len()].to_vec();
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = weighted_inner_product(&aL, &bR, y);
let cR = weighted_inner_product(&(&aR * ypow[aR.len()]), &bL, y);
let (mut dL, mut dR) = (Scalar::random(&mut *rng), Scalar::random(&mut *rng));
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let mut L_i = LR_statements(&(&aL * yinvpow[aL.len()]), G_R, &bR, H_L, cL, *H);
L_i.push((dL, G));
let L_i = prove_multiexp(&L_i);
L.push(L_i);
let mut R_i = LR_statements(&(&aR * ypow[aR.len()]), G_L, &bL, H_R, cR, *H);
R_i.push((dR, G));
let R_i = prove_multiexp(&R_i);
R.push(R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
G_proof = hadamard_fold(G_L, G_R, winv, w * yinvpow[aL.len()]);
H_proof = hadamard_fold(H_L, H_R, w, winv);
a = (&aL * w) + (aR * (winv * ypow[aL.len()]));
b = (bL * winv) + (bR * w);
alpha1 += (dL * (w * w)) + (dR * (winv * winv));
dL.zeroize();
dR.zeroize();
}
let mut r = Scalar::random(&mut *rng);
let mut s = Scalar::random(&mut *rng);
let mut d = Scalar::random(&mut *rng);
let mut eta = Scalar::random(&mut *rng);
let A1 = prove_multiexp(&[
(r, G_proof[0]),
(s, H_proof[0]),
(d, G),
((r * y * b[0]) + (s * y * a[0]), *H),
]);
let B = prove_multiexp(&[(r * y * s, *H), (eta, G)]);
let e = hash_cache(&mut cache, &[A1.compress().to_bytes(), B.compress().to_bytes()]);
let r1 = (a[0] * e) + r;
r.zeroize();
let s1 = (b[0] * e) + s;
s.zeroize();
let d1 = ((d * e) + eta) + (alpha1 * (e * e));
d.zeroize();
eta.zeroize();
alpha1.zeroize();
let res = PlusStruct {
A: *A,
A1: *A1,
B: *B,
r1: *r1,
s1: *s1,
d1: *d1,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
};
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_plus(commitments.iter().cloned());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes()]);
let yinv = y.invert().unwrap();
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
let e = hash_cache(&mut cache, &[self.A1.compress().to_bytes(), self.B.compress().to_bytes()]);
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let A = normalize(&self.A);
let A1 = normalize(&self.A1);
let B = normalize(&self.B);
let mut commitments = commitments.iter().map(|c| c.mul_by_cofactor()).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(logMN + 5 + (2 * (MN + logMN)));
let mut yMN = y;
for _ in 0 .. logMN {
yMN *= yMN;
}
let yMNy = yMN * y;
let (zpow, d) = d(z, M, MN);
let zsq = zpow[0];
let esq = e * e;
let minus_esq = -esq;
let commitment_weight = minus_esq * yMNy;
for (i, commitment) in commitments.drain(..).enumerate() {
proof.push((commitment_weight * zpow[i], commitment));
}
// Invert B, instead of the Scalar, as the latter is only 2x as expensive yet enables reduction
// to a single addition under vartime for the first BP verified in the batch, which is expected
// to be much more significant
proof.push((Scalar::one(), -B));
proof.push((-e, A1));
proof.push((minus_esq, A));
proof.push((Scalar(self.d1), G));
let d_sum = zpow.sum() * Scalar::from(u64::MAX);
let y_sum = weighted_powers(y, MN).sum();
proof.push((
Scalar(self.r1 * y.0 * self.s1) + (esq * ((yMNy * z * d_sum) + ((zsq - z) * y_sum))),
*H,
));
let w_cache = challenge_products(&w, &winv);
let mut e_r1_y = e * Scalar(self.r1);
let e_s1 = e * Scalar(self.s1);
let esq_z = esq * z;
let minus_esq_z = -esq_z;
let mut minus_esq_y = minus_esq * yMN;
for i in 0 .. MN {
proof.push((e_r1_y * w_cache[i] + esq_z, GENERATORS.G[i]));
proof.push((
(e_s1 * w_cache[(!i) & (MN - 1)]) + minus_esq_z + (minus_esq_y * d[i]),
GENERATORS.H[i],
));
e_r1_y *= yinv;
minus_esq_y *= yinv;
}
for i in 0 .. logMN {
proof.push((minus_esq * w[i] * w[i], L[i]));
proof.push((minus_esq * winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}


@@ -0,0 +1,249 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use multiexp::{multiexp, multiexp_vartime, BatchVerifier};
use group::{
ff::{Field, PrimeField},
Group, GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::{
bulletproofs::core::{MAX_M, N},
bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators,
transcript::*,
weighted_inner_product::{WipStatement, WipWitness, WipProof},
padded_pow_of_2, u64_decompose,
},
},
};
// Figure 3
#[derive(Clone, Debug)]
pub(crate) struct AggregateRangeStatement {
generators: Generators,
V: Vec<EdwardsPoint>,
}
impl Zeroize for AggregateRangeStatement {
fn zeroize(&mut self) {
self.V.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct AggregateRangeWitness {
values: Vec<u64>,
gammas: Vec<Scalar>,
}
impl AggregateRangeWitness {
pub(crate) fn new(commitments: &[Commitment]) -> Option<Self> {
if commitments.is_empty() || (commitments.len() > MAX_M) {
return None;
}
let mut values = Vec::with_capacity(commitments.len());
let mut gammas = Vec::with_capacity(commitments.len());
for commitment in commitments {
values.push(commitment.amount);
gammas.push(Scalar(commitment.mask));
}
Some(AggregateRangeWitness { values, gammas })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct AggregateRangeProof {
pub(crate) A: EdwardsPoint,
pub(crate) wip: WipProof,
}
impl AggregateRangeStatement {
pub(crate) fn new(V: Vec<EdwardsPoint>) -> Option<Self> {
if V.is_empty() || (V.len() > MAX_M) {
return None;
}
Some(Self { generators: Generators::new(), V })
}
fn transcript_A(transcript: &mut Scalar, A: EdwardsPoint) -> (Scalar, Scalar) {
let y = hash_to_scalar(&[transcript.to_repr().as_ref(), A.to_bytes().as_ref()].concat());
let z = hash_to_scalar(y.to_bytes().as_ref());
*transcript = z;
(y, z)
}
fn d_j(j: usize, m: usize) -> ScalarVector {
let mut d_j = Vec::with_capacity(m * N);
for _ in 0 .. (j - 1) * N {
d_j.push(Scalar::ZERO);
}
d_j.append(&mut ScalarVector::powers(Scalar::from(2u8), N).0);
for _ in 0 .. (m - j) * N {
d_j.push(Scalar::ZERO);
}
ScalarVector(d_j)
}
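// e.g. with N = 64, d_j(2, 3) is 64 zeroes, then [1, 2, 4, .., 2^63], then 64 more zeroes:
// the powers of 2 land in the j-th (1-indexed) N-sized slot of the m * N vector.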
fn compute_A_hat(
mut V: PointVector,
generators: &Generators,
transcript: &mut Scalar,
mut A: EdwardsPoint,
) -> (Scalar, ScalarVector, Scalar, Scalar, ScalarVector, EdwardsPoint) {
let (y, z) = Self::transcript_A(transcript, A);
A = A.mul_by_cofactor();
while V.len() < padded_pow_of_2(V.len()) {
V.0.push(EdwardsPoint::identity());
}
let mn = V.len() * N;
let mut z_pow = Vec::with_capacity(V.len());
let mut d = ScalarVector::new(mn);
for j in 1 ..= V.len() {
z_pow.push(z.pow(Scalar::from(2 * u64::try_from(j).unwrap()))); // TODO: Optimize this
d = d.add_vec(&Self::d_j(j, V.len()).mul(z_pow[j - 1]));
}
let mut ascending_y = ScalarVector(vec![y]);
for i in 1 .. d.len() {
ascending_y.0.push(ascending_y[i - 1] * y);
}
let y_pows = ascending_y.clone().sum();
let mut descending_y = ascending_y.clone();
descending_y.0.reverse();
let d_descending_y = d.mul_vec(&descending_y);
let y_mn_plus_one = descending_y[0] * y;
let mut commitment_accum = EdwardsPoint::identity();
for (j, commitment) in V.0.iter().enumerate() {
commitment_accum += *commitment * z_pow[j];
}
let neg_z = -z;
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 2);
for (i, d_y_z) in d_descending_y.add(z).0.drain(..).enumerate() {
A_terms.push((neg_z, generators.generator(GeneratorsList::GBold1, i)));
A_terms.push((d_y_z, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((y_mn_plus_one, commitment_accum));
A_terms.push((
((y_pows * z) - (d.sum() * y_mn_plus_one * z) - (y_pows * z.square())),
generators.g(),
));
(y, d_descending_y, y_mn_plus_one, z, ScalarVector(z_pow), A + multiexp_vartime(&A_terms))
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
witness: AggregateRangeWitness,
) -> Option<AggregateRangeProof> {
// Check for consistency with the witness
if self.V.len() != witness.values.len() {
return None;
}
for (commitment, (value, gamma)) in
self.V.iter().zip(witness.values.iter().zip(witness.gammas.iter()))
{
if Commitment::new(**gamma, *value).calculate() != **commitment {
return None;
}
}
let Self { generators, V } = self;
// Monero expects all of these points to be torsion-free
// Generally, for Bulletproofs, it sends points * INV_EIGHT and then performs a torsion clear
// by multiplying by 8
// This also restores the original value due to the preprocessing
// Commitments aren't transmitted * INV_EIGHT though, so this multiplies by INV_EIGHT to enable
// clearing its cofactor without mutating the value
// For some reason, these values are transcripted * INV_EIGHT, not as transmitted
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
// Pad V
while V.len() < padded_pow_of_2(V.len()) {
V.push(EdwardsPoint::identity());
}
let generators = generators.reduce(V.len() * N);
let mut d_js = Vec::with_capacity(V.len());
let mut a_l = ScalarVector(Vec::with_capacity(V.len() * N));
for j in 1 ..= V.len() {
d_js.push(Self::d_j(j, V.len()));
a_l.0.append(&mut u64_decompose(*witness.values.get(j - 1).unwrap_or(&0)).0);
}
let a_r = a_l.sub(Scalar::ONE);
let alpha = Scalar::random(&mut *rng);
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 1);
for (i, a_l) in a_l.0.iter().enumerate() {
A_terms.push((*a_l, generators.generator(GeneratorsList::GBold1, i)));
}
for (i, a_r) in a_r.0.iter().enumerate() {
A_terms.push((*a_r, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((alpha, generators.h()));
let mut A = multiexp(&A_terms);
A_terms.zeroize();
// Multiply by INV_EIGHT per earlier commentary
A.0 *= crate::INV_EIGHT();
let (y, d_descending_y, y_mn_plus_one, z, z_pow, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, A);
let a_l = a_l.sub(z);
let a_r = a_r.add_vec(&d_descending_y).add(z);
let mut alpha = alpha;
for j in 1 ..= witness.gammas.len() {
alpha += z_pow[j - 1] * witness.gammas[j - 1] * y_mn_plus_one;
}
Some(AggregateRangeProof {
A,
wip: WipStatement::new(generators, A_hat, y)
.prove(rng, transcript, WipWitness::new(a_l, a_r, alpha).unwrap())
.unwrap(),
})
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
proof: AggregateRangeProof,
) -> bool {
let Self { generators, V } = self;
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
let generators = generators.reduce(V.len() * N);
let (y, _, _, _, _, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, proof.A);
WipStatement::new(generators, A_hat, y).verify(rng, verifier, id, transcript, proof.wip)
}
}


@@ -0,0 +1,92 @@
#![allow(non_snake_case)]
use group::Group;
use dalek_ff_group::{Scalar, EdwardsPoint};
mod scalar_vector;
pub(crate) use scalar_vector::{ScalarVector, weighted_inner_product};
mod point_vector;
pub(crate) use point_vector::PointVector;
pub(crate) mod transcript;
pub(crate) mod weighted_inner_product;
pub(crate) use weighted_inner_product::*;
pub(crate) mod aggregate_range_proof;
pub(crate) use aggregate_range_proof::*;
pub(crate) fn padded_pow_of_2(i: usize) -> usize {
let mut next_pow_of_2 = 1;
while next_pow_of_2 < i {
next_pow_of_2 <<= 1;
}
next_pow_of_2
}
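// e.g. padded_pow_of_2(1) == 1, padded_pow_of_2(4) == 4, and padded_pow_of_2(5) == 8.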
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub(crate) enum GeneratorsList {
GBold1,
HBold1,
}
// TODO: Table these
#[derive(Clone, Debug)]
pub(crate) struct Generators {
g: EdwardsPoint,
g_bold1: &'static [EdwardsPoint],
h_bold1: &'static [EdwardsPoint],
}
mod generators {
use std_shims::sync::OnceLock;
use monero_generators::Generators;
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
}
impl Generators {
#[allow(clippy::new_without_default)]
pub(crate) fn new() -> Self {
let gens = generators::GENERATORS();
Generators { g: dalek_ff_group::EdwardsPoint(crate::H()), g_bold1: &gens.G, h_bold1: &gens.H }
}
pub(crate) fn len(&self) -> usize {
self.g_bold1.len()
}
pub(crate) fn g(&self) -> EdwardsPoint {
self.g
}
pub(crate) fn h(&self) -> EdwardsPoint {
EdwardsPoint::generator()
}
pub(crate) fn generator(&self, list: GeneratorsList, i: usize) -> EdwardsPoint {
match list {
GeneratorsList::GBold1 => self.g_bold1[i],
GeneratorsList::HBold1 => self.h_bold1[i],
}
}
pub(crate) fn reduce(&self, generators: usize) -> Self {
// Round up to the next power of 2
let generators = padded_pow_of_2(generators);
assert!(generators <= self.g_bold1.len());
Generators {
g: self.g,
g_bold1: &self.g_bold1[.. generators],
h_bold1: &self.h_bold1[.. generators],
}
}
}
// Returns the little-endian decomposition.
fn u64_decompose(value: u64) -> ScalarVector {
let mut bits = ScalarVector::new(64);
for bit in 0 .. 64 {
bits[bit] = Scalar::from((value >> bit) & 1);
}
bits
}
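// e.g. 6 = 0b110, so u64_decompose(6) yields bits [0, 1, 1, 0, .., 0] (least-significant first).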


@@ -0,0 +1,50 @@
use core::ops::{Index, IndexMut};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use dalek_ff_group::EdwardsPoint;
#[cfg(test)]
use multiexp::multiexp;
#[cfg(test)]
use crate::ringct::bulletproofs::plus::ScalarVector;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct PointVector(pub(crate) Vec<EdwardsPoint>);
impl Index<usize> for PointVector {
type Output = EdwardsPoint;
fn index(&self, index: usize) -> &EdwardsPoint {
&self.0[index]
}
}
impl IndexMut<usize> for PointVector {
fn index_mut(&mut self, index: usize) -> &mut EdwardsPoint {
&mut self.0[index]
}
}
impl PointVector {
#[cfg(test)]
pub(crate) fn multiexp(&self, vector: &ScalarVector) -> EdwardsPoint {
debug_assert_eq!(self.len(), vector.len());
let mut res = Vec::with_capacity(self.len());
for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
res.push((scalar, point));
}
multiexp(&res)
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, PointVector(r))
}
}


@@ -0,0 +1,114 @@
use core::{
borrow::Borrow,
ops::{Index, IndexMut},
};
use std_shims::vec::Vec;
use zeroize::Zeroize;
use group::ff::Field;
use dalek_ff_group::Scalar;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
impl Index<usize> for ScalarVector {
type Output = Scalar;
fn index(&self, index: usize) -> &Scalar {
&self.0[index]
}
}
impl IndexMut<usize> for ScalarVector {
fn index_mut(&mut self, index: usize) -> &mut Scalar {
&mut self.0[index]
}
}
impl ScalarVector {
pub(crate) fn new(len: usize) -> Self {
ScalarVector(vec![Scalar::ZERO; len])
}
pub(crate) fn add(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in res.0.iter_mut() {
*val += scalar.borrow();
}
res
}
pub(crate) fn sub(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in res.0.iter_mut() {
*val -= scalar.borrow();
}
res
}
pub(crate) fn mul(&self, scalar: impl Borrow<Scalar>) -> Self {
let mut res = self.clone();
for val in res.0.iter_mut() {
*val *= scalar.borrow();
}
res
}
pub(crate) fn add_vec(&self, vector: &Self) -> Self {
debug_assert_eq!(self.len(), vector.len());
let mut res = self.clone();
for (i, val) in res.0.iter_mut().enumerate() {
*val += vector.0[i];
}
res
}
pub(crate) fn mul_vec(&self, vector: &Self) -> Self {
debug_assert_eq!(self.len(), vector.len());
let mut res = self.clone();
for (i, val) in res.0.iter_mut().enumerate() {
*val *= vector.0[i];
}
res
}
pub(crate) fn inner_product(&self, vector: &Self) -> Scalar {
self.mul_vec(vector).sum()
}
pub(crate) fn powers(x: Scalar, len: usize) -> Self {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::ONE);
res.push(x);
for i in 2 .. len {
res.push(res[i - 1] * x);
}
res.truncate(len);
ScalarVector(res)
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, ScalarVector(r))
}
}
pub(crate) fn weighted_inner_product(
a: &ScalarVector,
b: &ScalarVector,
y: &ScalarVector,
) -> Scalar {
a.inner_product(&b.mul_vec(y))
}
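// The function above is simply <a, b ∘ y>; a hypothetical test (arbitrary
// values, with the y vector built as the powers y^1, y^2 to match its use):
#[cfg(test)]
#[test]
fn weighted_inner_product_definition() {
  let y = Scalar::from(3u64);
  let a = ScalarVector(vec![Scalar::ONE, Scalar::from(2u64)]);
  let b = ScalarVector(vec![Scalar::from(3u64), Scalar::from(4u64)]);
  let y_pows = ScalarVector(vec![y, y * y]);
  // (1 * 3 * y) + (2 * 4 * y^2)
  let expected = (Scalar::from(3u64) * y) + (Scalar::from(8u64) * y * y);
  assert_eq!(weighted_inner_product(&a, &b, &y_pows), expected);
}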

View File

@@ -0,0 +1,24 @@
use std_shims::{sync::OnceLock, vec::Vec};
use dalek_ff_group::{Scalar, EdwardsPoint};
use monero_generators::{hash_to_point as raw_hash_to_point};
use crate::{hash, hash_to_scalar as dalek_hash};
// Monero starts BP+ transcripts with the following constant.
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
// Why this uses a hash_to_point is completely unknown.
*TRANSCRIPT_CELL
.get_or_init(|| raw_hash_to_point(hash(b"bulletproof_plus_transcript")).compress().to_bytes())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
pub(crate) fn initial_transcript(commitments: core::slice::Iter<'_, EdwardsPoint>) -> Scalar {
let commitments_hash =
hash_to_scalar(&commitments.flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>());
hash_to_scalar(&[TRANSCRIPT().as_ref(), &commitments_hash.to_bytes()].concat())
}
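// A minimal usage sketch, assuming two hypothetical output commitments V0/V1:
//   let transcript: Scalar = initial_transcript([V0, V1].iter());
// This binds the fixed domain separator and every commitment into one scalar.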

View File

@@ -0,0 +1,447 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use multiexp::{multiexp, multiexp_vartime, BatchVerifier};
use group::{
ff::{Field, PrimeField},
GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::ringct::bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators, padded_pow_of_2, weighted_inner_product,
transcript::*,
};
// Figure 1
#[derive(Clone, Debug)]
pub(crate) struct WipStatement {
generators: Generators,
P: EdwardsPoint,
y: ScalarVector,
}
impl Zeroize for WipStatement {
fn zeroize(&mut self) {
self.P.zeroize();
self.y.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct WipWitness {
a: ScalarVector,
b: ScalarVector,
alpha: Scalar,
}
impl WipWitness {
pub(crate) fn new(mut a: ScalarVector, mut b: ScalarVector, alpha: Scalar) -> Option<Self> {
if a.0.is_empty() || (a.len() != b.len()) {
return None;
}
// Pad to the next power of 2
let missing = padded_pow_of_2(a.len()) - a.len();
a.0.reserve(missing);
b.0.reserve(missing);
for _ in 0 .. missing {
a.0.push(Scalar::ZERO);
b.0.push(Scalar::ZERO);
}
Some(Self { a, b, alpha })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct WipProof {
pub(crate) L: Vec<EdwardsPoint>,
pub(crate) R: Vec<EdwardsPoint>,
pub(crate) A: EdwardsPoint,
pub(crate) B: EdwardsPoint,
pub(crate) r_answer: Scalar,
pub(crate) s_answer: Scalar,
pub(crate) delta_answer: Scalar,
}
impl WipStatement {
pub(crate) fn new(generators: Generators, P: EdwardsPoint, y: Scalar) -> Self {
debug_assert_eq!(generators.len(), padded_pow_of_2(generators.len()));
// The powers of y: y, y^2, ..., y^n
let mut y_vec = ScalarVector::new(generators.len());
y_vec[0] = y;
for i in 1 .. y_vec.len() {
y_vec[i] = y_vec[i - 1] * y;
}
Self { generators, P, y: y_vec }
}
fn transcript_L_R(transcript: &mut Scalar, L: EdwardsPoint, R: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), L.to_bytes().as_ref(), R.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
fn transcript_A_B(transcript: &mut Scalar, A: EdwardsPoint, B: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), A.to_bytes().as_ref(), B.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
// Prover's variant of the shared code block to calculate G/H/P when n > 1
// Returns each permutation of G/H, since the prover needs to operate on each permutation
// P is dropped as it's unused in the prover's path
// TODO: It'd still probably be faster to keep in terms of the original generators, both between
// the reduced amount of group operations and the potential tabling of the generators under
// multiexp
#[allow(clippy::too_many_arguments)]
fn next_G_H(
transcript: &mut Scalar,
mut g_bold1: PointVector,
mut g_bold2: PointVector,
mut h_bold1: PointVector,
mut h_bold2: PointVector,
L: EdwardsPoint,
R: EdwardsPoint,
y_inv_n_hat: Scalar,
) -> (Scalar, Scalar, Scalar, Scalar, PointVector, PointVector) {
debug_assert_eq!(g_bold1.len(), g_bold2.len());
debug_assert_eq!(h_bold1.len(), h_bold2.len());
debug_assert_eq!(g_bold1.len(), h_bold1.len());
let e = Self::transcript_L_R(transcript, L, R);
let inv_e = e.invert().unwrap();
// This vartime is safe as all of these arguments are public
let mut new_g_bold = Vec::with_capacity(g_bold1.len());
let e_y_inv = e * y_inv_n_hat;
for g_bold in g_bold1.0.drain(..).zip(g_bold2.0.drain(..)) {
new_g_bold.push(multiexp_vartime(&[(inv_e, g_bold.0), (e_y_inv, g_bold.1)]));
}
let mut new_h_bold = Vec::with_capacity(h_bold1.len());
for h_bold in h_bold1.0.drain(..).zip(h_bold2.0.drain(..)) {
new_h_bold.push(multiexp_vartime(&[(e, h_bold.0), (inv_e, h_bold.1)]));
}
let e_square = e.square();
let inv_e_square = inv_e.square();
(e, inv_e, e_square, inv_e_square, PointVector(new_g_bold), PointVector(new_h_bold))
}
/*
This has room for optimization worth investigating further. It currently takes
an iterative approach. It can be optimized further via divide and conquer.
Assume there are 4 challenges.
Iterative approach (current):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across that result and column 2.
3. Do the optimal multiplications across that result and column 3.
Divide and conquer (worth investigating further):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across challenge column 2 and 3.
3. Multiply both results together.
When there are 4 challenges (n=16), the iterative approach does 28 multiplications
versus divide and conquer's 24.
*/
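// Concretely, products[i] is the product, over each challenge (e, inv_e), of e
// if the corresponding bit of i is set and inv_e otherwise, with the first
// challenge mapped to the most significant bit. With two challenges,
// products[0b10] = e_1 * inv_e_2 and products[0b11] = e_1 * e_2.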
fn challenge_products(challenges: &[(Scalar, Scalar)]) -> Vec<Scalar> {
let mut products = vec![Scalar::ONE; 1 << challenges.len()];
if !challenges.is_empty() {
products[0] = challenges[0].1;
products[1] = challenges[0].0;
for (j, challenge) in challenges.iter().enumerate().skip(1) {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * challenge.0;
products[slots - 1] = products[slots / 2] * challenge.1;
slots = slots.saturating_sub(2);
}
}
// Sanity check, since it'd be critical if the above failed to populate
for product in &products {
debug_assert!(!bool::from(product.is_zero()));
}
}
products
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
mut transcript: Scalar,
witness: WipWitness,
) -> Option<WipProof> {
let WipStatement { generators, P, mut y } = self;
#[cfg(not(debug_assertions))]
let _ = P;
if generators.len() != witness.a.len() {
return None;
}
let (g, h) = (generators.g(), generators.h());
let mut g_bold = vec![];
let mut h_bold = vec![];
for i in 0 .. generators.len() {
g_bold.push(generators.generator(GeneratorsList::GBold1, i));
h_bold.push(generators.generator(GeneratorsList::HBold1, i));
}
let mut g_bold = PointVector(g_bold);
let mut h_bold = PointVector(h_bold);
// Check P has the expected relationship
#[cfg(debug_assertions)]
{
let mut P_terms = witness
.a
.0
.iter()
.copied()
.zip(g_bold.0.iter().copied())
.chain(witness.b.0.iter().copied().zip(h_bold.0.iter().copied()))
.collect::<Vec<_>>();
P_terms.push((weighted_inner_product(&witness.a, &witness.b, &y), g));
P_terms.push((witness.alpha, h));
debug_assert_eq!(multiexp(&P_terms), P);
P_terms.zeroize();
}
let mut a = witness.a.clone();
let mut b = witness.b.clone();
let mut alpha = witness.alpha;
// From here on, g_bold.len() is used as n
debug_assert_eq!(g_bold.len(), a.len());
let mut L_vec = vec![];
let mut R_vec = vec![];
// The else (n > 1) case from Figure 1
while g_bold.len() > 1 {
let (a1, a2) = a.clone().split();
let (b1, b2) = b.clone().split();
let (g_bold1, g_bold2) = g_bold.split();
let (h_bold1, h_bold2) = h_bold.split();
let n_hat = g_bold1.len();
debug_assert_eq!(a1.len(), n_hat);
debug_assert_eq!(a2.len(), n_hat);
debug_assert_eq!(b1.len(), n_hat);
debug_assert_eq!(b2.len(), n_hat);
debug_assert_eq!(g_bold1.len(), n_hat);
debug_assert_eq!(g_bold2.len(), n_hat);
debug_assert_eq!(h_bold1.len(), n_hat);
debug_assert_eq!(h_bold2.len(), n_hat);
let y_n_hat = y[n_hat - 1];
y.0.truncate(n_hat);
let d_l = Scalar::random(&mut *rng);
let d_r = Scalar::random(&mut *rng);
let c_l = weighted_inner_product(&a1, &b2, &y);
let c_r = weighted_inner_product(&(a2.mul(y_n_hat)), &b1, &y);
// TODO: Calculate these with a batch inversion
let y_inv_n_hat = y_n_hat.invert().unwrap();
let mut L_terms = a1
.mul(y_inv_n_hat)
.0
.drain(..)
.zip(g_bold2.0.iter().copied())
.chain(b2.0.iter().copied().zip(h_bold1.0.iter().copied()))
.collect::<Vec<_>>();
L_terms.push((c_l, g));
L_terms.push((d_l, h));
let L = multiexp(&L_terms) * Scalar(crate::INV_EIGHT());
L_vec.push(L);
L_terms.zeroize();
let mut R_terms = a2
.mul(y_n_hat)
.0
.drain(..)
.zip(g_bold1.0.iter().copied())
.chain(b1.0.iter().copied().zip(h_bold2.0.iter().copied()))
.collect::<Vec<_>>();
R_terms.push((c_r, g));
R_terms.push((d_r, h));
let R = multiexp(&R_terms) * Scalar(crate::INV_EIGHT());
R_vec.push(R);
R_terms.zeroize();
let (e, inv_e, e_square, inv_e_square);
(e, inv_e, e_square, inv_e_square, g_bold, h_bold) =
Self::next_G_H(&mut transcript, g_bold1, g_bold2, h_bold1, h_bold2, L, R, y_inv_n_hat);
a = a1.mul(e).add_vec(&a2.mul(y_n_hat * inv_e));
b = b1.mul(inv_e).add_vec(&b2.mul(e));
alpha += (d_l * e_square) + (d_r * inv_e_square);
debug_assert_eq!(g_bold.len(), a.len());
debug_assert_eq!(g_bold.len(), h_bold.len());
debug_assert_eq!(g_bold.len(), b.len());
}
// The n == 1 case from Figure 1
debug_assert_eq!(g_bold.len(), 1);
debug_assert_eq!(h_bold.len(), 1);
debug_assert_eq!(a.len(), 1);
debug_assert_eq!(b.len(), 1);
let r = Scalar::random(&mut *rng);
let s = Scalar::random(&mut *rng);
let delta = Scalar::random(&mut *rng);
let eta = Scalar::random(&mut *rng);
let ry = r * y[0];
let mut A_terms =
vec![(r, g_bold[0]), (s, h_bold[0]), ((ry * b[0]) + (s * y[0] * a[0]), g), (delta, h)];
let A = multiexp(&A_terms) * Scalar(crate::INV_EIGHT());
A_terms.zeroize();
let mut B_terms = vec![(ry * s, g), (eta, h)];
let B = multiexp(&B_terms) * Scalar(crate::INV_EIGHT());
B_terms.zeroize();
let e = Self::transcript_A_B(&mut transcript, A, B);
let r_answer = r + (a[0] * e);
let s_answer = s + (b[0] * e);
let delta_answer = eta + (delta * e) + (alpha * e.square());
Some(WipProof { L: L_vec, R: R_vec, A, B, r_answer, s_answer, delta_answer })
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
mut transcript: Scalar,
mut proof: WipProof,
) -> bool {
let WipStatement { generators, P, y } = self;
let (g, h) = (generators.g(), generators.h());
// Verify the L/R lengths
{
let mut lr_len = 0;
while (1 << lr_len) < generators.len() {
lr_len += 1;
}
if (proof.L.len() != lr_len) ||
(proof.R.len() != lr_len) ||
(generators.len() != (1 << lr_len))
{
return false;
}
}
let inv_y = {
let inv_y = y[0].invert().unwrap();
let mut res = Vec::with_capacity(y.len());
res.push(inv_y);
while res.len() < y.len() {
res.push(inv_y * res.last().unwrap());
}
res
};
let mut P_terms = vec![(Scalar::ONE, P)];
P_terms.reserve(6 + (2 * generators.len()) + proof.L.len());
let mut challenges = Vec::with_capacity(proof.L.len());
let product_cache = {
let mut es = Vec::with_capacity(proof.L.len());
for (L, R) in proof.L.iter_mut().zip(proof.R.iter_mut()) {
es.push(Self::transcript_L_R(&mut transcript, *L, *R));
*L = L.mul_by_cofactor();
*R = R.mul_by_cofactor();
}
let mut inv_es = es.clone();
let mut scratch = vec![Scalar::ZERO; es.len()];
group::ff::BatchInverter::invert_with_external_scratch(&mut inv_es, &mut scratch);
drop(scratch);
debug_assert_eq!(es.len(), inv_es.len());
debug_assert_eq!(es.len(), proof.L.len());
debug_assert_eq!(es.len(), proof.R.len());
for ((e, inv_e), (L, R)) in
es.drain(..).zip(inv_es.drain(..)).zip(proof.L.iter().zip(proof.R.iter()))
{
debug_assert_eq!(e.invert().unwrap(), inv_e);
challenges.push((e, inv_e));
let e_square = e.square();
let inv_e_square = inv_e.square();
P_terms.push((e_square, *L));
P_terms.push((inv_e_square, *R));
}
Self::challenge_products(&challenges)
};
let e = Self::transcript_A_B(&mut transcript, proof.A, proof.B);
proof.A = proof.A.mul_by_cofactor();
proof.B = proof.B.mul_by_cofactor();
let neg_e_square = -e.square();
let mut multiexp = P_terms;
multiexp.reserve(4 + (2 * generators.len()));
for (scalar, _) in multiexp.iter_mut() {
*scalar *= neg_e_square;
}
let re = proof.r_answer * e;
for i in 0 .. generators.len() {
let mut scalar = product_cache[i] * re;
if i > 0 {
scalar *= inv_y[i - 1];
}
multiexp.push((scalar, generators.generator(GeneratorsList::GBold1, i)));
}
let se = proof.s_answer * e;
for i in 0 .. generators.len() {
multiexp.push((
se * product_cache[product_cache.len() - 1 - i],
generators.generator(GeneratorsList::HBold1, i),
));
}
multiexp.push((-e, proof.A));
multiexp.push((proof.r_answer * y[0] * proof.s_answer, g));
multiexp.push((proof.delta_answer, h));
multiexp.push((-Scalar::ONE, proof.B));
verifier.queue(rng, id, multiexp);
true
}
}
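// For reference, the statement proven above (and checked by the prover's debug
// assertion) is, with G_i = g_bold1[i] and H_i = h_bold1[i]:
//
//   P = \sum_i a_i G_i + \sum_i b_i H_i + (\sum_i a_i b_i y^{i+1}) g + \alpha h
//
// where the weighted sum is exactly `weighted_inner_product(&a, &b, &y)`.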

View File

@@ -1,4 +1,5 @@
use core::ops::{Add, Sub, Mul, Index};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
@@ -11,6 +12,7 @@ use multiexp::multiexp;
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
macro_rules! math_op {
($Op: ident, $op: ident, $f: expr) => {
#[allow(clippy::redundant_closure_call)]
impl $Op<Scalar> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
@@ -18,6 +20,7 @@ macro_rules! math_op {
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<Scalar> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: Scalar) -> ScalarVector {
@@ -25,6 +28,7 @@ macro_rules! math_op {
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn $op(self, b: ScalarVector) -> ScalarVector {
@@ -33,6 +37,7 @@ macro_rules! math_op {
}
}
#[allow(clippy::redundant_closure_call)]
impl $Op<&ScalarVector> for &ScalarVector {
type Output = ScalarVector;
fn $op(self, b: &ScalarVector) -> ScalarVector {
@@ -48,38 +53,20 @@ math_op!(Mul, mul, |(a, b): (&Scalar, &Scalar)| *a * *b);
impl ScalarVector {
pub(crate) fn new(len: usize) -> ScalarVector {
ScalarVector(vec![Scalar::zero(); len])
ScalarVector(vec![Scalar::ZERO; len])
}
pub(crate) fn powers(x: Scalar, len: usize) -> ScalarVector {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::one());
res.push(Scalar::ONE);
for i in 1 .. len {
res.push(res[i - 1] * x);
}
ScalarVector(res)
}
pub(crate) fn even_powers(x: Scalar, pow: usize) -> ScalarVector {
debug_assert!(pow != 0);
// Verify pow is a power of two
debug_assert_eq!(((pow - 1) & pow), 0);
let xsq = x * x;
let mut res = ScalarVector(Vec::with_capacity(pow / 2));
res.0.push(xsq);
let mut prev = 2;
while prev < pow {
res.0.push(res[res.len() - 1] * xsq);
prev += 2;
}
res
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
@@ -105,20 +92,11 @@ pub(crate) fn inner_product(a: &ScalarVector, b: &ScalarVector) -> Scalar {
(a * b).sum()
}
pub(crate) fn weighted_powers(x: Scalar, len: usize) -> ScalarVector {
ScalarVector(ScalarVector::powers(x, len + 1).0[1 ..].to_vec())
}
pub(crate) fn weighted_inner_product(a: &ScalarVector, b: &ScalarVector, y: Scalar) -> Scalar {
// y ** 0 is not used as a power
(a * b * weighted_powers(y, a.len())).sum()
}
impl Mul<&[EdwardsPoint]> for &ScalarVector {
type Output = EdwardsPoint;
fn mul(self, b: &[EdwardsPoint]) -> EdwardsPoint {
debug_assert_eq!(self.len(), b.len());
multiexp(&self.0.iter().cloned().zip(b.iter().cloned()).collect::<Vec<_>>())
multiexp(&self.0.iter().copied().zip(b.iter().copied()).collect::<Vec<_>>())
}
}

View File

@@ -1,10 +1,11 @@
#![allow(non_snake_case)]
use core::ops::Deref;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use lazy_static::lazy_static;
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
@@ -18,8 +19,8 @@ use curve25519_dalek::{
};
use crate::{
Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys, ringct::hash_to_point,
serialize::*,
INV_EIGHT, Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys,
ringct::hash_to_point, serialize::*,
};
#[cfg(feature = "multisig")]
@@ -29,28 +30,25 @@ pub use multisig::{ClsagDetails, ClsagAddendum, ClsagMultisig};
#[cfg(feature = "multisig")]
pub(crate) use multisig::add_key_image_share;
lazy_static! {
static ref INV_EIGHT: Scalar = Scalar::from(8u8).invert();
}
/// Errors returned when CLSAG signing fails.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum ClsagError {
#[error("internal error ({0})")]
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[error("invalid ring")]
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[error("invalid ring member (member {0}, ring size {1})")]
#[cfg_attr(feature = "std", error("invalid ring member (member {0}, ring size {1})"))]
InvalidRingMember(u8, u8),
#[error("invalid commitment")]
#[cfg_attr(feature = "std", error("invalid commitment"))]
InvalidCommitment,
#[error("invalid key image")]
#[cfg_attr(feature = "std", error("invalid key image"))]
InvalidImage,
#[error("invalid D")]
#[cfg_attr(feature = "std", error("invalid D"))]
InvalidD,
#[error("invalid s")]
#[cfg_attr(feature = "std", error("invalid s"))]
InvalidS,
#[error("invalid c1")]
#[cfg_attr(feature = "std", error("invalid c1"))]
InvalidC1,
}
@@ -103,7 +101,7 @@ fn core(
let n = ring.len();
let images_precomp = VartimeEdwardsPrecomputation::new([I, D]);
let D = D * *INV_EIGHT;
let D = D * INV_EIGHT();
// Generate the transcript
// Instead of generating multiple, a single transcript is created and then edited as needed
@@ -171,7 +169,7 @@ fn core(
}
// Perform the core loop
let mut c1 = CtOption::new(Scalar::zero(), Choice::from(0));
let mut c1 = CtOption::new(Scalar::ZERO, Choice::from(0));
for i in (start .. end).map(|i| i % n) {
// This will only execute once and shouldn't need to be constant time. Making it constant time
// removes the risk of branch prediction creating timing differences depending on ring index
@@ -181,10 +179,10 @@ fn core(
let c_p = mu_P * c;
let c_c = mu_C * c;
let L = (&s[i] * &ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(P[i]);
let L = (&s[i] * ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(&P[i]);
// Shouldn't be an issue as all of the variables in this vartime statement are public
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul(&[c_p, c_c]);
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul([c_p, c_c]);
to_hash.truncate(((2 * n) + 3) * 32);
to_hash.extend(L.compress().to_bytes());
@@ -221,7 +219,7 @@ impl Clsag {
let pseudo_out = Commitment::new(mask, input.commitment.amount).calculate();
let z = input.commitment.mask - mask;
let H = hash_to_point(input.decoys.ring[r][0]);
let H = hash_to_point(&input.decoys.ring[r][0]);
let D = H * z;
let mut s = Vec::with_capacity(input.decoys.ring.len());
for _ in 0 .. input.decoys.ring.len() {
@@ -243,7 +241,7 @@ impl Clsag {
msg: [u8; 32],
) -> Vec<(Clsag, EdwardsPoint)> {
let mut res = Vec::with_capacity(inputs.len());
let mut sum_pseudo_outs = Scalar::zero();
let mut sum_pseudo_outs = Scalar::ZERO;
for i in 0 .. inputs.len() {
let mut mask = random_scalar(rng);
if i == (inputs.len() - 1) {
@@ -259,9 +257,9 @@ impl Clsag {
&inputs[i].2,
mask,
&msg,
nonce.deref() * &ED25519_BASEPOINT_TABLE,
nonce.deref() * ED25519_BASEPOINT_TABLE,
nonce.deref() *
hash_to_point(inputs[i].2.decoys.ring[usize::from(inputs[i].2.decoys.i)][0]),
hash_to_point(&inputs[i].2.decoys.ring[usize::from(inputs[i].2.decoys.i)][0]),
);
clsag.s[usize::from(inputs[i].2.decoys.i)] =
(-((p * inputs[i].0.deref()) + c)) + nonce.deref();

View File

@@ -1,19 +1,13 @@
use core::{ops::Deref, fmt::Debug};
use std::{
io::{self, Read, Write},
sync::{Arc, RwLock},
};
use std_shims::io::{self, Read, Write};
use std::sync::{Arc, RwLock};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{
traits::{Identity, IsIdentity},
scalar::Scalar,
edwards::EdwardsPoint,
};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use group::{ff::Field, Group, GroupEncoding};
@@ -23,7 +17,7 @@ use dleq::DLEqProof;
use frost::{
dkg::lagrange,
curve::Ed25519,
FrostError, ThresholdKeys, ThresholdView,
Participant, FrostError, ThresholdKeys, ThresholdView,
algorithm::{WriteAddendum, Algorithm},
};
@@ -122,7 +116,7 @@ impl ClsagMultisig {
ClsagMultisig {
transcript,
H: hash_to_point(output_key),
H: hash_to_point(&output_key),
image: EdwardsPoint::identity(),
details,
@@ -145,11 +139,11 @@ pub(crate) fn add_key_image_share(
image: &mut EdwardsPoint,
generator: EdwardsPoint,
offset: Scalar,
included: &[u16],
participant: u16,
included: &[Participant],
participant: Participant,
share: EdwardsPoint,
) {
if image.is_identity() {
if image.is_identity().into() {
*image = generator * offset;
}
*image += share * lagrange::<dfg::Scalar>(participant, included).0;
@@ -190,10 +184,10 @@ impl Algorithm<Ed25519> for ClsagMultisig {
reader.read_exact(&mut bytes)?;
// dfg ensures the point is torsion free
let xH = Option::<dfg::EdwardsPoint>::from(dfg::EdwardsPoint::from_bytes(&bytes))
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid key image"))?;
.ok_or_else(|| io::Error::other("invalid key image"))?;
// Ensure this is a canonical point
if xH.to_bytes() != bytes {
Err(io::Error::new(io::ErrorKind::Other, "non-canonical key image"))?;
Err(io::Error::other("non-canonical key image"))?;
}
Ok(ClsagAddendum { key_image: xH, dleq: DLEqProof::<dfg::EdwardsPoint>::read(reader)? })
@@ -202,16 +196,16 @@ impl Algorithm<Ed25519> for ClsagMultisig {
fn process_addendum(
&mut self,
view: &ThresholdView<Ed25519>,
l: u16,
l: Participant,
addendum: ClsagAddendum,
) -> Result<(), FrostError> {
if self.image.is_identity() {
if self.image.is_identity().into() {
self.transcript.domain_separate(b"CLSAG");
self.input().transcript(&mut self.transcript);
self.transcript.append_message(b"mask", self.mask().to_bytes());
}
self.transcript.append_message(b"participant", l.to_be_bytes());
self.transcript.append_message(b"participant", l.to_bytes());
addendum
.dleq
@@ -304,7 +298,7 @@ impl Algorithm<Ed25519> for ClsagMultisig {
Ok(vec![
(share, dfg::EdwardsPoint::generator()),
(dfg::Scalar(interim.p), verification_share),
(-dfg::Scalar::one(), nonces[0][0]),
(-dfg::Scalar::ONE, nonces[0][0]),
])
}
}

View File

@@ -3,6 +3,6 @@ use curve25519_dalek::edwards::EdwardsPoint;
pub use monero_generators::{hash_to_point as raw_hash_to_point};
/// Monero's hash-to-point function, named `ge_fromfe_frombytes_vartime`.
pub fn hash_to_point(key: EdwardsPoint) -> EdwardsPoint {
pub fn hash_to_point(key: &EdwardsPoint) -> EdwardsPoint {
raw_hash_to_point(key.compress().to_bytes())
}

View File

@@ -0,0 +1,213 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::{traits::IsIdentity, Scalar, EdwardsPoint};
use monero_generators::H;
use crate::{hash_to_scalar, ringct::hash_to_point, serialize::*};
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum MlsagError {
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[cfg_attr(feature = "std", error("invalid amount of key images"))]
InvalidAmountOfKeyImages,
#[cfg_attr(feature = "std", error("invalid ss"))]
InvalidSs,
#[cfg_attr(feature = "std", error("key image was identity"))]
IdentityKeyImage,
#[cfg_attr(feature = "std", error("invalid ci"))]
InvalidCi,
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingMatrix {
matrix: Vec<Vec<EdwardsPoint>>,
}
impl RingMatrix {
pub fn new(matrix: Vec<Vec<EdwardsPoint>>) -> Result<Self, MlsagError> {
if matrix.is_empty() {
Err(MlsagError::InvalidRing)?;
}
for member in &matrix {
if member.is_empty() || (member.len() != matrix[0].len()) {
Err(MlsagError::InvalidRing)?;
}
}
Ok(RingMatrix { matrix })
}
/// Construct a ring matrix for an individual output.
pub fn individual(
ring: &[[EdwardsPoint; 2]],
pseudo_out: EdwardsPoint,
) -> Result<Self, MlsagError> {
let mut matrix = Vec::with_capacity(ring.len());
for ring_member in ring {
matrix.push(vec![ring_member[0], ring_member[1] - pseudo_out]);
}
RingMatrix::new(matrix)
}
pub fn iter(&self) -> impl Iterator<Item = &[EdwardsPoint]> {
self.matrix.iter().map(AsRef::as_ref)
}
/// Returns the number of members in the ring.
pub fn members(&self) -> usize {
self.matrix.len()
}
/// Returns the length of a ring member.
///
/// A ring member is a vector of points for which the signer knows all of the discrete
/// logarithms.
pub fn member_len(&self) -> usize {
// this is safe to do as the constructors don't allow empty rings
self.matrix[0].len()
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Mlsag {
pub ss: Vec<Vec<Scalar>>,
pub cc: Scalar,
}
impl Mlsag {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for ss in &self.ss {
write_raw_vec(write_scalar, ss, w)?;
}
write_scalar(&self.cc, w)
}
pub fn read<R: Read>(mixins: usize, ss_2_elements: usize, r: &mut R) -> io::Result<Mlsag> {
Ok(Mlsag {
ss: (0 .. mixins)
.map(|_| read_raw_vec(read_scalar, ss_2_elements, r))
.collect::<Result<_, _>>()?,
cc: read_scalar(r)?,
})
}
pub fn verify(
&self,
msg: &[u8; 32],
ring: &RingMatrix,
key_images: &[EdwardsPoint],
) -> Result<(), MlsagError> {
// MLSAG allows layers to not require linkability; such layers don't need key images.
// Monero requires there to be exactly one non-linkable layer: the amount commitments.
if ring.member_len() != (key_images.len() + 1) {
Err(MlsagError::InvalidAmountOfKeyImages)?;
}
let mut buf = Vec::with_capacity(6 * 32);
buf.extend_from_slice(msg);
let mut ci = self.cc;
// This is an iterator over the key images as options with an added entry of `None` at the
// end for the non-linkable layer
let key_images_iter = key_images.iter().map(|ki| Some(*ki)).chain(core::iter::once(None));
if ring.matrix.len() != self.ss.len() {
Err(MlsagError::InvalidSs)?;
}
for (ring_member, ss) in ring.iter().zip(&self.ss) {
if ring_member.len() != ss.len() {
Err(MlsagError::InvalidSs)?;
}
for ((ring_member_entry, s), ki) in ring_member.iter().zip(ss).zip(key_images_iter.clone()) {
#[allow(non_snake_case)]
let L = EdwardsPoint::vartime_double_scalar_mul_basepoint(&ci, ring_member_entry, s);
buf.extend_from_slice(ring_member_entry.compress().as_bytes());
buf.extend_from_slice(L.compress().as_bytes());
// Not all dimensions need to be linkable, e.g. commitments, and only linkable layers need
// to have key images.
if let Some(ki) = ki {
if ki.is_identity() {
Err(MlsagError::IdentityKeyImage)?;
}
#[allow(non_snake_case)]
let R = (s * hash_to_point(ring_member_entry)) + (ci * ki);
buf.extend_from_slice(R.compress().as_bytes());
}
}
ci = hash_to_scalar(&buf);
// keep the msg in the buffer.
buf.drain(msg.len() ..);
}
if ci != self.cc {
Err(MlsagError::InvalidCi)?
}
Ok(())
}
}
/// An aggregate ring matrix builder, usable to set up the ring matrix to prove/verify an aggregate
/// MLSAG signature.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct AggregateRingMatrixBuilder {
key_ring: Vec<Vec<EdwardsPoint>>,
amounts_ring: Vec<EdwardsPoint>,
sum_out: EdwardsPoint,
}
impl AggregateRingMatrixBuilder {
/// Create a new AggregateRingMatrixBuilder.
///
/// Takes in the transaction's output commitments and fee.
pub fn new(commitments: &[EdwardsPoint], fee: u64) -> Self {
AggregateRingMatrixBuilder {
key_ring: vec![],
amounts_ring: vec![],
sum_out: commitments.iter().sum::<EdwardsPoint>() + (H() * Scalar::from(fee)),
}
}
/// Push a ring of [output key, commitment] to the matrix.
pub fn push_ring(&mut self, ring: &[[EdwardsPoint; 2]]) -> Result<(), MlsagError> {
if self.key_ring.is_empty() {
self.key_ring = vec![vec![]; ring.len()];
// Now that we know the length of the ring, fill the `amounts_ring`.
self.amounts_ring = vec![-self.sum_out; ring.len()];
}
if (self.amounts_ring.len() != ring.len()) || ring.is_empty() {
// All the rings in an aggregate matrix must be the same length.
return Err(MlsagError::InvalidRing);
}
for (i, ring_member) in ring.iter().enumerate() {
self.key_ring[i].push(ring_member[0]);
self.amounts_ring[i] += ring_member[1]
}
Ok(())
}
/// Build and return the [`RingMatrix`]
pub fn build(mut self) -> Result<RingMatrix, MlsagError> {
for (i, amount_commitment) in self.amounts_ring.drain(..).enumerate() {
self.key_ring[i].push(amount_commitment);
}
RingMatrix::new(self.key_ring)
}
}
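// A hedged usage sketch of the builder (commitments, fee, and rings are
// hypothetical; one ring of [output key, commitment] pairs per input):
//   let mut builder = AggregateRingMatrixBuilder::new(&out_commitments, fee);
//   builder.push_ring(&ring_a)?;
//   builder.push_ring(&ring_b)?;
//   let matrix = builder.build()?;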

View File

@@ -1,65 +1,188 @@
use core::ops::Deref;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroizing;
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub(crate) mod hash_to_point;
pub use hash_to_point::{raw_hash_to_point, hash_to_point};
/// MLSAG struct, along with verifying functionality.
pub mod mlsag;
/// CLSAG struct, along with signing and verifying functionality.
pub mod clsag;
/// BorromeanRange struct, along with verifying functionality.
pub mod borromean;
/// Bulletproofs(+) structs, along with proving and verifying functionality.
pub mod bulletproofs;
use crate::{
Protocol,
serialize::*,
ringct::{clsag::Clsag, bulletproofs::Bulletproofs},
ringct::{mlsag::Mlsag, clsag::Clsag, borromean::BorromeanRange, bulletproofs::Bulletproofs},
};
/// Generate a key image for a given key. Defined as `x * hash_to_point(xG)`.
pub fn generate_key_image(secret: &Zeroizing<Scalar>) -> EdwardsPoint {
hash_to_point(&ED25519_BASEPOINT_TABLE * secret.deref()) * secret.deref()
hash_to_point(&(ED25519_BASEPOINT_TABLE * secret.deref())) * secret.deref()
}
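// A hedged usage sketch (the secret `x` is hypothetical):
//   let x = Zeroizing::new(random_scalar(&mut rng));
//   let image = generate_key_image(&x);
// As hash_to_point now takes a reference, the public key xG is passed as
// &(ED25519_BASEPOINT_TABLE * x.deref()) within the body above.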
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EncryptedAmount {
Original { mask: [u8; 32], amount: [u8; 32] },
Compact { amount: [u8; 8] },
}
impl EncryptedAmount {
pub fn read<R: Read>(compact: bool, r: &mut R) -> io::Result<EncryptedAmount> {
Ok(if !compact {
EncryptedAmount::Original { mask: read_bytes(r)?, amount: read_bytes(r)? }
} else {
EncryptedAmount::Compact { amount: read_bytes(r)? }
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
EncryptedAmount::Original { mask, amount } => {
w.write_all(mask)?;
w.write_all(amount)
}
EncryptedAmount::Compact { amount } => w.write_all(amount),
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum RctType {
/// No RCT proofs.
Null,
/// One MLSAG for multiple inputs and Borromean range proofs (RCTTypeFull).
MlsagAggregate,
/// One MLSAG for each input and a Borromean range proof (RCTTypeSimple).
/// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
Bulletproofs,
/// One MLSAG for each input and a Bulletproof, now using EncryptedAmount::Compact
/// (RCTTypeBulletproof2).
BulletproofsCompactAmount,
/// One CLSAG for each input and a Bulletproof (RCTTypeCLSAG).
Clsag,
/// One CLSAG for each input and a Bulletproof+ (RCTTypeBulletproofPlus).
BulletproofsPlus,
}
impl RctType {
pub fn to_byte(self) -> u8 {
match self {
RctType::Null => 0,
RctType::MlsagAggregate => 1,
RctType::MlsagIndividual => 2,
RctType::Bulletproofs => 3,
RctType::BulletproofsCompactAmount => 4,
RctType::Clsag => 5,
RctType::BulletproofsPlus => 6,
}
}
pub fn from_byte(byte: u8) -> Option<Self> {
Some(match byte {
0 => RctType::Null,
1 => RctType::MlsagAggregate,
2 => RctType::MlsagIndividual,
3 => RctType::Bulletproofs,
4 => RctType::BulletproofsCompactAmount,
5 => RctType::Clsag,
6 => RctType::BulletproofsPlus,
_ => None?,
})
}
pub fn compact_encrypted_amounts(&self) -> bool {
match self {
RctType::Null => false,
RctType::MlsagAggregate => false,
RctType::MlsagIndividual => false,
RctType::Bulletproofs => false,
RctType::BulletproofsCompactAmount => true,
RctType::Clsag => true,
RctType::BulletproofsPlus => true,
}
}
}
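// A hedged round-trip sketch: every supported type byte maps back to itself.
//   for byte in 0 ..= 6 {
//     assert_eq!(RctType::from_byte(byte).unwrap().to_byte(), byte);
//   }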
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctBase {
pub fee: u64,
pub ecdh_info: Vec<[u8; 8]>,
pub pseudo_outs: Vec<EdwardsPoint>,
pub encrypted_amounts: Vec<EncryptedAmount>,
pub commitments: Vec<EdwardsPoint>,
}
impl RctBase {
pub(crate) fn fee_weight(outputs: usize) -> usize {
1 + 8 + (outputs * (8 + 32))
pub(crate) fn fee_weight(outputs: usize, fee: u64) -> usize {
// 1 byte for the RCT signature type
1 + (outputs * (8 + 32)) + varint_len(fee)
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: u8) -> io::Result<()> {
w.write_all(&[rct_type])?;
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
w.write_all(&[rct_type.to_byte()])?;
match rct_type {
0 => Ok(()),
5 | 6 => {
RctType::Null => Ok(()),
_ => {
write_varint(&self.fee, w)?;
for ecdh in &self.ecdh_info {
w.write_all(ecdh)?;
if rct_type == RctType::MlsagIndividual {
write_raw_vec(write_point, &self.pseudo_outs, w)?;
}
for encrypted_amount in &self.encrypted_amounts {
encrypted_amount.write(w)?;
}
write_raw_vec(write_point, &self.commitments, w)
}
_ => panic!("Serializing unknown RctType's Base"),
}
}
pub fn read<R: Read>(outputs: usize, r: &mut R) -> io::Result<(RctBase, u8)> {
let rct_type = read_byte(r)?;
pub fn read<R: Read>(inputs: usize, outputs: usize, r: &mut R) -> io::Result<(RctBase, RctType)> {
let rct_type =
RctType::from_byte(read_byte(r)?).ok_or_else(|| io::Error::other("invalid RCT type"))?;
match rct_type {
RctType::Null => {}
RctType::MlsagAggregate => {}
RctType::MlsagIndividual => {}
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
if outputs == 0 {
// Because the Bulletproofs(+) layout must be canonical, there must be 1 Bulletproof if
// Bulletproofs are in use
// If there are Bulletproofs, there must be a matching amount of outputs, implicitly
// banning 0 outputs
// Since HF 12 (CLSAG being 13), a 2-output minimum has also been enforced
Err(io::Error::other("RCT with Bulletproofs(+) had 0 outputs"))?;
}
}
}
Ok((
if rct_type == 0 {
RctBase { fee: 0, ecdh_info: vec![], commitments: vec![] }
if rct_type == RctType::Null {
RctBase { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] }
} else {
RctBase {
fee: read_varint(r)?,
ecdh_info: (0 .. outputs).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
pseudo_outs: if rct_type == RctType::MlsagIndividual {
read_raw_vec(read_point, inputs, r)?
} else {
vec![]
},
encrypted_amounts: (0 .. outputs)
.map(|_| EncryptedAmount::read(rct_type.compact_encrypted_amounts(), r))
.collect::<Result<_, _>>()?,
commitments: read_raw_vec(read_point, outputs, r)?,
}
},
@@ -71,67 +194,139 @@ impl RctBase {
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RctPrunable {
Null,
Clsag { bulletproofs: Vec<Bulletproofs>, clsags: Vec<Clsag>, pseudo_outs: Vec<EdwardsPoint> },
AggregateMlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsag: Mlsag,
},
MlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsags: Vec<Mlsag>,
},
MlsagBulletproofs {
bulletproofs: Bulletproofs,
mlsags: Vec<Mlsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
Clsag {
bulletproofs: Bulletproofs,
clsags: Vec<Clsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
}
impl RctPrunable {
/// RCT Type byte for a given RctPrunable struct.
pub fn rct_type(&self) -> u8 {
match self {
RctPrunable::Null => 0,
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs[0], Bulletproofs::Original { .. }) {
5
} else {
6
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
// 1 byte for number of BPs (technically a VarInt, yet there's always just zero or one)
1 + Bulletproofs::fee_weight(protocol.bp_plus(), outputs) +
(inputs * (Clsag::fee_weight(protocol.ring_len()) + 32))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
match self {
RctPrunable::Null => Ok(()),
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs, .. } => {
write_vec(Bulletproofs::write, bulletproofs, w)?;
RctPrunable::AggregateMlsagBorromean { borromean, mlsag } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
mlsag.write(w)
}
RctPrunable::MlsagBorromean { borromean, mlsags } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
write_raw_vec(Mlsag::write, mlsags, w)
}
RctPrunable::MlsagBulletproofs { bulletproofs, mlsags, pseudo_outs } => {
if rct_type == RctType::Bulletproofs {
w.write_all(&1u32.to_le_bytes())?;
} else {
w.write_all(&[1])?;
}
bulletproofs.write(w)?;
write_raw_vec(Mlsag::write, mlsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
w.write_all(&[1])?;
bulletproofs.write(w)?;
write_raw_vec(Clsag::write, clsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
pub fn serialize(&self, rct_type: RctType) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
self.write(&mut serialized, rct_type).unwrap();
serialized
}
pub fn read<R: Read>(rct_type: u8, decoys: &[usize], r: &mut R) -> io::Result<RctPrunable> {
pub fn read<R: Read>(
rct_type: RctType,
decoys: &[usize],
outputs: usize,
r: &mut R,
) -> io::Result<RctPrunable> {
// While we generally don't bother with misc consensus checks, this one affects the safety
// of the rct_type function defined below
// The exact line preventing zero-input transactions is:
// https://github.com/monero-project/monero/blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/ringct/rctSigs.cpp#L609
// And then for RctNull, that's only allowed for miner TXs which require one input of
// Input::Gen
if decoys.is_empty() {
Err(io::Error::other("transaction had no inputs"))?;
}
Ok(match rct_type {
0 => RctPrunable::Null,
5 | 6 => RctPrunable::Clsag {
bulletproofs: read_vec(
if rct_type == 5 { Bulletproofs::read } else { Bulletproofs::read_plus },
r,
)?,
RctType::Null => RctPrunable::Null,
RctType::MlsagAggregate => RctPrunable::AggregateMlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsag: Mlsag::read(decoys[0], decoys.len() + 1, r)?,
},
RctType::MlsagIndividual => RctPrunable::MlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsags: decoys.iter().map(|d| Mlsag::read(*d, 2, r)).collect::<Result<_, _>>()?,
},
RctType::Bulletproofs | RctType::BulletproofsCompactAmount => {
RctPrunable::MlsagBulletproofs {
bulletproofs: {
if (if rct_type == RctType::Bulletproofs {
u64::from(read_u32(r)?)
} else {
read_varint(r)?
}) != 1
{
Err(io::Error::other("n bulletproofs instead of one"))?;
}
Bulletproofs::read(r)?
},
mlsags: decoys.iter().map(|d| Mlsag::read(*d, 2, r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
}
}
RctType::Clsag | RctType::BulletproofsPlus => RctPrunable::Clsag {
bulletproofs: {
if read_varint::<_, u64>(r)? != 1 {
Err(io::Error::other("n bulletproofs instead of one"))?;
}
(if rct_type == RctType::Clsag { Bulletproofs::read } else { Bulletproofs::read_plus })(
r,
)?
},
clsags: (0 .. decoys.len()).map(|o| Clsag::read(decoys[o], r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, decoys.len(), r)?,
},
_ => Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown RCT type"))?,
})
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
RctPrunable::Null => panic!("Serializing RctPrunable::Null for a signature"),
RctPrunable::Clsag { bulletproofs, .. } => {
bulletproofs.iter().try_for_each(|bp| bp.signature_write(w))
RctPrunable::AggregateMlsagBorromean { borromean, .. } |
RctPrunable::MlsagBorromean { borromean, .. } => {
borromean.iter().try_for_each(|rs| rs.write(w))
}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } => bulletproofs.signature_write(w),
RctPrunable::Clsag { bulletproofs, .. } => bulletproofs.signature_write(w),
}
}
}
@@ -143,13 +338,46 @@ pub struct RctSignatures {
}
impl RctSignatures {
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
RctBase::fee_weight(outputs) + RctPrunable::fee_weight(protocol, inputs, outputs)
/// RctType for a given RctSignatures struct.
pub fn rct_type(&self) -> RctType {
match &self.prunable {
RctPrunable::Null => RctType::Null,
RctPrunable::AggregateMlsagBorromean { .. } => RctType::MlsagAggregate,
RctPrunable::MlsagBorromean { .. } => RctType::MlsagIndividual,
// RctBase ensures there's at least one output, making the following inferences
// guaranteed and the `expect`s unreachable on any valid RctSignatures
RctPrunable::MlsagBulletproofs { .. } => {
if matches!(
self
.base
.encrypted_amounts
.first()
.expect("MLSAG with Bulletproofs didn't have any outputs"),
EncryptedAmount::Original { .. }
) {
RctType::Bulletproofs
} else {
RctType::BulletproofsCompactAmount
}
}
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs, Bulletproofs::Original { .. }) {
RctType::Clsag
} else {
RctType::BulletproofsPlus
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize, fee: u64) -> usize {
RctBase::fee_weight(outputs, fee) + RctPrunable::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.base.write(w, self.prunable.rct_type())?;
self.prunable.write(w)
let rct_type = self.rct_type();
self.base.write(w, rct_type)?;
self.prunable.write(w, rct_type)
}
pub fn serialize(&self) -> Vec<u8> {
@@ -159,7 +387,7 @@ impl RctSignatures {
}
pub fn read<R: Read>(decoys: Vec<usize>, outputs: usize, r: &mut R) -> io::Result<RctSignatures> {
let base = RctBase::read(outputs, r)?;
Ok(RctSignatures { base: base.0, prunable: RctPrunable::read(base.1, &decoys, r)? })
let base = RctBase::read(decoys.len(), outputs, r)?;
Ok(RctSignatures { base: base.0, prunable: RctPrunable::read(base.1, &decoys, outputs, r)? })
}
}
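// A hedged round-trip sketch (`rct`, its per-input decoy counts, and the
// output count are hypothetical):
//   let bytes = rct.serialize();
//   let parsed = RctSignatures::read(decoys, outputs, &mut bytes.as_slice()).unwrap();
//   assert_eq!(rct, parsed);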

View File

@@ -1,517 +0,0 @@
use std::fmt::Debug;
use thiserror::Error;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use digest_auth::AuthContext;
use reqwest::{Client, RequestBuilder};
use crate::{
Protocol,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::Fee,
};
#[derive(Deserialize, Debug)]
pub struct EmptyResponse {}
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
block_height: Option<usize>,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("internal error ({0})")]
InternalError(&'static str),
#[error("connection error")]
ConnectionError,
#[error("invalid node")]
InvalidNode,
#[error("transactions not found")]
TransactionsNotFound(Vec<[u8; 32]>),
#[error("invalid point ({0})")]
InvalidPoint(String),
#[error("pruned transaction")]
PrunedTransaction,
#[error("invalid transaction ({0:?})")]
InvalidTransaction([u8; 32]),
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode)
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
CompressedEdwardsY(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.decompress()
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
#[derive(Clone, Debug)]
pub struct Rpc {
client: Client,
userpass: Option<(String, String)>,
url: String,
}
impl Rpc {
/// Create a new RPC connection.
/// A daemon requiring authentication can be used by including the username and password in the
/// URL.
pub fn new(mut url: String) -> Result<Rpc, RpcError> {
// Parse out the username and password
let userpass = if url.contains('@') {
let url_clone = url.clone();
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::InvalidNode)?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::InvalidNode)?;
}
Some((split_userpass[0].to_string(), split_userpass[1].to_string()))
} else {
None
};
Ok(Rpc { client: Client::new(), userpass, url })
}
/// Perform an RPC call to the specified method with the provided parameters.
/// This is NOT a JSON-RPC call; JSON-RPC calls use a method of "json_rpc" and are available
/// via `json_rpc_call`.
pub async fn rpc_call<Params: Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
let mut builder = self.client.post(self.url.clone() + "/" + method);
if let Some(params) = params.as_ref() {
builder = builder.json(params);
}
self.call_tail(method, builder).await
}
/// Perform a JSON-RPC call to the specified method with the provided parameters.
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified method with the provided parameters.
pub async fn bin_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Vec<u8>,
) -> Result<Response, RpcError> {
let builder = self.client.post(self.url.clone() + "/" + method).body(params.clone());
self.call_tail(method, builder.header("Content-Type", "application/octet-stream")).await
}
async fn call_tail<Response: DeserializeOwned + Debug>(
&self,
method: &str,
mut builder: RequestBuilder,
) -> Result<Response, RpcError> {
if let Some((user, pass)) = &self.userpass {
let req = self.client.post(&self.url).send().await.map_err(|_| RpcError::InvalidNode)?;
// Only provide authentication if this daemon actually expects it
if let Some(header) = req.headers().get("www-authenticate") {
builder = builder.header(
"Authorization",
digest_auth::parse(header.to_str().map_err(|_| RpcError::InvalidNode)?)
.map_err(|_| RpcError::InvalidNode)?
.respond(&AuthContext::new_post::<_, _, _, &[u8]>(
user,
pass,
"/".to_string() + method,
None,
))
.map_err(|_| RpcError::InvalidNode)?
.to_header_string(),
);
}
}
let res = builder.send().await.map_err(|_| RpcError::ConnectionError)?;
Ok(if !method.ends_with(".bin") {
serde_json::from_str(&res.text().await.map_err(|_| RpcError::ConnectionError)?)
.map_err(|_| RpcError::InternalError("Failed to parse JSON response"))?
} else {
monero_epee_bin_serde::from_bytes(&res.bytes().await.map_err(|_| RpcError::ConnectionError)?)
.map_err(|_| RpcError::InternalError("Failed to parse binary response"))?
})
}
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
version => Protocol::Unsupported(version),
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes.iter().map(hex::encode).collect::<Vec<_>>()
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
txs
.txs
.iter()
.map(|res| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.get(0) {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
pub async fn get_transaction_block_number(&self, tx: &[u8]) -> Result<Option<usize>, RpcError> {
let txs: TransactionsResponse =
self.rpc_call("get_transactions", Some(json!({ "txs_hashes": [hex::encode(tx)] }))).await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
Ok(txs.txs[0].block_height)
}
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
rpc_hex(&header.block_header.hash)?.try_into().map_err(|_| RpcError::InvalidNode)
}
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref()).map_err(|_| RpcError::InvalidNode)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
self.get_block(self.get_block_hash(number).await?).await
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
status: String,
untrusted: bool,
credits: usize,
top_hash: String,
}
let indexes: OIndexes = self
.bin_call(
"get_o_indexes.bin",
monero_epee_bin_serde::to_bytes(&Request { txid: hash }).unwrap(),
)
.await?;
Ok(indexes.o_indexes)
}
/// Get the output distribution from the specified start height to the specified end height
/// (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if they're
/// unlocked.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
txid: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
let txs = self
.get_transactions(
&outs
.outs
.iter()
.map(|out| rpc_hex(&out.txid)?.try_into().map_err(|_| RpcError::InvalidNode))
.collect::<Result<Vec<_>, _>>()?,
)
.await?;
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.outs
.iter()
.enumerate()
.map(|(i, out)| {
Ok(Some([rpc_point(&out.key)?, rpc_point(&out.mask)?]).filter(|_| {
match txs[i].prefix.timelock {
Timelock::Block(t_height) => t_height <= height,
_ => false,
}
}))
})
.collect()
}
/// Get the currently estimated fee from the node. This may be manipulated to unsafe levels and
/// MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self) -> Result<Fee, RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct FeeResponse {
fee: u64,
quantization_mask: u64,
}
let res: FeeResponse = self.json_rpc_call("get_fee_estimate", None).await?;
Ok(Fee { per_weight: res.fee, mask: res.quantization_mask })
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let mut buf = Vec::with_capacity(2048);
tx.write(&mut buf).unwrap();
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(&buf) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
pub async fn generate_blocks(&self, address: &str, block_count: usize) -> Result<(), RpcError> {
self
.rpc_call::<_, EmptyResponse>(
"json_rpc",
Some(json!({
"method": "generateblocks",
"params": {
"wallet_address": address,
"amount_of_blocks": block_count
},
})),
)
.await?;
Ok(())
}
}

View File

@@ -0,0 +1,259 @@
use std::{sync::Arc, io::Read};
use async_trait::async_trait;
use tokio::sync::Mutex;
use digest_auth::{WwwAuthenticateHeader, AuthContext};
use simple_request::{
hyper::{StatusCode, header::HeaderValue, Request},
Response, Client,
};
use crate::rpc::{RpcError, RpcConnection, Rpc};
#[derive(Clone, Debug)]
enum Authentication {
// If unauthenticated, use a single client
Unauthenticated(Client),
// If authenticated, use a single client which supports being locked and tracks its nonce
// This ensures that if a nonce is requested, another caller doesn't make a request invalidating
// it
Authenticated {
username: String,
password: String,
#[allow(clippy::type_complexity)]
connection: Arc<Mutex<(Option<(WwwAuthenticateHeader, u64)>, Client)>>,
},
}
/// An HTTP(S) transport for the RPC.
///
/// Requires tokio.
#[derive(Clone, Debug)]
pub struct HttpRpc {
authentication: Authentication,
url: String,
}
impl HttpRpc {
fn digest_auth_challenge(
response: &Response,
) -> Result<Option<(WwwAuthenticateHeader, u64)>, RpcError> {
Ok(if let Some(header) = response.headers().get("www-authenticate") {
Some((
digest_auth::parse(
header
.to_str()
.map_err(|_| RpcError::InvalidNode("www-authenticate header wasn't a string"))?,
)
.map_err(|_| RpcError::InvalidNode("invalid digest-auth response"))?,
0,
))
} else {
None
})
}
/// Create a new HTTP(S) RPC connection.
///
/// A daemon requiring authentication can be used by including the username and password in the
/// URL.
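///
/// A minimal sketch, with a hypothetical local daemon and credentials:
/// `HttpRpc::new("http://user:pass@127.0.0.1:18081".to_string()).await`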
pub async fn new(mut url: String) -> Result<Rpc<HttpRpc>, RpcError> {
let authentication = if url.contains('@') {
// Parse out the username and password
let url_clone = url;
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::ConnectionError("invalid amount of login specifications".to_string()))?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::ConnectionError("invalid amount of protocol specifications".to_string()))?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() > 2 {
Err(RpcError::ConnectionError("invalid amount of passwords".to_string()))?;
}
let client = Client::without_connection_pool(url.clone())
.map_err(|_| RpcError::ConnectionError("invalid URL".to_string()))?;
// Obtain the initial challenge, which also somewhat validates this connection
let challenge = Self::digest_auth_challenge(
&client
.request(
Request::post(url.clone())
.body(vec![].into())
.map_err(|e| RpcError::ConnectionError(format!("couldn't make request: {e:?}")))?,
)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
)?;
Authentication::Authenticated {
username: split_userpass[0].to_string(),
password: split_userpass.get(1).unwrap_or(&"").to_string(),
connection: Arc::new(Mutex::new((challenge, client))),
}
} else {
Authentication::Unauthenticated(Client::with_connection_pool())
};
Ok(Rpc(HttpRpc { authentication, url }))
}
}
impl HttpRpc {
async fn inner_post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
let request_fn = |uri| {
Request::post(uri)
.body(body.clone().into())
.map_err(|e| RpcError::ConnectionError(format!("couldn't make request: {e:?}")))
};
for attempt in 0 .. 2 {
let response = match &self.authentication {
Authentication::Unauthenticated(client) => client
.request(request_fn(self.url.clone() + "/" + route)?)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
Authentication::Authenticated { username, password, connection } => {
let mut connection_lock = connection.lock().await;
let mut request = request_fn("/".to_string() + route)?;
// If we don't have an auth challenge, obtain one
if connection_lock.0.is_none() {
connection_lock.0 = Self::digest_auth_challenge(
&connection_lock
.1
.request(request)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
)?;
request = request_fn("/".to_string() + route)?;
}
// Insert the challenge response, if we have a challenge
if let Some((challenge, cnonce)) = connection_lock.0.as_mut() {
// Update the cnonce
// Overflow isn't a concern as this is a u64
*cnonce += 1;
let mut context = AuthContext::new_post::<_, _, _, &[u8]>(
username,
password,
"/".to_string() + route,
None,
);
context.set_custom_cnonce(hex::encode(cnonce.to_le_bytes()));
request.headers_mut().insert(
"Authorization",
HeaderValue::from_str(
&challenge
.respond(&context)
.map_err(|_| RpcError::InvalidNode("couldn't respond to digest-auth challenge"))?
.to_header_string(),
)
.unwrap(),
);
}
let response_result = connection_lock
.1
.request(request)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")));
// If the connection entered an error state, drop the cached challenge as challenges are
// per-connection
// We don't need to create a new connection as simple-request will do so for us
if response_result.is_err() {
connection_lock.0 = None;
}
// If we're not already on our second attempt and either:
// A) We had a connection error
// B) We need to re-auth due to this token being stale
// Move to the next loop iteration (retrying all of this)
if (attempt == 0) &&
(response_result.is_err() || {
let response = response_result.as_ref().unwrap();
if response.status() == StatusCode::UNAUTHORIZED {
if let Some(header) = response.headers().get("www-authenticate") {
header
.to_str()
.map_err(|_| RpcError::InvalidNode("www-authenticate header wasn't a string"))?
.contains("stale")
} else {
false
}
} else {
false
}
})
{
// Drop the cached authentication before we do
connection_lock.0 = None;
continue;
}
response_result?
}
};
/*
let length = usize::try_from(
response
.headers()
.get("content-length")
.ok_or(RpcError::InvalidNode("no content-length header"))?
.to_str()
.map_err(|_| RpcError::InvalidNode("non-ascii content-length value"))?
.parse::<u32>()
.map_err(|_| RpcError::InvalidNode("non-u32 content-length value"))?,
)
.unwrap();
// Only pre-allocate 1 MB so a malicious node which claims a content-length of 1 GB actually
// has to send 1 GB of data to cause a 1 GB allocation
let mut res = Vec::with_capacity(length.max(1024 * 1024));
let mut body = response.into_body();
while res.len() < length {
let Some(data) = body.data().await else { break };
res.extend(data.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?.as_ref());
}
*/
let mut res = Vec::with_capacity(128);
response
.body()
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?
.read_to_end(&mut res)
.unwrap();
return Ok(res);
}
unreachable!()
}
}
#[async_trait]
impl RpcConnection for HttpRpc {
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
// TODO: Make this timeout configurable
tokio::time::timeout(core::time::Duration::from_secs(30), self.inner_post(route, body))
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?
}
}

coins/monero/src/rpc/mod.rs

@@ -0,0 +1,733 @@
use core::fmt::Debug;
#[cfg(not(feature = "std"))]
use alloc::boxed::Box;
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use async_trait::async_trait;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use crate::{
Protocol,
serialize::*,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::{FeePriority, Fee},
};
#[cfg(feature = "http-rpc")]
mod http;
#[cfg(feature = "http-rpc")]
pub use http::*;
// Number of blocks the fee estimate will be valid for
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L121
const GRACE_BLOCKS_FOR_FEE_ESTIMATE: u64 = 10;
#[derive(Deserialize, Debug)]
pub struct EmptyResponse {}
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum RpcError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("connection error ({0})"))]
ConnectionError(String),
#[cfg_attr(feature = "std", error("invalid node ({0})"))]
InvalidNode(&'static str),
#[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))]
UnsupportedProtocol(usize),
#[cfg_attr(feature = "std", error("transactions not found"))]
TransactionsNotFound(Vec<[u8; 32]>),
#[cfg_attr(feature = "std", error("invalid point ({0})"))]
InvalidPoint(String),
#[cfg_attr(feature = "std", error("pruned transaction"))]
PrunedTransaction,
#[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))]
InvalidTransaction([u8; 32]),
#[cfg_attr(feature = "std", error("unexpected fee response"))]
InvalidFee,
#[cfg_attr(feature = "std", error("invalid priority"))]
InvalidPriority,
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode("expected hex wasn't hex"))
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode("hash wasn't 32-bytes"))
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
CompressedEdwardsY(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.decompress()
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
// Read an EPEE VarInt, distinct from the VarInts used throughout the rest of the protocol
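// The 2 LSBs of the first byte specify the total length (1, 2, 4, or 8 bytes), with the value
// stored shifted left by 2. For example, 100 is encoded as ((100 << 2) | 0b01) in little-endian,
// i.e. the two bytes [0x91, 0x01]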
fn read_epee_vi<R: io::Read>(reader: &mut R) -> io::Result<u64> {
let vi_start = read_byte(reader)?;
let len = match vi_start & 0b11 {
0 => 1,
1 => 2,
2 => 4,
3 => 8,
_ => unreachable!(),
};
let mut vi = u64::from(vi_start >> 2);
for i in 1 .. len {
vi |= u64::from(read_byte(reader)?) << (((i - 1) * 8) + 6);
}
Ok(vi)
}
#[async_trait]
pub trait RpcConnection: Clone + Debug {
/// Perform a POST request to the specified route with the specified body.
///
/// The implementor is left to handle anything such as authentication.
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError>;
}
// TODO: Make these provided methods for RpcConnection?
#[derive(Clone, Debug)]
pub struct Rpc<R: RpcConnection>(R);
impl<R: RpcConnection> Rpc<R> {
/// Perform a RPC call to the specified route with the provided parameters.
///
/// This is NOT a JSON-RPC call. JSON-RPC calls use the "json_rpc" route and are available via
/// `json_rpc_call`.
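///
/// For example, `get_height` is posted directly to its own route, whereas JSON-RPC methods
/// such as `get_block_count` go through `json_rpc_call`.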
pub async fn rpc_call<Params: Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
route: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
let res = self
.0
.post(
route,
if let Some(params) = params {
serde_json::to_string(&params).unwrap().into_bytes()
} else {
vec![]
},
)
.await?;
let res_str = std_shims::str::from_utf8(&res)
.map_err(|_| RpcError::InvalidNode("response wasn't utf-8"))?;
serde_json::from_str(res_str).map_err(|_| RpcError::InvalidNode("response wasn't json"))
}
/// Perform a JSON-RPC call with the specified method with the provided parameters
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
/// Perform a binary call to the specified route with the provided parameters.
pub async fn bin_call(&self, route: &str, params: Vec<u8>) -> Result<Vec<u8>, RpcError> {
self.0.post(route, params).await
}
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
protocol => Err(RpcError::UnsupportedProtocol(protocol))?,
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let mut hashes_hex = hashes.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = Vec::with_capacity(hashes.len());
while !hashes_hex.is_empty() {
// Monero errors if more than 100 transactions are requested, unless using a non-restricted RPC
const TXS_PER_REQUEST: usize = 100;
let this_count = TXS_PER_REQUEST.min(hashes_hex.len());
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. this_count).collect::<Vec<_>>(),
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
all_txs.extend(txs.txs);
}
all_txs
.iter()
.enumerate()
.map(|(i, res)| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.first() {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
// This does run a few keccak256 hashes, which is pointless if the node is trusted
// In exchange, this provides resilience against invalid/malicious nodes
if tx.hash() != hashes[i] {
Err(RpcError::InvalidNode("replied with transaction wasn't the requested transaction"))?;
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
/// Get the hash of a block from the node by the block's number.
/// This function does not verify the returned block hash is actually for the number in question.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
hash_hex(&header.block_header.hash)
}
/// Get a block from the node by its hash.
/// This function does not verify the returned block actually has the hash in question.
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block"))?;
if block.hash() != hash {
Err(RpcError::InvalidNode("different block than requested (hash)"))?;
}
Ok(block)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "height": number }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block"))?;
// Make sure this is actually the block for this number
match block.miner_tx.prefix.inputs.first() {
Some(Input::Gen(actual)) => {
if usize::try_from(*actual).unwrap() == number {
Ok(block)
} else {
Err(RpcError::InvalidNode("different block than requested (number)"))
}
}
_ => Err(RpcError::InvalidNode("block's miner_tx didn't have an input of kind Input::Gen")),
}
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
/*
TODO: Use these when a suitable epee serde lib exists
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
}
*/
// Given the immaturity of Rust epee libraries, this is a homegrown one which is only validated
// to work against this specific function
// Header for EPEE, an 8-byte magic and a version
const EPEE_HEADER: &[u8] = b"\x01\x11\x01\x01\x01\x01\x02\x01\x01";
let mut request = EPEE_HEADER.to_vec();
// Number of fields (shifted over 2 bits as the 2 LSBs are reserved for metadata)
request.push(1 << 2);
// Length of field name
request.push(4);
// Field name
request.extend(b"txid");
// Type of field
request.push(10);
// Length of string, since this byte array is technically a string
request.push(32 << 2);
// The "string"
request.extend(hash);
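// The resulting request is accordingly:
// EPEE_HEADER || 0x04 (1 field) || 0x04 (name length) || "txid" || 0x0A (string type) ||
// 0x80 (32 << 2, the string length) || the 32-byte hash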
let indexes_buf = self.bin_call("get_o_indexes.bin", request).await?;
let mut indexes: &[u8] = indexes_buf.as_ref();
(|| {
let mut res = None;
let mut is_okay = false;
if read_bytes::<_, { EPEE_HEADER.len() }>(&mut indexes)? != EPEE_HEADER {
Err(io::Error::other("invalid header"))?;
}
let read_object = |reader: &mut &[u8]| -> io::Result<Vec<u64>> {
let fields = read_byte(reader)? >> 2;
for _ in 0 .. fields {
let name_len = read_byte(reader)?;
let name = read_raw_vec(read_byte, name_len.into(), reader)?;
let type_with_array_flag = read_byte(reader)?;
let kind = type_with_array_flag & (!0x80);
let iters = if type_with_array_flag != kind { read_epee_vi(reader)? } else { 1 };
if (&name == b"o_indexes") && (kind != 5) {
Err(io::Error::other("o_indexes weren't u64s"))?;
}
let f = match kind {
// i64
1 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// i32
2 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// i16
3 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// i8
4 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// u64
5 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// u32
6 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// u16
7 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// u8
8 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// double
9 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// string, or any collection of bytes
10 => |reader: &mut &[u8]| {
let len = read_epee_vi(reader)?;
read_raw_vec(
read_byte,
len.try_into().map_err(|_| io::Error::other("u64 length exceeded usize"))?,
reader,
)
},
// bool
11 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// object, errors here as it shouldn't be used on this call
12 => {
|_: &mut &[u8]| Err(io::Error::other("node used object in reply to get_o_indexes"))
}
// array, so far unused
13 => |_: &mut &[u8]| Err(io::Error::other("node used the unused array type")),
_ => |_: &mut &[u8]| Err(io::Error::other("node used an invalid type")),
};
let mut bytes_res = vec![];
for _ in 0 .. iters {
bytes_res.push(f(reader)?);
}
let mut actual_res = Vec::with_capacity(bytes_res.len());
match name.as_slice() {
b"o_indexes" => {
for o_index in bytes_res {
actual_res.push(u64::from_le_bytes(
o_index
.try_into()
.map_err(|_| io::Error::other("node didn't provide 8 bytes for a u64"))?,
));
}
res = Some(actual_res);
}
b"status" => {
if bytes_res
.first()
.ok_or_else(|| io::Error::other("status wasn't a string"))?
.as_slice() !=
b"OK"
{
// TODO: Better handle non-OK responses
Err(io::Error::other("response wasn't OK"))?;
}
is_okay = true;
}
_ => continue,
}
if is_okay && res.is_some() {
break;
}
}
// Didn't return a response with a status
// (if the status wasn't okay, we would've already errored)
if !is_okay {
Err(io::Error::other("response didn't contain a status"))?;
}
// If the Vec was empty, it would've been omitted, hence the unwrap_or
// TODO: Test against a 0-output TX, such as the ones found in block 202612
Ok(res.unwrap_or(vec![]))
};
read_object(&mut indexes)
})()
.map_err(|_| RpcError::InvalidNode("invalid binary response"))
}
/// Get the output distribution, from the specified start height to the specified end height
/// (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if their
/// timelock has been satisfied.
///
/// The timelock being satisfied is distinct from being free of the 10-block lock applied to all
/// Monero transactions.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
txid: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
let txs = self
.get_transactions(
&outs.outs.iter().map(|out| hash_hex(&out.txid)).collect::<Result<Vec<_>, _>>()?,
)
.await?;
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.outs
.iter()
.enumerate()
.map(|(i, out)| {
// Allow keys to be invalid, though if they are, return None to trigger selection of a new
// decoy
// Only valid keys can be used in CLSAG proofs, hence the need for re-selection, yet
// invalid keys may honestly exist on the blockchain
// Only a recent hard fork began checking that output keys are valid points
let Some(key) = CompressedEdwardsY(
rpc_hex(&out.key)?.try_into().map_err(|_| RpcError::InvalidNode("non-32-byte point"))?,
)
.decompress() else {
return Ok(None);
};
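// Timelock::Block(height) >= timelock holds for Timelock::None and for block locks at or below
// this height; time-based locks aren't comparable to a block height and are treated as locked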
Ok(
Some([key, rpc_point(&out.mask)?])
.filter(|_| Timelock::Block(height) >= txs[i].prefix.timelock),
)
})
.collect()
}
async fn get_fee_v14(&self, priority: FeePriority) -> Result<Fee, RpcError> {
#[derive(Deserialize, Debug)]
struct FeeResponseV14 {
status: String,
fee: u64,
quantization_mask: u64,
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7569-L7584
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7660-L7661
let priority_idx =
usize::try_from(if priority.fee_priority() == 0 { 1 } else { priority.fee_priority() - 1 })
.map_err(|_| RpcError::InvalidPriority)?;
let multipliers = [1, 5, 25, 1000];
if priority_idx >= multipliers.len() {
// Though not an RPC error, it seems sensible to treat this as such
Err(RpcError::InvalidPriority)?;
}
let fee_multiplier = multipliers[priority_idx];
let res: FeeResponseV14 = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
if res.status != "OK" {
Err(RpcError::InvalidFee)?;
}
Ok(Fee { per_weight: res.fee * fee_multiplier, mask: res.quantization_mask })
}
/// Get the currently estimated fee from the node.
///
/// This may be manipulated to unsafe levels and MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self, protocol: Protocol, priority: FeePriority) -> Result<Fee, RpcError> {
// TODO: Implement wallet2's adjust_priority which by default automatically uses a lower
// priority than provided depending on the backlog in the pool
if protocol.v16_fee() {
#[derive(Deserialize, Debug)]
struct FeeResponse {
status: String,
fees: Vec<u64>,
quantization_mask: u64,
}
let res: FeeResponse = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7615-L7620
let priority_idx = usize::try_from(if priority.fee_priority() >= 4 {
3
} else {
priority.fee_priority().saturating_sub(1)
})
.map_err(|_| RpcError::InvalidPriority)?;
if res.status != "OK" {
Err(RpcError::InvalidFee)
} else if priority_idx >= res.fees.len() {
Err(RpcError::InvalidPriority)
} else {
Ok(Fee { per_weight: res.fees[priority_idx], mask: res.quantization_mask })
}
} else {
self.get_fee_v14(priority).await
}
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(tx.serialize()) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
// TODO: Take &Address, not &str?
pub async fn generate_blocks(
&self,
address: &str,
block_count: usize,
) -> Result<Vec<[u8; 32]>, RpcError> {
#[derive(Debug, Deserialize)]
struct BlocksResponse {
blocks: Vec<String>,
}
let block_strs = self
.json_rpc_call::<BlocksResponse>(
"generateblocks",
Some(json!({
"wallet_address": address,
"amount_of_blocks": block_count
})),
)
.await?
.blocks;
let mut blocks = Vec::with_capacity(block_strs.len());
for block in block_strs {
blocks.push(hash_hex(&block)?);
}
Ok(blocks)
}
}


@@ -1,4 +1,8 @@
use std::io::{self, Read, Write};
use core::fmt::Debug;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::{
scalar::Scalar,
@@ -7,16 +11,27 @@ use curve25519_dalek::{
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn varint_len(varint: usize) -> usize {
((usize::try_from(usize::BITS - varint.leading_zeros()).unwrap().saturating_sub(1)) / 7) + 1
mod sealed {
pub trait VarInt: TryInto<u64> + TryFrom<u64> + Copy {}
impl VarInt for u8 {}
impl VarInt for u32 {}
impl VarInt for u64 {}
impl VarInt for usize {}
}
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn varint_len<U: sealed::VarInt>(varint: U) -> usize {
let varint_u64: u64 = varint.try_into().map_err(|_| "varint exceeded u64").unwrap();
((usize::try_from(u64::BITS - varint_u64.leading_zeros()).unwrap().saturating_sub(1)) / 7) + 1
}
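// Monero VarInts are little-endian base-128: each byte carries 7 bits of the value, with the
// high bit set if another byte follows. For example, 300 (0b1_0010_1100) encodes to the two
// bytes [0xAC, 0x02]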
pub(crate) fn write_byte<W: Write>(byte: &u8, w: &mut W) -> io::Result<()> {
w.write_all(&[*byte])
}
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn write_varint<W: Write, U: sealed::VarInt>(varint: &U, w: &mut W) -> io::Result<()> {
let mut varint: u64 = (*varint).try_into().map_err(|_| "varint exceeded u64").unwrap();
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
@@ -53,7 +68,7 @@ pub(crate) fn write_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
values: &[T],
w: &mut W,
) -> io::Result<()> {
write_varint(&values.len().try_into().unwrap(), w)?;
write_varint(&values.len(), w)?;
write_raw_vec(f, values, w)
}
@@ -67,31 +82,35 @@ pub(crate) fn read_byte<R: Read>(r: &mut R) -> io::Result<u8> {
Ok(read_bytes::<_, 1>(r)?[0])
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
pub(crate) fn read_u16<R: Read>(r: &mut R) -> io::Result<u16> {
read_bytes(r).map(u16::from_le_bytes)
}
pub(crate) fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
read_bytes(r).map(u32::from_le_bytes)
}
pub(crate) fn read_varint<R: Read>(r: &mut R) -> io::Result<u64> {
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
}
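// Non-canonical encodings are rejected: a trailing zero continuation byte (e.g. [0x80, 0x00]
// for 0) adds no information, and encodings overflowing 64 bits are similarly banned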
pub(crate) fn read_varint<R: Read, U: sealed::VarInt>(r: &mut R) -> io::Result<U> {
let mut bits = 0;
let mut res = 0;
while {
let b = read_byte(r)?;
if (bits != 0) && (b == 0) {
Err(io::Error::new(io::ErrorKind::Other, "non-canonical varint"))?;
Err(io::Error::other("non-canonical varint"))?;
}
if ((bits + 7) > 64) && (b >= (1 << (64 - bits))) {
Err(io::Error::new(io::ErrorKind::Other, "varint overflow"))?;
Err(io::Error::other("varint overflow"))?;
}
res += u64::from(b & (!VARINT_CONTINUATION_MASK)) << bits;
bits += 7;
b & VARINT_CONTINUATION_MASK == VARINT_CONTINUATION_MASK
} {}
Ok(res)
res.try_into().map_err(|_| io::Error::other("VarInt does not fit into integer type"))
}
// All scalar fields supported by monero-serai are checked to be canonical for valid transactions
@@ -101,8 +120,8 @@ pub(crate) fn read_varint<R: Read>(r: &mut R) -> io::Result<u64> {
// https://github.com/monero-project/monero/issues/8438, where some scalars had an archaic
// reduction applied
pub(crate) fn read_scalar<R: Read>(r: &mut R) -> io::Result<Scalar> {
Scalar::from_canonical_bytes(read_bytes(r)?)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "unreduced scalar"))
Option::from(Scalar::from_canonical_bytes(read_bytes(r)?))
.ok_or_else(|| io::Error::other("unreduced scalar"))
}
pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
@@ -111,14 +130,14 @@ pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
.decompress()
// Ban points which are either unreduced or -0
.filter(|point| point.compress().to_bytes() == bytes)
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))
.ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_torsion_free_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
read_point(r)
.ok()
.filter(|point| point.is_torsion_free())
.ok_or_else(|| io::Error::new(io::ErrorKind::Other, "invalid point"))
.filter(EdwardsPoint::is_torsion_free)
.ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
@@ -133,9 +152,16 @@ pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
Ok(res)
}
pub(crate) fn read_array<R: Read, T: Debug, F: Fn(&mut R) -> io::Result<T>, const N: usize>(
f: F,
r: &mut R,
) -> io::Result<[T; N]> {
read_raw_vec(f, N, r).map(|vec| vec.try_into().unwrap())
}
pub(crate) fn read_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
r: &mut R,
) -> io::Result<Vec<T>> {
read_raw_vec(f, read_varint(r)?.try_into().unwrap(), r)
read_raw_vec(f, read_varint(r)?, r)
}


@@ -33,9 +33,9 @@ fn standard_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, STANDARD).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Standard);
assert!(!addr.meta.kind.subaddress());
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), STANDARD);
@@ -46,9 +46,9 @@ fn integrated_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, INTEGRATED).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Integrated(PAYMENT_ID));
assert!(!addr.meta.kind.subaddress());
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), Some(PAYMENT_ID));
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), INTEGRATED);
@@ -59,9 +59,9 @@ fn subaddress() {
let addr = MoneroAddress::from_str(Network::Mainnet, SUBADDRESS).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Subaddress);
assert!(addr.meta.kind.subaddress());
assert!(addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.guaranteed());
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SUB_SPEND);
assert_eq!(addr.view.compress().to_bytes(), SUB_VIEW);
assert_eq!(addr.to_string(), SUBADDRESS);
@@ -73,8 +73,8 @@ fn featured() {
[(Network::Mainnet, 'C'), (Network::Testnet, 'K'), (Network::Stagenet, 'F')]
{
for _ in 0 .. 100 {
let spend = &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE;
let view = &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE;
let spend = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
let view = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
for features in 0 .. (1 << 3) {
const SUBADDRESS_FEATURE_BIT: u8 = 1;
@@ -100,9 +100,9 @@ fn featured() {
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.subaddress(), subaddress);
assert_eq!(addr.is_subaddress(), subaddress);
assert_eq!(addr.payment_id(), payment_id);
assert_eq!(addr.guaranteed(), guaranteed);
assert_eq!(addr.is_guaranteed(), guaranteed);
}
}
}
@@ -142,19 +142,23 @@ fn featured_vectors() {
}
_ => panic!("Unknown network"),
};
let spend =
CompressedEdwardsY::from_slice(&hex::decode(vector.spend).unwrap()).decompress().unwrap();
let view =
CompressedEdwardsY::from_slice(&hex::decode(vector.view).unwrap()).decompress().unwrap();
let spend = CompressedEdwardsY::from_slice(&hex::decode(vector.spend).unwrap())
.unwrap()
.decompress()
.unwrap();
let view = CompressedEdwardsY::from_slice(&hex::decode(vector.view).unwrap())
.unwrap()
.decompress()
.unwrap();
let addr = MoneroAddress::from_str(network, &vector.address).unwrap();
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.subaddress(), vector.subaddress);
assert_eq!(addr.is_subaddress(), vector.subaddress);
assert_eq!(vector.integrated, vector.payment_id.is_some());
assert_eq!(addr.payment_id(), vector.payment_id);
assert_eq!(addr.guaranteed(), vector.guaranteed);
assert_eq!(addr.is_guaranteed(), vector.guaranteed);
assert_eq!(
MoneroAddress::new(


@@ -1,5 +1,5 @@
use hex_literal::hex;
use rand::rngs::OsRng;
use rand_core::OsRng;
use curve25519_dalek::{scalar::Scalar, edwards::CompressedEdwardsY};
use multiexp::BatchVerifier;
@@ -9,6 +9,8 @@ use crate::{
ringct::bulletproofs::{Bulletproofs, original::OriginalStruct},
};
mod plus;
#[test]
fn bulletproofs_vector() {
let scalar = |scalar| Scalar::from_canonical_bytes(scalar).unwrap();
@@ -62,7 +64,7 @@ macro_rules! bulletproofs_tests {
fn $name() {
// Create Bulletproofs for all possible output quantities
let mut verifier = BatchVerifier::new(16);
for i in 1 .. 17 {
for i in 1 ..= 16 {
let commitments = (1 ..= i)
.map(|i| Commitment::new(random_scalar(&mut OsRng), u64::try_from(i).unwrap()))
.collect::<Vec<_>>();
@@ -81,7 +83,7 @@ macro_rules! bulletproofs_tests {
// Check Bulletproofs errors if we try to prove for too many outputs
let mut commitments = vec![];
for _ in 0 .. 17 {
commitments.push(Commitment::new(Scalar::zero(), 0));
commitments.push(Commitment::new(Scalar::ZERO, 0));
}
assert!(Bulletproofs::prove(&mut OsRng, &commitments, $plus).is_err());
}


@@ -0,0 +1,30 @@
use rand_core::{RngCore, OsRng};
use multiexp::BatchVerifier;
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::bulletproofs::plus::aggregate_range_proof::{
AggregateRangeStatement, AggregateRangeWitness,
},
};
#[test]
fn test_aggregate_range_proof() {
let mut verifier = BatchVerifier::new(16);
for m in 1 ..= 16 {
let mut commitments = vec![];
for _ in 0 .. m {
commitments.push(Commitment::new(*Scalar::random(&mut OsRng), OsRng.next_u64()));
}
let commitment_points = commitments.iter().map(|com| EdwardsPoint(com.calculate())).collect();
let statement = AggregateRangeStatement::new(commitment_points).unwrap();
let witness = AggregateRangeWitness::new(&commitments).unwrap();
let proof = statement.clone().prove(&mut OsRng, witness).unwrap();
statement.verify(&mut OsRng, &mut verifier, (), proof);
}
assert!(verifier.verify_vartime());
}


@@ -0,0 +1,4 @@
#[cfg(test)]
mod weighted_inner_product;
#[cfg(test)]
mod aggregate_range_proof;


@@ -0,0 +1,82 @@
// The weighted inner product relation is
// P = sum(g_bold * a) + sum(h_bold * b) + g * weighted_inner_product(a, b, y) + h * alpha
use rand_core::OsRng;
use multiexp::BatchVerifier;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::ringct::bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators,
weighted_inner_product::{WipStatement, WipWitness},
weighted_inner_product,
};
#[test]
fn test_zero_weighted_inner_product() {
#[allow(non_snake_case)]
let P = EdwardsPoint::identity();
let y = Scalar::random(&mut OsRng);
let generators = Generators::new().reduce(1);
let statement = WipStatement::new(generators, P, y);
let witness = WipWitness::new(ScalarVector::new(1), ScalarVector::new(1), Scalar::ZERO).unwrap();
let transcript = Scalar::random(&mut OsRng);
let proof = statement.clone().prove(&mut OsRng, transcript, witness).unwrap();
let mut verifier = BatchVerifier::new(1);
statement.verify(&mut OsRng, &mut verifier, (), transcript, proof);
assert!(verifier.verify_vartime());
}
#[test]
fn test_weighted_inner_product() {
// P = sum(g_bold * a) + sum(h_bold * b) + g * weighted_inner_product(a, b, y) + h * alpha
let mut verifier = BatchVerifier::new(6);
let generators = Generators::new();
for i in [1, 2, 4, 8, 16, 32] {
let generators = generators.reduce(i);
let g = generators.g();
let h = generators.h();
assert_eq!(generators.len(), i);
let mut g_bold = vec![];
let mut h_bold = vec![];
for i in 0 .. i {
g_bold.push(generators.generator(GeneratorsList::GBold1, i));
h_bold.push(generators.generator(GeneratorsList::HBold1, i));
}
let g_bold = PointVector(g_bold);
let h_bold = PointVector(h_bold);
let mut a = ScalarVector::new(i);
let mut b = ScalarVector::new(i);
let alpha = Scalar::random(&mut OsRng);
let y = Scalar::random(&mut OsRng);
let mut y_vec = ScalarVector::new(g_bold.len());
y_vec[0] = y;
for i in 1 .. y_vec.len() {
y_vec[i] = y_vec[i - 1] * y;
}
for i in 0 .. i {
a[i] = Scalar::random(&mut OsRng);
b[i] = Scalar::random(&mut OsRng);
}
#[allow(non_snake_case)]
let P = g_bold.multiexp(&a) +
h_bold.multiexp(&b) +
(g * weighted_inner_product(&a, &b, &y_vec)) +
(h * alpha);
let statement = WipStatement::new(generators, P, y);
let witness = WipWitness::new(a, b, alpha).unwrap();
let transcript = Scalar::random(&mut OsRng);
let proof = statement.clone().prove(&mut OsRng, transcript, witness).unwrap();
statement.verify(&mut OsRng, &mut verifier, (), transcript, proof);
}
assert!(verifier.verify_vartime());
}


@@ -24,7 +24,10 @@ use crate::{
use crate::ringct::clsag::{ClsagDetails, ClsagMultisig};
#[cfg(feature = "multisig")]
use frost::tests::{key_gen, algorithm_machines, sign};
use frost::{
Participant,
tests::{key_gen, algorithm_machines, sign},
};
const RING_LEN: u64 = 11;
const AMOUNT: u64 = 1337;
@@ -37,7 +40,7 @@ fn clsag() {
for real in 0 .. RING_LEN {
let msg = [1; 32];
let mut secrets = (Zeroizing::new(Scalar::zero()), Scalar::zero());
let mut secrets = (Zeroizing::new(Scalar::ZERO), Scalar::ZERO);
let mut ring = vec![];
for i in 0 .. RING_LEN {
let dest = Zeroizing::new(random_scalar(&mut OsRng));
@@ -50,7 +53,7 @@ fn clsag() {
amount = OsRng.next_u64();
}
ring
.push([dest.deref() * &ED25519_BASEPOINT_TABLE, Commitment::new(mask, amount).calculate()]);
.push([dest.deref() * ED25519_BASEPOINT_TABLE, Commitment::new(mask, amount).calculate()]);
}
let image = generate_key_image(&secrets.0);
@@ -63,7 +66,7 @@ fn clsag() {
Commitment::new(secrets.1, AMOUNT),
Decoys {
i: u8::try_from(real).unwrap(),
offsets: (1 ..= RING_LEN).into_iter().collect(),
offsets: (1 ..= RING_LEN).collect(),
ring: ring.clone(),
},
)
@@ -89,11 +92,11 @@ fn clsag_multisig() {
let mask;
let amount;
if i != u64::from(RING_INDEX) {
dest = &random_scalar(&mut OsRng) * &ED25519_BASEPOINT_TABLE;
dest = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
mask = random_scalar(&mut OsRng);
amount = OsRng.next_u64();
} else {
dest = keys[&1].group_key().0;
dest = keys[&Participant::new(1).unwrap()].group_key().0;
mask = randomness;
amount = AMOUNT;
}
@@ -103,15 +106,11 @@ fn clsag_multisig() {
let mask_sum = random_scalar(&mut OsRng);
let algorithm = ClsagMultisig::new(
RecommendedTranscript::new(b"Monero Serai CLSAG Test"),
keys[&1].group_key().0,
keys[&Participant::new(1).unwrap()].group_key().0,
Arc::new(RwLock::new(Some(ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys {
i: RING_INDEX,
offsets: (1 ..= RING_LEN).into_iter().collect(),
ring: ring.clone(),
},
Decoys { i: RING_INDEX, offsets: (1 ..= RING_LEN).collect(), ring: ring.clone() },
)
.unwrap(),
mask_sum,


@@ -1,3 +1,5 @@
mod unreduced_scalar;
mod clsag;
mod bulletproofs;
mod address;
mod seed;


@@ -0,0 +1,382 @@
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use crate::{
hash,
wallet::seed::{
Seed, SeedType,
classic::{self, trim_by_lang},
polyseed,
},
};
#[test]
fn test_classic_seed() {
struct Vector {
language: classic::Language,
seed: String,
spend: String,
view: String,
}
let vectors = [
Vector {
language: classic::Language::Chinese,
seed: "摇 曲 艺 武 滴 然 效 似 赏 式 祥 歌 买 疑 小 碧 堆 博 键 房 鲜 悲 付 喷 武".into(),
spend: "a5e4fff1706ef9212993a69f246f5c95ad6d84371692d63e9bb0ea112a58340d".into(),
view: "1176c43ce541477ea2f3ef0b49b25112b084e26b8a843e1304ac4677b74cdf02".into(),
},
Vector {
language: classic::Language::English,
seed: "washing thirsty occur lectures tuesday fainted toxic adapt \
abnormal memoir nylon mostly building shrugged online ember northern \
ruby woes dauntless boil family illness inroads northern"
.into(),
spend: "c0af65c0dd837e666b9d0dfed62745f4df35aed7ea619b2798a709f0fe545403".into(),
view: "513ba91c538a5a9069e0094de90e927c0cd147fa10428ce3ac1afd49f63e3b01".into(),
},
Vector {
language: classic::Language::Dutch,
seed: "setwinst riphagen vimmetje extase blief tuitelig fuiven meifeest \
ponywagen zesmaal ripdeal matverf codetaal leut ivoor rotten \
wisgerhof winzucht typograaf atrium rein zilt traktaat verzaagd setwinst"
.into(),
spend: "e2d2873085c447c2bc7664222ac8f7d240df3aeac137f5ff2022eaa629e5b10a".into(),
view: "eac30b69477e3f68093d131c7fd961564458401b07f8c87ff8f6030c1a0c7301".into(),
},
Vector {
language: classic::Language::French,
seed: "poids vaseux tarte bazar poivre effet entier nuance \
sensuel ennui pacte osselet poudre battre alibi mouton \
stade paquet pliage gibier type question position projet pliage"
.into(),
spend: "2dd39ff1a4628a94b5c2ec3e42fb3dfe15c2b2f010154dc3b3de6791e805b904".into(),
view: "6725b32230400a1032f31d622b44c3a227f88258939b14a7c72e00939e7bdf0e".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "minero ocupar mirar evadir octubre cal logro miope \
opaco disco ancla litio clase cuello nasal clase \
fiar avance deseo mente grumo negro cordón croqueta clase"
.into(),
spend: "ae2c9bebdddac067d73ec0180147fc92bdf9ac7337f1bcafbbe57dd13558eb02".into(),
view: "18deafb34d55b7a43cae2c1c1c206a3c80c12cc9d1f84640b484b95b7fec3e05".into(),
},
Vector {
language: classic::Language::German,
seed: "Kaliber Gabelung Tapir Liveband Favorit Specht Enklave Nabel \
Jupiter Foliant Chronik nisten löten Vase Aussage Rekord \
Yeti Gesetz Eleganz Alraune Künstler Almweide Jahr Kastanie Almweide"
.into(),
spend: "79801b7a1b9796856e2397d862a113862e1fdc289a205e79d8d70995b276db06".into(),
view: "99f0ec556643bd9c038a4ed86edcb9c6c16032c4622ed2e000299d527a792701".into(),
},
Vector {
language: classic::Language::Italian,
seed: "cavo pancetta auto fulmine alleanza filmato diavolo prato \
forzare meritare litigare lezione segreto evasione votare buio \
licenza cliente dorso natale crescere vento tutelare vetta evasione"
.into(),
spend: "5e7fd774eb00fa5877e2a8b4dc9c7ffe111008a3891220b56a6e49ac816d650a".into(),
view: "698a1dce6018aef5516e82ca0cb3e3ec7778d17dfb41a137567bfa2e55e63a03".into(),
},
Vector {
language: classic::Language::Portuguese,
seed: "agito eventualidade onus itrio holograma sodomizar objetos dobro \
iugoslavo bcrepuscular odalisca abjeto iuane darwinista eczema acetona \
cibernetico hoquei gleba driver buffer azoto megera nogueira agito"
.into(),
spend: "13b3115f37e35c6aa1db97428b897e584698670c1b27854568d678e729200c0f".into(),
view: "ad1b4fd35270f5f36c4da7166672b347e75c3f4d41346ec2a06d1d0193632801".into(),
},
Vector {
language: classic::Language::Japanese,
seed: "ぜんぶ どうぐ おたがい せんきょ おうじ そんちょう じゅしん いろえんぴつ \
かほう つかれる えらぶ にちじょう くのう にちようび ぬまえび さんきゃく \
おおや ちぬき うすめる いがく せつでん さうな すいえい せつだん おおや"
.into(),
spend: "c56e895cdb13007eda8399222974cdbab493640663804b93cbef3d8c3df80b0b".into(),
view: "6c3634a313ec2ee979d565c33888fd7c3502d696ce0134a8bc1a2698c7f2c508".into(),
},
Vector {
language: classic::Language::Russian,
seed: "шатер икра нация ехать получать инерция доза реальный \
рыжий таможня лопата душа веселый клетка атлас лекция \
обгонять паек наивный лыжный дурак стать ежик задача паек"
.into(),
spend: "7cb5492df5eb2db4c84af20766391cd3e3662ab1a241c70fc881f3d02c381f05".into(),
view: "fcd53e41ec0df995ab43927f7c44bc3359c93523d5009fb3f5ba87431d545a03".into(),
},
Vector {
language: classic::Language::Esperanto,
seed: "ukazo klini peco etikedo fabriko imitado onklino urino \
pudro incidento kumuluso ikono smirgi hirundo uretro krii \
sparkado super speciala pupo alpinisto cvana vokegi zombio fabriko"
.into(),
spend: "82ebf0336d3b152701964ed41df6b6e9a035e57fc98b84039ed0bd4611c58904".into(),
view: "cd4d120e1ea34360af528f6a3e6156063312d9cefc9aa6b5218d366c0ed6a201".into(),
},
Vector {
language: classic::Language::Lojban,
seed: "jetnu vensa julne xrotu xamsi julne cutci dakli \
mlatu xedja muvgau palpi xindo sfubu ciste cinri \
blabi darno dembi janli blabi fenki bukpu burcu blabi"
.into(),
spend: "e4f8c6819ab6cf792cebb858caabac9307fd646901d72123e0367ebc0a79c200".into(),
view: "c806ce62bafaa7b2d597f1a1e2dbe4a2f96bfd804bf6f8420fc7f4a6bd700c00".into(),
},
Vector {
language: classic::Language::EnglishOld,
seed: "glorious especially puff son moment add youth nowhere \
throw glide grip wrong rhythm consume very swear \
bitter heavy eventually begin reason flirt type unable"
.into(),
spend: "647f4765b66b636ff07170ab6280a9a6804dfbaf19db2ad37d23be024a18730b".into(),
view: "045da65316a906a8c30046053119c18020b07a7a3a6ef5c01ab2a8755416bd02".into(),
},
];
for vector in vectors {
let trim_seed = |seed: &str| {
seed
.split_whitespace()
.map(|word| trim_by_lang(word, vector.language))
.collect::<Vec<_>>()
.join(" ")
};
// Test against Monero
{
let seed = Seed::from_string(Zeroizing::new(vector.seed.clone())).unwrap();
assert_eq!(seed, Seed::from_string(Zeroizing::new(trim_seed(&vector.seed))).unwrap());
let spend: [u8; 32] = hex::decode(vector.spend).unwrap().try_into().unwrap();
// For Classic seeds, Monero directly uses the entropy as the spend key
assert_eq!(
Option::<Scalar>::from(Scalar::from_canonical_bytes(*seed.entropy())),
Option::<Scalar>::from(Scalar::from_canonical_bytes(spend)),
);
let view: [u8; 32] = hex::decode(vector.view).unwrap().try_into().unwrap();
// Monero then derives the view key as H(spend)
assert_eq!(
Scalar::from_bytes_mod_order(hash(&spend)),
Scalar::from_canonical_bytes(view).unwrap()
);
assert_eq!(
Seed::from_entropy(SeedType::Classic(vector.language), Zeroizing::new(spend), None)
.unwrap(),
seed
);
}
// Test against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Classic(vector.language));
assert_eq!(seed, Seed::from_string(Zeroizing::new(trim_seed(&seed.to_string()))).unwrap());
assert_eq!(
seed,
Seed::from_entropy(SeedType::Classic(vector.language), seed.entropy(), None).unwrap()
);
assert_eq!(seed, Seed::from_string(seed.to_string()).unwrap());
}
}
}
#[test]
fn test_polyseed() {
struct Vector {
language: polyseed::Language,
seed: String,
entropy: String,
birthday: u64,
has_prefix: bool,
has_accent: bool,
}
let vectors = [
Vector {
language: polyseed::Language::English,
seed: "raven tail swear infant grief assist regular lamp \
duck valid someone little harsh puppy airport language"
.into(),
entropy: "dd76e7359a0ded37cd0ff0f3c829a5ae01673300000000000000000000000000".into(),
birthday: 1638446400,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Spanish,
seed: "eje fin parte célebre tabú pestaña lienzo puma \
prisión hora regalo lengua existir lápiz lote sonoro"
.into(),
entropy: "5a2b02df7db21fcbe6ec6df137d54c7b20fd2b00000000000000000000000000".into(),
birthday: 3118651200,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::French,
seed: "valable arracher décaler jeudi amusant dresser mener épaissir risible \
prouesse réserve ampleur ajuster muter caméra enchère"
.into(),
entropy: "11cfd870324b26657342c37360c424a14a050b00000000000000000000000000".into(),
birthday: 1679314966,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::Italian,
seed: "caduco midollo copione meninge isotopo illogico riflesso tartaruga fermento \
olandese normale tristezza episodio voragine forbito achille"
.into(),
entropy: "7ecc57c9b4652d4e31428f62bec91cfd55500600000000000000000000000000".into(),
birthday: 1679316358,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Portuguese,
seed: "caverna custear azedo adeus senador apertada sedoso omitir \
sujeito aurora videira molho cartaz gesso dentista tapar"
.into(),
entropy: "45473063711376cae38f1b3eba18c874124e1d00000000000000000000000000".into(),
birthday: 1679316657,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Czech,
seed: "usmrtit nora dotaz komunita zavalit funkce mzda sotva akce \
vesta kabel herna stodola uvolnit ustrnout email"
.into(),
entropy: "7ac8a4efd62d9c3c4c02e350d32326df37821c00000000000000000000000000".into(),
birthday: 1679316898,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Korean,
seed: "전망 선풍기 국제 무궁화 설사 기름 이론적 해안 절망 예선 \
지우개 보관 절망 말기 시각 귀신"
.into(),
entropy: "684663fda420298f42ed94b2c512ed38ddf12b00000000000000000000000000".into(),
birthday: 1679317073,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::Japanese,
seed: "うちあわせ ちつじょ つごう しはい けんこう とおる てみやげ はんとし たんとう \
といれ おさない おさえる むかう ぬぐう なふだ せまる"
.into(),
entropy: "94e6665518a6286c6e3ba508a2279eb62b771f00000000000000000000000000".into(),
birthday: 1679318722,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseTraditional,
seed: "亂 挖 斤 柄 代 圈 枝 轄 魯 論 函 開 勘 番 榮 壁".into(),
entropy: "b1594f585987ab0fd5a31da1f0d377dae5283f00000000000000000000000000".into(),
birthday: 1679426433,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseSimplified,
seed: "啊 百 族 府 票 划 伪 仓 叶 虾 借 溜 晨 左 等 鬼".into(),
entropy: "21cdd366f337b89b8d1bc1df9fe73047c22b0300000000000000000000000000".into(),
birthday: 1679426817,
has_prefix: false,
has_accent: false,
},
];
for vector in vectors {
let add_whitespace = |mut seed: String| {
seed.push(' ');
seed
};
let seed_without_accents = |seed: &str| {
seed
.split_whitespace()
.map(|w| w.chars().filter(|c| c.is_ascii()).collect::<String>())
.collect::<Vec<_>>()
.join(" ")
};
let trim_seed = |seed: &str| {
let seed_to_trim =
if vector.has_accent { seed_without_accents(seed) } else { seed.to_string() };
seed_to_trim
.split_whitespace()
.map(|w| {
let mut ascii = 0;
let mut to_take = w.len();
for (i, char) in w.chars().enumerate() {
if char.is_ascii() {
ascii += 1;
}
if ascii == polyseed::PREFIX_LEN {
// +1 to include this character, which puts us at the prefix length
to_take = i + 1;
break;
}
}
w.chars().take(to_take).collect::<String>()
})
.collect::<Vec<_>>()
.join(" ")
};
// String -> Seed
let seed = Seed::from_string(Zeroizing::new(vector.seed.clone())).unwrap();
// Make sure a version with added whitespace still works
let whitespaced_seed =
Seed::from_string(Zeroizing::new(add_whitespace(vector.seed.clone()))).unwrap();
assert_eq!(seed, whitespaced_seed);
// Check trimmed versions works
if vector.has_prefix {
let trimmed_seed = Seed::from_string(Zeroizing::new(trim_seed(&vector.seed))).unwrap();
assert_eq!(seed, trimmed_seed);
}
// Check versions without accents work
if vector.has_accent {
let seed_without_accents =
Seed::from_string(Zeroizing::new(seed_without_accents(&vector.seed))).unwrap();
assert_eq!(seed, seed_without_accents);
}
let entropy = Zeroizing::new(hex::decode(vector.entropy).unwrap().try_into().unwrap());
assert_eq!(seed.entropy(), entropy);
assert!(seed.birthday().abs_diff(vector.birthday) < polyseed::TIME_STEP);
// Entropy -> Seed
let from_entropy =
Seed::from_entropy(SeedType::Polyseed(vector.language), entropy, Some(seed.birthday()))
.unwrap();
assert_eq!(seed.to_string(), from_entropy.to_string());
// Check against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Polyseed(vector.language));
assert_eq!(seed, Seed::from_string(seed.to_string()).unwrap());
assert_eq!(
seed,
Seed::from_entropy(
SeedType::Polyseed(vector.language),
seed.entropy(),
Some(seed.birthday())
)
.unwrap()
);
}
}
}


@@ -0,0 +1,32 @@
use curve25519_dalek::scalar::Scalar;
use crate::unreduced_scalar::*;
#[test]
fn recover_scalars() {
let test_recover = |stored: &str, recovered: &str| {
let stored = UnreducedScalar(hex::decode(stored).unwrap().try_into().unwrap());
let recovered =
Scalar::from_canonical_bytes(hex::decode(recovered).unwrap().try_into().unwrap()).unwrap();
assert_eq!(stored.recover_monero_slide_scalar(), recovered);
};
// https://www.moneroinflation.com/static/data_py/report_scalars_df.pdf
// Table 4.
test_recover(
"cb2be144948166d0a9edb831ea586da0c376efa217871505ad77f6ff80f203f8",
"b8ffd6a1aee47828808ab0d4c8524cb5c376efa217871505ad77f6ff80f20308",
);
test_recover(
"343d3df8a1051c15a400649c423dc4ed58bef49c50caef6ca4a618b80dee22f4",
"21113355bc682e6d7a9d5b3f2137a30259bef49c50caef6ca4a618b80dee2204",
);
test_recover(
"c14f75d612800ca2c1dcfa387a42c9cc086c005bc94b18d204dd61342418eba7",
"4f473804b1d27ab2c789c80ab21d034a096c005bc94b18d204dd61342418eb07",
);
test_recover(
"000102030405060708090a0b0c0d0e0f826c4f6e2329a31bc5bc320af0b2bcbb",
"a124cfd387f461bf3719e03965ee6877826c4f6e2329a31bc5bc320af0b2bc0b",
);
}


@@ -1,31 +1,31 @@
use core::cmp::Ordering;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::{
scalar::Scalar,
edwards::{EdwardsPoint, CompressedEdwardsY},
};
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use crate::{
Protocol, hash,
serialize::*,
ringct::{RctBase, RctPrunable, RctSignatures},
ring_signatures::RingSignature,
ringct::{bulletproofs::Bulletproofs, RctType, RctBase, RctPrunable, RctSignatures},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Input {
Gen(u64),
ToKey { amount: u64, key_offsets: Vec<u64>, key_image: EdwardsPoint },
ToKey { amount: Option<u64>, key_offsets: Vec<u64>, key_image: EdwardsPoint },
}
impl Input {
// Worst-case predictive len
pub(crate) fn fee_weight(ring_len: usize) -> usize {
pub(crate) fn fee_weight(offsets_weight: usize) -> usize {
// Uses 1 byte for the input type
// Uses 1 byte for the VarInt amount due to amount being 0
// Uses 1 byte for the VarInt encoding of the length of the ring as well
1 + 1 + 1 + (8 * ring_len) + 32
1 + 1 + offsets_weight + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
@@ -37,24 +37,37 @@ impl Input {
Input::ToKey { amount, key_offsets, key_image } => {
w.write_all(&[2])?;
write_varint(amount, w)?;
write_varint(&amount.unwrap_or(0), w)?;
write_vec(write_varint, key_offsets, w)?;
write_point(key_image, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Input> {
Ok(match read_byte(r)? {
255 => Input::Gen(read_varint(r)?),
2 => Input::ToKey {
amount: read_varint(r)?,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
},
_ => {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown/unused input type"))?
2 => {
let amount = read_varint(r)?;
// https://github.com/monero-project/monero/
// blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L860-L863
// A non-RCT 0-amount input can't exist because only RCT TXs can have a 0-amount output
// That's why collapsing a 0 amount to None is safe, even without knowing if this TX is RCT
let amount = if amount == 0 { None } else { Some(amount) };
Input::ToKey {
amount,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
}
}
_ => Err(io::Error::other("Tried to deserialize unknown/unused input type"))?,
})
}
}
@@ -62,18 +75,20 @@ impl Input {
// Doesn't bother moving to an enum for the unused Script classes
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
pub amount: u64,
pub amount: Option<u64>,
pub key: CompressedEdwardsY,
pub view_tag: Option<u8>,
}
impl Output {
pub(crate) fn fee_weight() -> usize {
1 + 1 + 32 + 1
pub(crate) fn fee_weight(view_tags: bool) -> usize {
// Uses 1 byte for the output type
// Uses 1 byte for the VarInt amount due to amount being 0
1 + 1 + 32 + if view_tags { 1 } else { 0 }
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.amount, w)?;
write_varint(&self.amount.unwrap_or(0), w)?;
w.write_all(&[2 + u8::from(self.view_tag.is_some())])?;
w.write_all(&self.key.to_bytes())?;
if let Some(view_tag) = self.view_tag {
@@ -82,15 +97,27 @@ impl Output {
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Output> {
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(8 + 1 + 32);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(rct: bool, r: &mut R) -> io::Result<Output> {
let amount = read_varint(r)?;
let amount = if rct {
if amount != 0 {
Err(io::Error::other("RCT TX output wasn't 0"))?;
}
None
} else {
Some(amount)
};
let view_tag = match read_byte(r)? {
2 => false,
3 => true,
_ => Err(io::Error::new(
io::ErrorKind::Other,
"Tried to deserialize unknown/unused output type",
))?,
_ => Err(io::Error::other("Tried to deserialize unknown/unused output type"))?,
};
Ok(Output {
@@ -152,13 +179,19 @@ pub struct TransactionPrefix {
}
impl TransactionPrefix {
pub(crate) fn fee_weight(ring_len: usize, inputs: usize, outputs: usize, extra: usize) -> usize {
pub(crate) fn fee_weight(
decoy_weights: &[usize],
outputs: usize,
view_tags: bool,
extra: usize,
) -> usize {
// Assumes Timelock::None since this library won't let you create a TX with a timelock
// 1 input for every decoy weight
1 + 1 +
varint_len(inputs) +
(inputs * Input::fee_weight(ring_len)) +
1 +
(outputs * Output::fee_weight()) +
varint_len(decoy_weights.len()) +
decoy_weights.iter().map(|&offsets_weight| Input::fee_weight(offsets_weight)).sum::<usize>() +
varint_len(outputs) +
(outputs * Output::fee_weight(view_tags)) +
varint_len(extra) +
extra
}
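A worked example of this estimate: one input whose decoy offsets weigh 10 bytes, two view-tagged outputs, and 40 bytes of extra (every VarInt involved fits in one byte, so each varint_len is 1) gives 1 + 1 + 1 + (1 + 1 + 10 + 32) + 1 + (2 * (1 + 1 + 32 + 1)) + 1 + 40 = 159 bytes.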
@@ -168,48 +201,72 @@ impl TransactionPrefix {
self.timelock.write(w)?;
write_vec(Input::write, &self.inputs, w)?;
write_vec(Output::write, &self.outputs, w)?;
write_varint(&self.extra.len().try_into().unwrap(), w)?;
write_varint(&self.extra.len(), w)?;
w.write_all(&self.extra)
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<TransactionPrefix> {
let version = read_varint(r)?;
// TODO: Create an enum out of version
if (version == 0) || (version > 2) {
Err(io::Error::other("unrecognized transaction version"))?;
}
let timelock = Timelock::from_raw(read_varint(r)?);
let inputs = read_vec(|r| Input::read(r), r)?;
if inputs.is_empty() {
Err(io::Error::other("transaction had no inputs"))?;
}
let is_miner_tx = matches!(inputs[0], Input::Gen { .. });
let mut prefix = TransactionPrefix {
version: read_varint(r)?,
timelock: Timelock::from_raw(read_varint(r)?),
inputs: read_vec(Input::read, r)?,
outputs: read_vec(Output::read, r)?,
version,
timelock,
inputs,
outputs: read_vec(|r| Output::read((!is_miner_tx) && (version == 2), r), r)?,
extra: vec![],
};
prefix.extra = read_vec(read_byte, r)?;
Ok(prefix)
}
pub fn hash(&self) -> [u8; 32] {
hash(&self.serialize())
}
}
/// Monero transaction. For version 1, rct_signatures still contains an accurate fee value.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Transaction {
pub prefix: TransactionPrefix,
pub signatures: Vec<(Scalar, Scalar)>,
pub signatures: Vec<RingSignature>,
pub rct_signatures: RctSignatures,
}
impl Transaction {
pub(crate) fn fee_weight(
protocol: Protocol,
inputs: usize,
decoy_weights: &[usize],
outputs: usize,
extra: usize,
fee: u64,
) -> usize {
TransactionPrefix::fee_weight(protocol.ring_len(), inputs, outputs, extra) +
RctSignatures::fee_weight(protocol, inputs, outputs)
TransactionPrefix::fee_weight(decoy_weights, outputs, protocol.view_tags(), extra) +
RctSignatures::fee_weight(protocol, decoy_weights.len(), outputs, fee)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.prefix.write(w)?;
if self.prefix.version == 1 {
for sig in &self.signatures {
write_scalar(&sig.0, w)?;
write_scalar(&sig.1, w)?;
for ring_sig in &self.signatures {
ring_sig.write(w)?;
}
Ok(())
} else if self.prefix.version == 2 {
@@ -219,27 +276,59 @@ impl Transaction {
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(2048);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Transaction> {
let prefix = TransactionPrefix::read(r)?;
let mut signatures = vec![];
let mut rct_signatures = RctSignatures {
base: RctBase { fee: 0, ecdh_info: vec![], commitments: vec![] },
base: RctBase { fee: 0, encrypted_amounts: vec![], pseudo_outs: vec![], commitments: vec![] },
prunable: RctPrunable::Null,
};
if prefix.version == 1 {
for _ in 0 .. prefix.inputs.len() {
signatures.push((read_scalar(r)?, read_scalar(r)?));
}
rct_signatures.base.fee = prefix
signatures = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => 0,
Input::ToKey { amount, .. } => *amount,
.filter_map(|input| match input {
Input::ToKey { key_offsets, .. } => Some(RingSignature::read(key_offsets.len(), r)),
_ => None,
})
.sum::<u64>()
.saturating_sub(prefix.outputs.iter().map(|output| output.amount).sum());
.collect::<Result<_, _>>()?;
if !matches!(prefix.inputs[0], Input::Gen(..)) {
let in_amount = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => Err(io::Error::other("Input::Gen present in non-coinbase v1 TX"))?,
// v1 TXs can burn v2 outputs
// dcff3fe4f914d6b6bd4a5b800cc4cca8f2fdd1bd73352f0700d463d36812f328 is one such TX
// It includes a pre-RCT signature for a RCT output, yet if you interpret the RCT
// output as being worth 0, it passes a sum check (guaranteed since no outputs are RCT)
Input::ToKey { amount, .. } => Ok(amount.unwrap_or(0)),
})
.collect::<io::Result<Vec<_>>>()?
.into_iter()
.sum::<u64>();
let mut out = 0;
for output in &prefix.outputs {
if output.amount.is_none() {
Err(io::Error::other("v1 transaction had a 0-amount output"))?;
}
out += output.amount.unwrap();
}
if in_amount < out {
Err(io::Error::other("transaction spent more than it had as inputs"))?;
}
rct_signatures.base.fee = in_amount - out;
}
} else if prefix.version == 2 {
rct_signatures = RctSignatures::read(
prefix
@@ -254,7 +343,7 @@ impl Transaction {
r,
)?;
} else {
Err(io::Error::new(io::ErrorKind::Other, "Tried to deserialize unknown version"))?;
Err(io::Error::other("Tried to deserialize unknown version"))?;
}
Ok(Transaction { prefix, signatures, rct_signatures })
@@ -268,22 +357,19 @@ impl Transaction {
} else {
let mut hashes = Vec::with_capacity(96);
self.prefix.write(&mut buf).unwrap();
hashes.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
self.rct_signatures.base.write(&mut buf, self.rct_signatures.prunable.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
match self.rct_signatures.prunable {
RctPrunable::Null => buf.resize(32, 0),
hashes.extend(&match self.rct_signatures.prunable {
RctPrunable::Null => [0; 32],
_ => {
self.rct_signatures.prunable.write(&mut buf).unwrap();
buf = hash(&buf).to_vec();
self.rct_signatures.prunable.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hash(&buf)
}
}
hashes.extend(&buf);
});
hash(&hashes)
}
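Put differently, the v2 (RCT) transaction hash is a hash of three component hashes. A minimal sketch of that structure, assuming hash is this crate's Keccak-256 helper and the components are already serialized:

fn v2_tx_hash(prefix: &[u8], rct_base: &[u8], rct_prunable: Option<&[u8]>) -> [u8; 32] {
  let mut hashes = Vec::with_capacity(96);
  hashes.extend(hash(prefix));
  hashes.extend(hash(rct_base));
  // A null prunable component contributes 32 zero bytes, not a hash
  hashes.extend(rct_prunable.map(hash).unwrap_or([0; 32]));
  hash(&hashes)
}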
@@ -291,14 +377,16 @@ impl Transaction {
/// Calculate the hash of this transaction as needed for signing it.
pub fn signature_hash(&self) -> [u8; 32] {
if self.prefix.version == 1 {
return self.prefix.hash();
}
let mut buf = Vec::with_capacity(2048);
let mut sig_hash = Vec::with_capacity(96);
self.prefix.write(&mut buf).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
sig_hash.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.prunable.rct_type()).unwrap();
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
@@ -307,4 +395,39 @@ impl Transaction {
hash(&sig_hash)
}
fn is_rct_bulletproof(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::Bulletproofs | RctType::BulletproofsCompactAmount | RctType::Clsag => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::BulletproofsPlus => false,
}
}
fn is_rct_bulletproof_plus(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::BulletproofsPlus => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag => false,
}
}
/// Calculate the transaction's weight.
pub fn weight(&self) -> usize {
let blob_size = self.serialize().len();
let bp = self.is_rct_bulletproof();
let bp_plus = self.is_rct_bulletproof_plus();
if !(bp || bp_plus) {
blob_size
} else {
blob_size + Bulletproofs::calculate_bp_clawback(bp_plus, self.prefix.outputs.len()).0
}
}
}

View File

@@ -0,0 +1,137 @@
use core::cmp::Ordering;
use std_shims::{
sync::OnceLock,
io::{self, *},
};
use curve25519_dalek::scalar::Scalar;
use crate::serialize::*;
static PRECOMPUTED_SCALARS_CELL: OnceLock<[Scalar; 8]> = OnceLock::new();
/// Precomputed scalars used to recover an incorrectly reduced scalar.
#[allow(non_snake_case)]
pub(crate) fn PRECOMPUTED_SCALARS() -> [Scalar; 8] {
*PRECOMPUTED_SCALARS_CELL.get_or_init(|| {
let mut precomputed_scalars = [Scalar::ONE; 8];
for (i, scalar) in precomputed_scalars.iter_mut().enumerate().skip(1) {
*scalar = Scalar::from(((i * 2) + 1) as u8);
}
precomputed_scalars
})
}
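The table above ends up holding the odd scalars 1, 3, 5, ..., 15, i.e. the digit values a width-5 NAF can produce. A crate-internal sanity check (the function is pub(crate)):

let scalars = PRECOMPUTED_SCALARS();
assert_eq!(scalars[0], Scalar::ONE);
assert_eq!(scalars[1], Scalar::from(3u8));
assert_eq!(scalars[7], Scalar::from(15u8));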
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct UnreducedScalar(pub [u8; 32]);
impl UnreducedScalar {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.0)
}
pub fn read<R: Read>(r: &mut R) -> io::Result<UnreducedScalar> {
Ok(UnreducedScalar(read_bytes(r)?))
}
pub fn as_bytes(&self) -> &[u8; 32] {
&self.0
}
fn as_bits(&self) -> [u8; 256] {
let mut bits = [0; 256];
for (i, bit) in bits.iter_mut().enumerate() {
*bit = core::hint::black_box(1 & (self.0[i / 8] >> (i % 8)))
}
bits
}
/// Computes the non-adjacent form of this scalar with width 5.
///
/// This matches Monero's `slide` function and intentionally gives incorrect outputs under
/// certain conditions in order to match Monero.
///
/// This function does not execute in constant time.
fn non_adjacent_form(&self) -> [i8; 256] {
let bits = self.as_bits();
let mut naf = [0i8; 256];
for (b, bit) in bits.into_iter().enumerate() {
naf[b] = bit as i8;
}
for i in 0 .. 256 {
if naf[i] != 0 {
// if the bit is a one, work our way up through the window
// combining the bits with this bit.
for b in 1 .. 6 {
if (i + b) >= 256 {
// if we are at the end of the array, break out of the loop.
break;
}
// potential_carry - the value of the bit at i+b, shifted up to be relative to the bit at i
let potential_carry = naf[i + b] << b;
if potential_carry != 0 {
if (naf[i] + potential_carry) <= 15 {
// if our current "bit" plus the potential carry is less than 16
// add it to our current "bit" and set the potential carry bit to 0.
naf[i] += potential_carry;
naf[i + b] = 0;
} else if (naf[i] - potential_carry) >= -15 {
// else if our current "bit" minus the potential carry is greater than -16,
// take it away from our current "bit".
// we then work our way up through the bits, setting ones to zero; when we
// hit the first zero, we change it to one and stop, to factor in the minus.
naf[i] -= potential_carry;
#[allow(clippy::needless_range_loop)]
for k in (i + b) .. 256 {
if naf[k] == 0 {
naf[k] = 1;
break;
}
naf[k] = 0;
}
} else {
break;
}
}
}
}
}
naf
}
/// Recover the scalar that an array of bytes was incorrectly interpreted as by Monero's `slide`
/// function.
///
/// In Borromean range proofs Monero was not checking that the scalars used were
/// reduced. This led to the stored scalar being interpreted as a different scalar;
/// this function recovers that scalar.
///
/// See: https://github.com/monero-project/monero/issues/8438
pub fn recover_monero_slide_scalar(&self) -> Scalar {
if self.0[31] & 128 == 0 {
// Computing the w-NAF of a number can only give an output with 1 more bit than
// the number, so even if the number isn't reduced, the `slide` function will be
// correct when the last bit isn't set.
return Scalar::from_bytes_mod_order(self.0);
}
let precomputed_scalars = PRECOMPUTED_SCALARS();
let mut recovered = Scalar::ZERO;
for &numb in self.non_adjacent_form().iter().rev() {
recovered += recovered;
match numb.cmp(&0) {
Ordering::Greater => recovered += precomputed_scalars[(numb as usize) / 2],
Ordering::Less => recovered -= precomputed_scalars[((-numb) as usize) / 2],
Ordering::Equal => (),
}
}
recovered
}
}
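Tying this back to the test vectors earlier in this diff (assuming test_recover asserts the second hex string is the recovery of the first, and using the hex crate this codebase already depends on):

let mut unreduced = [0; 32];
hex::decode_to_slice(
  "cb2be144948166d0a9edb831ea586da0c376efa217871505ad77f6ff80f203f8",
  &mut unreduced,
)
.unwrap();
let recovered = UnreducedScalar(unreduced).recover_monero_slide_scalar();
assert_eq!(
  hex::encode(recovered.to_bytes()),
  "b8ffd6a1aee47828808ab0d4c8524cb5c376efa217871505ad77f6ff80f20308",
);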

View File

@@ -1,7 +1,5 @@
use core::{marker::PhantomData, fmt::Debug};
use std::string::ToString;
use thiserror::Error;
use std_shims::string::{String, ToString};
use zeroize::Zeroize;
@@ -60,7 +58,7 @@ pub enum AddressSpec {
}
impl AddressType {
pub fn subaddress(&self) -> bool {
pub fn is_subaddress(&self) -> bool {
matches!(self, AddressType::Subaddress) ||
matches!(self, AddressType::Featured { subaddress: true, .. })
}
@@ -75,7 +73,7 @@ impl AddressType {
}
}
pub fn guaranteed(&self) -> bool {
pub fn is_guaranteed(&self) -> bool {
matches!(self, AddressType::Featured { guaranteed: true, .. })
}
}
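A trivial sketch of the renamed predicates:

assert!(AddressType::Subaddress.is_subaddress());
assert!(!AddressType::Subaddress.is_guaranteed());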
@@ -114,19 +112,20 @@ impl<B: AddressBytes> Zeroize for AddressMeta<B> {
}
/// Error when decoding an address.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Error)]
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum AddressError {
#[error("invalid address byte")]
#[cfg_attr(feature = "std", error("invalid address byte"))]
InvalidByte,
#[error("invalid address encoding")]
#[cfg_attr(feature = "std", error("invalid address encoding"))]
InvalidEncoding,
#[error("invalid length")]
#[cfg_attr(feature = "std", error("invalid length"))]
InvalidLength,
#[error("invalid key")]
#[cfg_attr(feature = "std", error("invalid key"))]
InvalidKey,
#[error("unknown features")]
#[cfg_attr(feature = "std", error("unknown features"))]
UnknownFeatures,
#[error("different network than expected")]
#[cfg_attr(feature = "std", error("different network than expected"))]
DifferentNetwork,
}
@@ -169,27 +168,40 @@ impl<B: AddressBytes> AddressMeta<B> {
meta.ok_or(AddressError::InvalidByte)
}
pub fn subaddress(&self) -> bool {
self.kind.subaddress()
pub fn is_subaddress(&self) -> bool {
self.kind.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
self.kind.payment_id()
}
pub fn guaranteed(&self) -> bool {
self.kind.guaranteed()
pub fn is_guaranteed(&self) -> bool {
self.kind.is_guaranteed()
}
}
/// A Monero address, composed of metadata and a spend/view key.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[derive(Clone, Copy, PartialEq, Eq)]
pub struct Address<B: AddressBytes> {
pub meta: AddressMeta<B>,
pub spend: EdwardsPoint,
pub view: EdwardsPoint,
}
impl<B: AddressBytes> core::fmt::Debug for Address<B> {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt
.debug_struct("Address")
.field("meta", &self.meta)
.field("spend", &hex::encode(self.spend.compress().0))
.field("view", &hex::encode(self.view.compress().0))
// This is not a real field, yet it's the most valuable thing to know when debugging
.field("(address)", &self.to_string())
.finish()
}
}
impl<B: AddressBytes> Zeroize for Address<B> {
fn zeroize(&mut self) {
self.meta.zeroize();
@@ -285,16 +297,16 @@ impl<B: AddressBytes> Address<B> {
self.meta.network
}
pub fn subaddress(&self) -> bool {
self.meta.subaddress()
pub fn is_subaddress(&self) -> bool {
self.meta.is_subaddress()
}
pub fn payment_id(&self) -> Option<[u8; 8]> {
self.meta.payment_id()
}
pub fn guaranteed(&self) -> bool {
self.meta.guaranteed()
pub fn is_guaranteed(&self) -> bool {
self.meta.is_guaranteed()
}
}

View File

@@ -1,17 +1,26 @@
use std::{sync::Mutex, collections::HashSet};
use std_shims::{vec::Vec, collections::HashSet};
use lazy_static::lazy_static;
#[cfg(feature = "cache-distribution")]
use std_shims::sync::OnceLock;
#[cfg(all(feature = "cache-distribution", not(feature = "std")))]
use std_shims::sync::Mutex;
#[cfg(all(feature = "cache-distribution", feature = "std"))]
use futures::lock::Mutex;
use zeroize::{Zeroize, ZeroizeOnDrop};
use rand_core::{RngCore, CryptoRng};
use rand_distr::{Distribution, Gamma};
use zeroize::{Zeroize, ZeroizeOnDrop};
#[cfg(not(feature = "std"))]
use rand_distr::num_traits::Float;
use curve25519_dalek::edwards::EdwardsPoint;
use crate::{
serialize::varint_len,
wallet::SpendableOutput,
rpc::{RpcError, Rpc},
rpc::{RpcError, RpcConnection, Rpc},
};
const LOCK_WINDOW: usize = 10;
@@ -21,15 +30,21 @@ const BLOCK_TIME: usize = 120;
const BLOCKS_PER_YEAR: usize = 365 * 24 * 60 * 60 / BLOCK_TIME;
const TIP_APPLICATION: f64 = (LOCK_WINDOW * BLOCK_TIME) as f64;
lazy_static! {
static ref GAMMA: Gamma<f64> = Gamma::new(19.28, 1.0 / 1.61).unwrap();
static ref DISTRIBUTION: Mutex<Vec<u64>> = Mutex::new(Vec::with_capacity(3000000));
// TODO: Resolve safety of this in case a reorg occurs/the network changes
// TODO: Update this when scanning a block, as possible
#[cfg(feature = "cache-distribution")]
static DISTRIBUTION_CELL: OnceLock<Mutex<Vec<u64>>> = OnceLock::new();
#[cfg(feature = "cache-distribution")]
#[allow(non_snake_case)]
fn DISTRIBUTION() -> &'static Mutex<Vec<u64>> {
DISTRIBUTION_CELL.get_or_init(|| Mutex::new(Vec::with_capacity(3000000)))
}
#[allow(clippy::too_many_arguments)]
async fn select_n<R: RngCore + CryptoRng>(
async fn select_n<'a, R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
distribution: &[u64],
height: usize,
high: u64,
per_second: f64,
@@ -37,6 +52,12 @@ async fn select_n<R: RngCore + CryptoRng>(
used: &mut HashSet<u64>,
count: usize,
) -> Result<Vec<(u64, [EdwardsPoint; 2])>, RpcError> {
if height >= rpc.get_height().await? {
// TODO: Don't use InternalError for the caller's failure
Err(RpcError::InternalError("decoys being requested from too young blocks"))?;
}
#[cfg(test)]
let mut iters = 0;
let mut confirmed = Vec::with_capacity(count);
// Retries on failure. Retries are obvious as decoys, yet should be minimal
@@ -44,14 +65,17 @@ async fn select_n<R: RngCore + CryptoRng>(
let remaining = count - confirmed.len();
let mut candidates = Vec::with_capacity(remaining);
while candidates.len() != remaining {
iters += 1;
// This is cheap and on fresh chains, thousands of rounds may be needed
if iters == 10000 {
Err(RpcError::InternalError("not enough decoy candidates"))?;
#[cfg(test)]
{
iters += 1;
// This is cheap, and on fresh chains a lot of rounds may be needed
if iters == 100 {
Err(RpcError::InternalError("hit decoy selection round limit"))?;
}
}
// Use a gamma distribution
let mut age = GAMMA.sample(rng).exp();
let mut age = Gamma::<f64>::new(19.28, 1.0 / 1.61).unwrap().sample(rng).exp();
if age > TIP_APPLICATION {
age -= TIP_APPLICATION;
} else {
@@ -61,7 +85,6 @@ async fn select_n<R: RngCore + CryptoRng>(
let o = (age * per_second) as u64;
if o < high {
let distribution = DISTRIBUTION.lock().unwrap();
let i = distribution.partition_point(|s| *s < (high - 1 - o));
let prev = i.saturating_sub(1);
let n = distribution[i] - distribution[prev];
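The distribution is cumulative (the total number of outputs created up to each block), so mapping a sampled global output index back to a block is a partition-point search. A hypothetical helper capturing the lookup above (the caller must keep the index below the final cumulative total, as the o < high check does):

fn locate(distribution: &[u64], output: u64) -> (usize, u64) {
  // First block whose cumulative count reaches the index
  let i = distribution.partition_point(|s| *s < output);
  let prev = i.saturating_sub(1);
  // How many outputs that block itself created
  (i, distribution[i] - distribution[prev])
}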
@@ -118,24 +141,39 @@ fn offset(ring: &[u64]) -> Vec<u64> {
/// Decoy data, containing the actual member as well (at index `i`).
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct Decoys {
pub i: u8,
pub offsets: Vec<u64>,
pub ring: Vec<[EdwardsPoint; 2]>,
pub(crate) i: u8,
pub(crate) offsets: Vec<u64>,
pub(crate) ring: Vec<[EdwardsPoint; 2]>,
}
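For context on offsets: ring members are serialized as relative offsets, the first index verbatim and each subsequent index as a delta from its predecessor, which is what fee_weight below measures. A sketch of the conversion, assuming sorted absolute indices:

fn to_relative_offsets(ring: &[u64]) -> Vec<u64> {
  let mut res = Vec::with_capacity(ring.len());
  for (i, &member) in ring.iter().enumerate() {
    res.push(if i == 0 { member } else { member - ring[i - 1] });
  }
  res
}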
#[allow(clippy::len_without_is_empty)]
impl Decoys {
pub fn fee_weight(offsets: &[u64]) -> usize {
varint_len(offsets.len()) + offsets.iter().map(|offset| varint_len(*offset)).sum::<usize>()
}
pub fn len(&self) -> usize {
self.offsets.len()
}
/// Select decoys using the same distribution as Monero.
pub async fn select<R: RngCore + CryptoRng>(
pub async fn select<R: RngCore + CryptoRng, RPC: RpcConnection>(
rng: &mut R,
rpc: &Rpc,
rpc: &Rpc<RPC>,
ring_len: usize,
height: usize,
inputs: &[SpendableOutput],
) -> Result<Vec<Decoys>, RpcError> {
#[cfg(feature = "cache-distribution")]
#[cfg(not(feature = "std"))]
let mut distribution = DISTRIBUTION().lock();
#[cfg(feature = "cache-distribution")]
#[cfg(feature = "std")]
let mut distribution = DISTRIBUTION().lock().await;
#[cfg(not(feature = "cache-distribution"))]
let mut distribution = vec![];
let decoy_count = ring_len - 1;
// Convert the inputs in question to the raw output data
@@ -146,29 +184,19 @@ impl Decoys {
outputs.push((real[real.len() - 1], [input.key(), input.commitment().calculate()]));
}
let distribution_len = {
let distribution = DISTRIBUTION.lock().unwrap();
distribution.len()
};
if distribution_len <= height {
let extension = rpc.get_output_distribution(distribution_len, height).await?;
DISTRIBUTION.lock().unwrap().extend(extension);
if distribution.len() <= height {
let extension = rpc.get_output_distribution(distribution.len(), height).await?;
distribution.extend(extension);
}
// If asked to use an older height than previously asked, truncate to ensure accuracy
// Should never happen, yet risks desyncing if it did
distribution.truncate(height + 1); // height is inclusive, and 0 is a valid height
let high;
let per_second;
{
let mut distribution = DISTRIBUTION.lock().unwrap();
// If asked to use an older height than previously asked, truncate to ensure accuracy
// Should never happen, yet risks desyncing if it did
distribution.truncate(height + 1); // height is inclusive, and 0 is a valid height
high = distribution[distribution.len() - 1];
per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let outputs = high - distribution[distribution.len().saturating_sub(blocks + 1)];
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let high = distribution[distribution.len() - 1];
let per_second = {
let blocks = distribution.len().min(BLOCKS_PER_YEAR);
let outputs = high - distribution[distribution.len().saturating_sub(blocks + 1)];
(outputs as f64) / ((blocks * BLOCK_TIME) as f64)
};
let mut used = HashSet::<u64>::new();
@@ -176,7 +204,7 @@ impl Decoys {
used.insert(o.0);
}
// TODO: Simply create a TX with less than the target amount
// TODO: Create a TX with less than the target amount, as allowed by the protocol
if (high - MATURITY) < u64::try_from(inputs.len() * ring_len).unwrap() {
Err(RpcError::InternalError("not enough decoy candidates"))?;
}
@@ -184,9 +212,18 @@ impl Decoys {
// Select all decoys for this transaction, assuming we generate a sane transaction
// We should almost never naturally generate an insane transaction, hence why this doesn't
// bother with an overage
let mut decoys =
select_n(rng, rpc, height, high, per_second, &real, &mut used, inputs.len() * decoy_count)
.await?;
let mut decoys = select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&real,
&mut used,
inputs.len() * decoy_count,
)
.await?;
real.zeroize();
let mut res = Vec::with_capacity(inputs.len());
@@ -224,8 +261,18 @@ impl Decoys {
// Select new outputs until we have a full sized ring again
ring.extend(
select_n(rng, rpc, height, high, per_second, &[], &mut used, ring_len - ring.len())
.await?,
select_n(
rng,
rpc,
&distribution,
height,
high,
per_second,
&[],
&mut used,
ring_len - ring.len(),
)
.await?,
);
ring.sort_by(|a, b| a.0.cmp(&b.0));
}

View File

@@ -1,5 +1,8 @@
use core::ops::BitXor;
use std::io::{self, Read, Write};
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
@@ -12,8 +15,16 @@ use crate::serialize::{
pub const MAX_TX_EXTRA_NONCE_SIZE: usize = 255;
pub const PAYMENT_ID_MARKER: u8 = 0;
pub const ENCRYPTED_PAYMENT_ID_MARKER: u8 = 1;
// Used as it's the highest value not interpretable as a continued VarInt
pub const ARBITRARY_DATA_MARKER: u8 = 127;
// 1 byte is used for the marker
pub const MAX_ARBITRARY_DATA_SIZE: usize = MAX_TX_EXTRA_NONCE_SIZE - 1;
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum PaymentId {
pub enum PaymentId {
Unencrypted([u8; 32]),
Encrypted([u8; 8]),
}
@@ -23,6 +34,7 @@ impl BitXor<[u8; 8]> for PaymentId {
fn bitxor(self, bytes: [u8; 8]) -> PaymentId {
match self {
// Don't perform the xor since this isn't intended to be encrypted with xor
PaymentId::Unencrypted(_) => self,
PaymentId::Encrypted(id) => {
PaymentId::Encrypted((u64::from_le_bytes(id) ^ u64::from_le_bytes(bytes)).to_le_bytes())
@@ -32,32 +44,32 @@ impl BitXor<[u8; 8]> for PaymentId {
}
impl PaymentId {
pub(crate) fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
PaymentId::Unencrypted(id) => {
w.write_all(&[0])?;
w.write_all(&[PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
PaymentId::Encrypted(id) => {
w.write_all(&[1])?;
w.write_all(&[ENCRYPTED_PAYMENT_ID_MARKER])?;
w.write_all(id)?;
}
}
Ok(())
}
fn read<R: Read>(r: &mut R) -> io::Result<PaymentId> {
pub fn read<R: Read>(r: &mut R) -> io::Result<PaymentId> {
Ok(match read_byte(r)? {
0 => PaymentId::Unencrypted(read_bytes(r)?),
1 => PaymentId::Encrypted(read_bytes(r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown payment ID type"))?,
_ => Err(io::Error::other("unknown payment ID type"))?,
})
}
}
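Since encrypted payment IDs are masked with a plain 8-byte XOR, applying the same mask twice is an involution (and Unencrypted is deliberately left untouched); a small sketch:

let id = PaymentId::Encrypted([1, 2, 3, 4, 5, 6, 7, 8]);
let mask = [0xde, 0xad, 0xbe, 0xef, 0x01, 0x23, 0x45, 0x67];
assert_eq!((id ^ mask) ^ mask, id);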
// Doesn't bother with padding nor MinerGate
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) enum ExtraField {
pub enum ExtraField {
PublicKey(EdwardsPoint),
Nonce(Vec<u8>),
MergeMining(usize, [u8; 32]),
@@ -65,7 +77,7 @@ pub(crate) enum ExtraField {
}
impl ExtraField {
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
ExtraField::PublicKey(key) => {
w.write_all(&[1])?;
@@ -88,43 +100,43 @@ impl ExtraField {
Ok(())
}
fn read<R: Read>(r: &mut R) -> io::Result<ExtraField> {
pub fn read<R: Read>(r: &mut R) -> io::Result<ExtraField> {
Ok(match read_byte(r)? {
1 => ExtraField::PublicKey(read_point(r)?),
2 => ExtraField::Nonce({
let nonce = read_vec(read_byte, r)?;
if nonce.len() > MAX_TX_EXTRA_NONCE_SIZE {
Err(io::Error::new(io::ErrorKind::Other, "too long nonce"))?;
Err(io::Error::other("too long nonce"))?;
}
nonce
}),
3 => ExtraField::MergeMining(
usize::try_from(read_varint(r)?)
.map_err(|_| io::Error::new(io::ErrorKind::Other, "varint for height exceeds usize"))?,
read_bytes(r)?,
),
3 => ExtraField::MergeMining(read_varint(r)?, read_bytes(r)?),
4 => ExtraField::PublicKeys(read_vec(read_point, r)?),
_ => Err(io::Error::new(io::ErrorKind::Other, "unknown extra field"))?,
_ => Err(io::Error::other("unknown extra field"))?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct Extra(Vec<ExtraField>);
pub struct Extra(Vec<ExtraField>);
impl Extra {
pub(crate) fn keys(&self) -> Vec<EdwardsPoint> {
let mut keys = Vec::with_capacity(2);
pub fn keys(&self) -> Option<(EdwardsPoint, Option<Vec<EdwardsPoint>>)> {
let mut key = None;
let mut additional = None;
for field in &self.0 {
match field.clone() {
ExtraField::PublicKey(key) => keys.push(key),
ExtraField::PublicKeys(additional) => keys.extend(additional),
ExtraField::PublicKey(this_key) => key = key.or(Some(this_key)),
ExtraField::PublicKeys(these_additional) => {
additional = additional.or(Some(these_additional))
}
_ => (),
}
}
keys
// Don't return any keys if this was non-standard and didn't include the primary key
key.map(|key| (key, additional))
}
pub(crate) fn payment_id(&self) -> Option<PaymentId> {
pub fn payment_id(&self) -> Option<PaymentId> {
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
return PaymentId::read::<&[u8]>(&mut data.as_ref()).ok();
@@ -133,29 +145,23 @@ impl Extra {
None
}
pub(crate) fn data(&self) -> Vec<Vec<u8>> {
let mut first = true;
pub fn data(&self) -> Vec<Vec<u8>> {
let mut res = vec![];
for field in &self.0 {
if let ExtraField::Nonce(data) = field {
// Skip the first Nonce, which should be the payment ID
if first {
first = false;
continue;
if data[0] == ARBITRARY_DATA_MARKER {
res.push(data[1 ..].to_vec());
}
res.push(data.clone());
}
}
res
}
pub(crate) fn new(mut keys: Vec<EdwardsPoint>) -> Extra {
pub(crate) fn new(key: EdwardsPoint, additional: Vec<EdwardsPoint>) -> Extra {
let mut res = Extra(Vec::with_capacity(3));
if !keys.is_empty() {
res.push(ExtraField::PublicKey(keys[0]));
}
if keys.len() > 1 {
res.push(ExtraField::PublicKeys(keys.drain(1 ..).collect()));
res.push(ExtraField::PublicKey(key));
if !additional.is_empty() {
res.push(ExtraField::PublicKeys(additional));
}
res
}
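The new keys() shape pairs with new(): the primary key is mandatory while additional keys are optional. A crate-internal sketch (new is pub(crate); the basepoint stands in for a real transaction key):

use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;

let extra = Extra::new(ED25519_BASEPOINT_POINT, vec![]);
assert_eq!(extra.keys(), Some((ED25519_BASEPOINT_POINT, None)));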
@@ -165,25 +171,36 @@ impl Extra {
}
#[rustfmt::skip]
pub(crate) fn fee_weight(outputs: usize, data: &[Vec<u8>]) -> usize {
pub(crate) fn fee_weight(
outputs: usize,
additional: bool,
payment_id: bool,
data: &[Vec<u8>]
) -> usize {
// PublicKey, key
(1 + 32) +
// PublicKeys, length, additional keys
(1 + 1 + (outputs.saturating_sub(1) * 32)) +
(if additional { 1 + 1 + (outputs * 32) } else { 0 }) +
// PaymentId (Nonce), length, encrypted, ID
(1 + 1 + 1 + 8) +
// Nonce, length, data (if existent)
data.iter().map(|v| 1 + varint_len(v.len()) + v.len()).sum::<usize>()
(if payment_id { 1 + 1 + 1 + 8 } else { 0 }) +
// Nonce, length, ARBITRARY_DATA_MARKER, data
data.iter().map(|v| 1 + varint_len(1 + v.len()) + 1 + v.len()).sum::<usize>()
}
pub(crate) fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for field in &self.0 {
field.write(w)?;
}
Ok(())
}
pub(crate) fn read<R: Read>(r: &mut R) -> io::Result<Extra> {
pub fn serialize(&self) -> Vec<u8> {
let mut buf = vec![];
self.write(&mut buf).unwrap();
buf
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Extra> {
let mut res = Extra(vec![]);
let mut field;
while {
