890 Commits

Author SHA1 Message Date
Luke Parker
3ce90c55d9 Define a 512 KiB block size limit 2025-12-02 21:24:05 -05:00
Luke Parker
ff95c58341 Round out the runtime
Ensures the block's size limit is respected.

Defines a policy for weights. While I'm unsure I want to commit to this
forever, I do want to acknowledge it's valid and well-defined.

Cleans up the `serai-runtime` crate a bit with further modules in the `wasm`
folder.
2025-12-02 21:16:34 -05:00
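A minimal sketch of such a size check, with illustrative names (not the runtime's actual items):

```rust
// Hypothetical sketch of the stated policy: reject blocks whose encoded
// size exceeds 512 KiB. Constant/function names are illustrative.
const BLOCK_SIZE_LIMIT: usize = 512 * 1024;

fn check_block_size(encoded_block: &[u8]) -> Result<(), &'static str> {
  if encoded_block.len() > BLOCK_SIZE_LIMIT {
    return Err("block exceeds the 512 KiB size limit");
  }
  Ok(())
}
```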
Luke Parker
98044f93b1 Stub the in-instructions pallet 2025-12-02 16:46:10 -05:00
Luke Parker
eb04f873d5 Stub the genesis-liquidity pallet 2025-12-02 16:46:06 -05:00
Luke Parker
af74c318aa Add event emissions to the DEX pallet 2025-12-02 13:31:33 -05:00
Luke Parker
d711d8915f Update docs Ruby/gem versions 2025-12-02 13:20:17 -05:00
Luke Parker
3d549564a8 Misc tweaks in the style of the last commit
Notably removes the `kvdb-rocksdb` patch via updating the Substrate version
used to one which disables the `jemalloc` feature itself.

Simplifies the path of the built WASM file within the Dockerfile for consumers.
This also ensures that if the image is built, the path of the WASM file is as
expected (previously, this was unasserted).
2025-12-02 09:10:44 -05:00
Luke Parker
9a75f92864 Thoroughly update versions and methodology
For hash-pinned dependencies, adds comments documenting the associated
versions.

Adds a pin to `slither-analyzer` which was previously missing.

Updates to Monero 0.18.4.4.

`mimalloc` now has the correct option set when building for `musl`. A C++
compiler is no longer required in its Docker image.

The runtime's `Dockerfile` now symlinks a `libc.so` already present on the
image instead of creating one itself. It also builds the runtime within the
image to ensure it only happens once. The test to ensure the methodology is
reproducible has been updated accordingly: it no longer simply creates
containers from the image, but rebuilds the image entirely. This is also more
robust and arguably should have been done already.

The pin to the exact hash of the `patch-polkadot-sdk` repo in every
`Cargo.toml` has been removed. The lockfile already serves that role,
simplifying updating in the future.

The latest Rust nightly is adopted as well (superseding
https://github.com/serai-dex/serai/pull/697).

The `librocksdb-sys` patch is replaced with a `kvdb-rocksdb` patch, removing a
git dependency, thanks to https://github.com/paritytech/parity-common/pull/950.
2025-12-01 18:17:01 -05:00
Luke Parker
30ea9d9a06 Tidy the DEX pallet 2025-11-30 21:42:27 -05:00
Luke Parker
c45c973ca1 Remove musl-dev from runtime/Dockerfile
It wasn't pinned with a hash, only with a version tag. This ensures we are
deterministic to the image (specified by hash), `Cargo.lock`, and source code
alone.

Unfortunately, this was incredibly annoying to do; the exact process uncovered
a SIGSEGV in stable Rust. The extensive documentation details the solution.
Thankfully, it works now.
2025-11-27 03:37:37 -05:00
Luke Parker
6e37ac030d Add patch for alloy-eip2124 to an empty crate
Removes the `crc` dependency which had a unique author associated.
2025-11-26 17:01:03 -05:00
Luke Parker
e7c759c468 Improve substrate-median tests
The use of a dedicated test module ensures the API doesn't hide anything which
needs to be public. There's also now explicit tests for when the median is the
popped value.
2025-11-25 23:46:12 -05:00
Luke Parker
8ec0582237 Add module to calculate medians 2025-11-25 22:39:52 -05:00
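A hedged sketch of one possible shape for such a module, with illustrative names (the actual `substrate-median` API, including the popping referenced by the e7c759c468 tests above, may differ):

```rust
// A hedged sketch of a median over an ordered buffer; names and
// representation are illustrative, not substrate-median's actual API.
struct Median {
  sorted: Vec<u64>,
}

impl Median {
  fn new() -> Self {
    Median { sorted: vec![] }
  }

  // Insert while maintaining sorted order.
  fn insert(&mut self, value: u64) {
    let pos = self.sorted.partition_point(|existing| *existing < value);
    self.sorted.insert(pos, value);
  }

  // Remove a value, if present (the "popped value" case the tests cover).
  fn pop(&mut self, value: u64) -> Option<u64> {
    let pos = self.sorted.binary_search(&value).ok()?;
    Some(self.sorted.remove(pos))
  }

  // The median, taking the lower of the two middle values for even counts.
  fn median(&self) -> Option<u64> {
    self.sorted.get(self.sorted.len().checked_sub(1)? / 2).copied()
  }
}
```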
Luke Parker
8d8e8a7a77 Remove unnecessary MSRVs from patches/ 2025-11-25 17:05:30 -05:00
Luke Parker
028ec3cce0 borsh 1.6.0
Bumps the MSRV for some of our crates, which is fine.
2025-11-25 16:58:19 -05:00
Luke Parker
c49215805f Update Substrate 2025-11-25 00:06:54 -05:00
Luke Parker
2ffdd2a01d Update monero-oxide, Substrate 2025-11-22 11:49:25 -05:00
Luke Parker
e1e6e67d4a Ensure desired pruning behavior is held within the node 2025-11-18 21:46:58 -05:00
Luke Parker
6b19780c7b Remove historical state access from Serai
Resolves https://github.com/serai-dex/serai/issues/694.
2025-11-18 21:27:47 -05:00
Luke Parker
6100c3ca90 Restore patches/dalek-ff-group
Ensures `crypto/dalek-ff-group` is pure.
2025-11-16 19:04:57 -05:00
Luke Parker
fa0ed4b180 Add validator sets RPC functions necessary for the coordinator 2025-11-16 17:38:08 -05:00
Luke Parker
0ea16f9e01 doc_auto_cfg -> doc_cfg 2025-11-16 17:38:08 -05:00
Luke Parker
7a314baa9f Update all of serai-coordinator to compile with the new serai-client-serai 2025-11-16 17:38:03 -05:00
Luke Parker
9891ccade8 Add From<*::Call> for Call to serai-abi 2025-11-16 16:43:06 -05:00
Luke Parker
f1f166c168 Restore publish_transaction RPC to Serai 2025-11-16 16:43:06 -05:00
Luke Parker
df4aee2d59 Update serai-client to solely be an umbrella crate of the dedicated client libraries 2025-11-16 16:43:05 -05:00
Luke Parker
302a43653f Add helper to get TemporalSerai as of the latest finalized block 2025-11-16 16:42:36 -05:00
Luke Parker
d219b77bd0 Add bindings to the events from the coins module to serai-client-serai 2025-11-15 16:13:25 -05:00
Luke Parker
fce26eaee1 Update the block RPCs to return null when missing, not an error
Promotes clarity.
2025-11-15 16:12:39 -05:00
Luke Parker
3cfbd9add7 Update serai-coordinator-cosign to the new serai-client-serai 2025-11-15 16:10:58 -05:00
Luke Parker
609cf06393 Update SetDecided to include the validators
Necessary for light-client protocols to follow along with consensus. Arguably,
anyone handling GRANDPA's consensus could peek at this through the consensus
commit anyway, but we shouldn't defer to that.
2025-11-15 14:59:33 -05:00
Luke Parker
46b1f1b7ec Add test for the integrity of headers 2025-11-14 12:04:21 -05:00
Luke Parker
09113201e7 Fixes to the validator sets RPC 2025-11-14 11:19:02 -05:00
Luke Parker
556d294157 Add pallet-timestamp
Ensures the timestamp is set, is within expected parameters, and is correct in
relation to `pallet-babe`.
2025-11-14 09:59:32 -05:00
Luke Parker
82ca889ed3 Wrap Proposer so we can add the SeraiPreExecutionDigest (timestamps) 2025-11-14 08:02:54 -05:00
Luke Parker
cde0f753c2 Correct Serai header on genesis
Includes a couple misc fixes for the RPC as well.
2025-11-14 07:22:59 -05:00
Luke Parker
6ff0ef7aa6 Type the errors yielded by serai-node's RPC 2025-11-14 03:52:35 -05:00
Luke Parker
f9e3d1b142 Expand validator sets API with the rest of the events and some getters
We could've added a storage API and fetched fields that way, except we want
the storage to be opaque. That meant we needed to add the RPC routes to the
node, which also simplifies fetching these fields for other people writing RPC
code. The node could've then used the storage API, except a lot of the storage
in validator-sets is marked opaque and only to be read via functions, so
extending the runtime made the most sense.
2025-11-14 03:37:06 -05:00
Luke Parker
a793aa18ef Make ethereum-schnorr-contract no-std and no-alloc eligible 2025-11-13 05:48:18 -05:00
Luke Parker
5662beeb8a Patch from parity-bip39 back to bip39
Per https://github.com/michalkucharczyk/rust-bip39/tree/mku-2.0.1-release,
`parity-bip39` was a fork to publish a release of `bip39` with two specific PRs
merged. Not only have those PRs been merged, but `bip39` now accepts
`bitcoin_hashes 0.14` (https://github.com/rust-bitcoin/rust-bip39/pull/76),
making this a great time to reconcile (even though it does technically add a
git dependency until the new release is cut...).
2025-11-13 05:25:41 -05:00
Luke Parker
509bd58f4e Add method to fetch a block's events to the RPC 2025-11-13 05:13:55 -05:00
Luke Parker
367a5769e8 Update deny.toml 2025-11-13 00:37:51 -05:00
Luke Parker
cb6eb6430a Update version of substrate 2025-11-13 00:17:19 -05:00
Luke Parker
4f82e5912c Use Alpine to build the runtime
Smaller, works without issue.
2025-11-12 23:02:22 -05:00
Luke Parker
ac7af40f2e Remove rust-src as a component for WASM
It's unnecessary since `wasm32v1-none`.
2025-11-12 23:00:57 -05:00
Luke Parker
264bdd46ca Update serai-runtime to compile a minimum subset of itself for non-WASM targets
We only really care about it as a WASM blob, given `serai-abi`, so there's no
need to compile the entire runtime twice when it's expensive to build and the
native copy goes unused.
2025-11-12 23:00:00 -05:00
Luke Parker
c52f7634de Patch librocksdb-sys to never enable jemalloc, which conflicts with mimalloc
Allows us to update mimalloc and enable the newly added guard pages.

Conflict identified by @PlasmaPower.
2025-11-11 23:04:05 -05:00
Luke Parker
21eaa5793d Error when not running the dev network if an explicit WASM runtime isn't provided 2025-11-11 23:02:20 -05:00
Luke Parker
c744a80d80 Check the finalized function doesn't claim unfinalized blocks are in fact finalized 2025-11-11 23:02:20 -05:00
Luke Parker
a34f9f6164 Build and run the message queue over Alpine
We previously stopped doing so for stability reasons, but this _should_ be tried
again.
2025-11-11 23:02:20 -05:00
Luke Parker
353683cfd2 revm 33 2025-11-11 23:02:16 -05:00
Luke Parker
d4f77159c4 Rust 1.91.1 due to the regression re: wasm builds 2025-11-11 09:08:30 -05:00
Luke Parker
191bf4bdea Remove std feature from revm
It's unnecessary and bloats the tree decently.
2025-11-10 06:34:33 -05:00
Luke Parker
06a4824aba Move bitcoin-serai to core-json and feature-gate the RPC functionality 2025-11-10 05:31:13 -05:00
Luke Parker
e65a37e639 Update various versions 2025-11-10 04:02:02 -05:00
Luke Parker
4653ef4a61 Use dockertest for the newly added serai-client-serai test 2025-11-07 02:08:02 -05:00
Luke Parker
ce08fad931 Add initial basic tests for serai-client-serai 2025-11-06 20:12:37 -05:00
Luke Parker
1866bb7ae3 Begin work on the new RPC for the new node 2025-11-06 03:08:43 -05:00
Luke Parker
aff2065c31 Polkadot stable2509-1 2025-11-06 00:23:35 -05:00
Luke Parker
7300700108 Update misc versions 2025-11-05 19:11:33 -05:00
Luke Parker
31874ceeae serai-node which compiles and produces/finalizes blocks with --dev 2025-11-05 18:20:23 -05:00
Luke Parker
012b8fddae Get serai-node to compile again 2025-11-05 01:18:21 -05:00
Luke Parker
d2f58232c8 Tweak serai-coordinator-cosign to make it closer to compiling again
Adds `PartialOrd, Ord` derivations to some items in `serai-primitives` so they
may be used as keys within `borsh` maps.
2025-11-04 19:42:23 -05:00
Luke Parker
49794b6a75 Correct when spin::Lazy is exposed as std_shims::sync::LazyLock
It's intended to always be used, even on `std`, when `std::sync::LazyLock` is
not available.
2025-11-04 19:28:26 -05:00
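A sketch of the intended selection, with a hypothetical `native_lazy_lock` cfg standing in for however the crate actually detects toolchain support:

```rust
// Hypothetical sketch: `native_lazy_lock` is an assumed cfg, e.g. set by a
// build script when the toolchain provides `std::sync::LazyLock`.
#[cfg(native_lazy_lock)]
pub use std::sync::LazyLock;

// Otherwise, fall back to spin's Lazy, even on `std`, per this commit's
// description.
#[cfg(not(native_lazy_lock))]
pub use spin::Lazy as LazyLock;
```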
Luke Parker
973287d0a1 Smash serai-client so the processors don't need the entire lib to access their specific code
We previously controlled this with feature flags. It's just better to define
their own crates.
2025-11-04 19:27:53 -05:00
Luke Parker
1b499edfe1 Misc fixes so this compiles 2025-11-04 18:56:56 -05:00
Luke Parker
642848bd24 Bump revm 2025-11-04 13:31:46 -05:00
Luke Parker
f7fb78bdd6 Merge branch 'next' into next-polkadot-sdk 2025-11-04 13:24:00 -05:00
Luke Parker
9c47ef2658 Restore deny exception for kayabaNerve/elliptic-curves, accidentally dropped when merging develop 2025-11-04 13:18:59 -05:00
Luke Parker
e1b6b638c6 Merge branch 'develop' into next 2025-11-04 13:14:38 -05:00
Luke Parker
c24768f922 Fix borks from the latest nightly
The `cargo doc` build started to fail with the rolling of `doc_auto_cfg` into
`doc_cfg`, so now we don't build docs for deps (as we can't reasonably update
`generic-array` at this time).

`home` has been patched as we are able to, not as a direct requirement of this
PR.
2025-11-04 13:10:11 -05:00
Luke Parker
65613750e1 Merge branch 'next' into next-polkadot-sdk 2025-11-04 12:06:13 -05:00
Luke Parker
87ee879dea doc_auto_cfg -> doc_cfg 2025-11-04 10:20:17 -05:00
Luke Parker
b5603560e8 Merge branch 'develop' into next 2025-11-04 10:19:38 -05:00
Luke Parker
5818f1a41c Update nightly version 2025-11-04 10:05:08 -05:00
Luke Parker
1b781b4b57 Fix CI 2025-10-07 04:39:32 -04:00
Luke Parker
94faf098b6 Update nightly version 2025-10-05 18:44:04 -04:00
Luke Parker
03e45f73cd Merge branch 'develop' into next 2025-10-05 18:43:53 -04:00
Luke Parker
63f7e220c0 Update macOS labels in CI due to deprecation of macos-13 2025-10-05 10:59:40 -04:00
Luke Parker
7d49366373 Move develop to patch-polkadot-sdk (#678)
* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable are to `chain_spec.rs`, in order to
still use a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Correct how macOS 13 and 14 have different homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM only. This should restore the node's functioning
on non-WASM environments.

* Restore `clang` as a dependency due to the Debian Dockerfile as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` required accounts have a provider in order to even have their
nonce considered. This shims that by claiming every account has a provider at
the start of a block, if it signs a transaction.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next` however. It just has to work sufficiently for now.

* clippy
2025-10-05 10:58:08 -04:00
Luke Parker
56f6ba2dac Merge branch 'develop' into next-polkadot-sdk 2025-10-05 06:04:17 -04:00
Luke Parker
55ed33d2d1 Update to a version of Substrate which no longer cites our fork of substrate-bip39 2025-09-30 19:30:40 -04:00
Luke Parker
138a0e9b40 Resolve https://github.com/serai-dex/serai/issues/680 2025-09-30 01:05:17 -04:00
Luke Parker
4fc7263ac3 Make simple_request::Client generic to the executor
Part of https://github.com/serai-dex/serai/issues/682.

We don't remove the use of `tokio::sync::Mutex` now as `hyper` pulls in
`tokio::sync` anyways, so there's no point in replacing it. This doesn't yet
solve TLS for non-`tokio` `Client`s.
2025-09-30 01:05:12 -04:00
Luke Parker
f27fd59fa6 Update documentation within modular-frost
Resolves https://github.com/serai-dex/serai/issues/675.
2025-09-30 00:27:29 -04:00
Luke Parker
08f6af8bb9 Remove borsh from dkg
It pulls in a lot of bespoke dependencies for little utility directly present.

Moves the necessary code into the processor.
2025-09-27 02:07:18 -04:00
Luke Parker
3512b3832d Don't actually define a pallet for pallet-session
We need its `Config`, not it as a pallet. As it has no storage, no calls, no
events, this is fine.
2025-09-26 23:08:38 -04:00
Luke Parker
1164f92ea1 Update usage of now-associated const in processor/key-gen 2025-09-26 22:48:52 -04:00
Luke Parker
0a3ead0e19 Add patches to remove the unused optional dependencies tracked in tree
Also performs the usual `cargo update`.
2025-09-26 22:47:47 -04:00
Luke Parker
437f0e9a93 Remove serdect by removing the unnecessary "alloc" feature from crypto-bigint
This only works for the legacy `crypto-bigint` and downstream consumers who
don't have the modern `serdect` pulled in for independent reasons.
2025-09-26 20:58:45 -04:00
Luke Parker
cc5d38f1ce dkg-evrf Security Proofs (#681)
* Add audit statement for `dkg-evrf`

This doesn't cover the implementation, solely the academia and background.

Also moves the existing audit of the `crypto` folder for organizational
reasons.

* Add files via upload
2025-09-26 11:20:48 -04:00
Luke Parker
0ce025e0c2 Update build-dependencies CI action 2025-09-21 15:40:58 -04:00
Luke Parker
ea66cd0d1a Update build-dependencies CI action 2025-09-21 15:40:15 -04:00
Luke Parker
8b32fba458 Minor cargo update 2025-09-21 15:40:05 -04:00
Luke Parker
e63acf3f67 Restore a runtime which compiles
Adds BABE, GRANDPA, to the runtime definition and a few stubs for not yet
implemented interfaces.
2025-09-21 13:16:43 -04:00
Luke Parker
d373d2a4c9 Restore AllowMint to serai-validator-sets-pallet and reorganize TODOs 2025-09-20 04:45:28 -04:00
Luke Parker
cbf998ff30 Restore report_slashes
This does not yet handle the `SlashReport`. It solely handles the routing for
it.
2025-09-20 04:16:01 -04:00
Luke Parker
ef07253a27 Restore the event to serai-validator-sets-pallet 2025-09-20 03:42:24 -04:00
Luke Parker
ffae6753ec Restore the set_keys call 2025-09-20 03:04:26 -04:00
Luke Parker
a04215bc13 Remove commented-out slashing code from serai-validator-sets-pallet
Deferred to https://github.com/serai-dex/serai/issues/657.
2025-09-20 03:04:19 -04:00
Luke Parker
28aea8a442 Incorporate a check that a validator won't prevent ever not having a single point of failure 2025-09-20 01:58:39 -04:00
Luke Parker
7b46477ca0 Add explicit hook for deciding whether to include the genesis validators 2025-09-20 01:57:55 -04:00
Luke Parker
e62b62ddfb Restore usage of pallet-grandpa to serai-validator-sets-pallet 2025-09-20 01:36:11 -04:00
Luke Parker
a2d8d0fd13 Restore integration with pallet-babe to serai-validator-sets-pallet 2025-09-20 01:23:02 -04:00
Luke Parker
b2b36b17c4 Restore GenesisConfig to the validator sets pallet 2025-09-20 00:06:19 -04:00
Luke Parker
9de8394efa Emit events within the signals pallet 2025-09-19 22:44:29 -04:00
Luke Parker
3cb9432daa Have the coins pallet emit events via serai_core_pallet
`serai_core_pallet` solely defines an accumulator for the events. We use the
traditional `frame_system::Events` to store them for now and enable retrieval.
2025-09-19 22:18:55 -04:00
Luke Parker
3f5150b3fa Properly define the core pallet instead of placing it within the runtime 2025-09-19 19:05:47 -04:00
Luke Parker
d74b00b9e4 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:09:22 -04:00
Luke Parker
224cf4ea21 Update monero-oxide to the branch with the new RPC
See https://github.com/monero-oxide/monero-oxide/pull/66.

Allows us to remove the shim `simple-request 0.1` we had to define as we now
have `simple-request 0.2` in tree.
2025-09-18 19:00:10 -04:00
Luke Parker
3955f92cc2 Merge branch 'next' into next-polkadot-sdk 2025-09-18 18:19:14 -04:00
Luke Parker
a9b1e5293c Support webpki-roots as a fallback in simple-request 2025-09-18 18:15:24 -04:00
Luke Parker
80009ab67f Tidy unused import 2025-09-18 17:49:37 -04:00
Luke Parker
df9fda2971 Fixes from errors in cherry-picked commits 2025-09-18 17:49:32 -04:00
Luke Parker
ca8afb83a1 simple-request 0.2.0 2025-09-18 17:41:31 -04:00
Luke Parker
18a9cf2535 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:41:31 -04:00
Luke Parker
10c126ad92 Misc updates 2025-09-18 17:41:25 -04:00
Luke Parker
19305aebc9 Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-18 17:06:57 -04:00
Luke Parker
be68e27551 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no benefit other than the fact
that when `alloc` _is_ enabled, it'll use the multi-scalar multiplication
algorithms.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime` yet followed
this same premise. Now, instead of callers writing these shims, it's within
`multiexp`.
2025-09-18 17:06:42 -04:00
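A hedged sketch of the `core`-only fallback (generic bounds approximate; not multiexp's actual code):

```rust
use group::Group;

// Without `alloc`, a serial sum of scalar multiplications; with `alloc`,
// multi-scalar multiplication algorithms would be used instead.
fn multiexp_vartime_serial<G: Group>(pairs: &[(G::Scalar, G)]) -> G {
  pairs
    .iter()
    .fold(G::identity(), |sum, (scalar, point)| sum + (*point * *scalar))
}
```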
Luke Parker
d6d96fe8ff Correct std-shims feature flagging 2025-09-18 17:06:31 -04:00
Luke Parker
95909d83a4 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-18 17:06:05 -04:00
Luke Parker
3bd48974f3 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-18 17:05:19 -04:00
Luke Parker
29093715e3 Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-18 17:05:07 -04:00
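A minimal sketch of the blanket impl described, over an assumed std-shims-style `Read` trait (the actual trait and error type live in std-shims):

```rust
// Stand-in for `std_shims::io::Error`.
pub struct Error;

// A no-std polyfill of `std::io::Read`, per std-shims' premise.
pub trait Read {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error>;
}

// The blanket impl this commit adds, matching std::io, so `&mut R` can be
// used wherever `R: Read` is expected.
impl<R: Read> Read for &mut R {
  fn read(&mut self, buf: &mut [u8]) -> Result<usize, Error> {
    (**self).read(buf)
  }
}
```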
Luke Parker
87b4dfc8f3 Expand std_shims::prelude to better match std::prelude 2025-09-18 17:04:54 -04:00
Luke Parker
4db78b1787 Add the ability to bound the response's size limit to simple-request 2025-09-18 17:04:41 -04:00
Luke Parker
02a5f15535 Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-18 17:04:10 -04:00
Luke Parker
a1ef18a039 Have simple-request return an error upon failing to find the system's root certificates 2025-09-18 17:03:16 -04:00
Luke Parker
bec806230a Misc updates 2025-09-18 16:25:33 -04:00
Luke Parker
8bafeab5b3 Tidy serai-signals-pallet
Adds `serai-validator-sets-pallet` and `serai-signals-pallet` to the runtime.
2025-09-16 08:45:02 -04:00
Luke Parker
3722df7326 Introduce KeyShares struct to represent the amount of key shares
Improvements, bug fixes associated.
2025-09-16 08:45:02 -04:00
Luke Parker
ddb8e1398e Finally make modular-frost work with alloc alone
Carries the update to `frost-schnorrkel` and `bitcoin-serai`.
2025-09-16 08:45:02 -04:00
Luke Parker
2be69b23b1 Tweak multiexp to compile on core
On `core`, it'll use a serial implementation of no benefit other than the fact
that when `alloc` _is_ enabled, it'll use the multi-scalar multiplication
algorithms.

`schnorr-signatures` was previously tweaked to include a shim for
`SchnorrSignature::verify` which didn't use `multiexp_vartime` yet followed
this same premise. Now, instead of callers writing these shims, it's within
`multiexp`.
2025-09-16 08:45:02 -04:00
Luke Parker
a82ccadbb0 Correct std-shims feature flagging 2025-09-16 08:45:02 -04:00
Luke Parker
1ff2934927 cargo update 2025-09-16 08:44:54 -04:00
Luke Parker
cd4ffa862f Remove coins, validator-sets use of Substrate's event system
We've defined our own.
2025-09-15 21:32:20 -04:00
Luke Parker
c0a4d85ae6 Restore claim_deallocation call to validator-sets pallet 2025-09-15 21:32:01 -04:00
Luke Parker
55e845fe12 Expose std_shims::io on core
The `io::Write` trait is somewhat worthless, being implemented for nothing, yet
`Read` remains fully functional. This also allows using its polyfills _without_
requiring `alloc`.

Opportunity taken to make `schnorr-signatures` not require `alloc`.

This will require a version bump before being published due to newly requiring
the `alloc` feature be specified to maintain pre-existing behavior.

Enables resolving https://github.com/monero-oxide/monero-oxide/issues/48.
2025-09-15 21:24:10 -04:00
Luke Parker
5ea087d177 Add missing alloc feature to multiexp's use of zeroize
Fixes building `multiexp` without default features, without separately
specifying `zeroize` and adding the `alloc` feature.
2025-09-14 08:55:40 -04:00
Luke Parker
dd7dc0c1dc Add impl<R: Read> Read for &mut R to std_shims
Increases parity with `std::io`.
2025-09-12 18:26:27 -04:00
Luke Parker
c83fbb3e44 Expand std_shims::prelude to better match std::prelude 2025-09-12 18:24:56 -04:00
Luke Parker
befbbbfb84 Add the ability to bound the response's size limit to simple-request 2025-09-11 17:24:47 -04:00
Luke Parker
d0f497dc68 Latest patch-polkadot-sdk 2025-09-10 10:02:24 -04:00
Luke Parker
1b755a5d48 patch-polkadot-sdk enabling libp2p 0.56 2025-09-06 17:41:49 -04:00
Luke Parker
e5efcd56ba Make the MSRV lint more robust
The prior version would fail if the last entry in the final array was not
originally the last entry.
2025-09-06 14:43:21 -04:00
Luke Parker
5d60b3c2ae Update parity-db in serai-db
This synchronizes with an update to `patch-polkadot-sdk`.
2025-09-06 14:28:42 -04:00
Luke Parker
ae923b24ff Update `patch-polkadot-sdk` 2025-09-06 14:04:55 -04:00
Allows using `libp2p 0.55`.
2025-09-06 14:04:55 -04:00
Luke Parker
d304cd97e1 Merge branch 'next' into next-polkadot-sdk 2025-09-06 04:26:10 -04:00
Luke Parker
2b56dcdf3f Update patch-polkadot-sdk for bug fixes, removal of is-terminal
Adds a deny entry for `is-terminal` to stop it from secretly reappearing.

Restores the `is-terminal` patch for `is_terminal_polyfill` to have one less
external dependency.
2025-09-06 04:25:21 -04:00
Luke Parker
865e351f96 Bitcoin 29.1
Benefits from `v2transport`, `mempoolfullrbf`, and potentially TRUC.
2025-09-06 04:09:39 -04:00
Luke Parker
ea275df26c Re-export curve25519_dalek::RistrettoPoint for dalek_ff_group::RistrettoPoint
Sacrifices a `Hash` implementation (inefficient and already shouldn't be used)
we appear to have only used in two files (which have been patched).
2025-09-05 17:40:44 -04:00
Luke Parker
90804c4c30 Update deny.toml 2025-09-05 14:08:04 -04:00
Luke Parker
46caca2f51 Update patch-polkadot-sdk to remove scale_info 2025-09-05 14:07:52 -04:00
Luke Parker
2077e485bb Add borsh impls for SignedEmbeddedEllipticCurveKeys 2025-09-05 07:21:07 -04:00
Luke Parker
28dbef8a1c Update to the latest patch-polkadot-sdk
Removes several dependencies.
2025-09-05 06:57:30 -04:00
Luke Parker
2216ade8c4 Tweak how prime-field normalizes to the even square root 2025-09-04 20:48:15 -04:00
Luke Parker
3541197aa5 Merge branch 'next' into next-polkadot-sdk 2025-09-03 16:44:26 -04:00
Luke Parker
5265cc69de hex-literal 1 2025-09-03 13:56:48 -04:00
Luke Parker
a141deaf36 Smash the singular Ciphersuite trait into multiple
This helps identify where the various functionalities are used, or rather, not
used. The `Ciphersuite` trait present in `patches/ciphersuite`, facilitating
the entire FCMP++ tree, only requires the markers _and_ canonical point
decoding. I've opened a PR to upstream such a trait into `group`
(https://github.com/zkcrypto/group/pull/68).

`WrappedGroup` is still justified for as long as `Group::generator` exists.
Moving `::generator()` to its own trait, on an independent structure (upstream)
would be massively appreciated. @tarcieri also wanted to update from
`fn generator()` to `const GENERATOR`, which would encourage further discussion
on https://github.com/zkcrypto/group/issues/32 and
https://github.com/zkcrypto/group/issues/45, which have been stagnant.

The `Id` trait is occasionally used yet really should be first off the chopping
block.

Finally, `WithPreferredHash` is only actually used around a third of the time,
which more than justifies it being a separate trait.

---

Updates `dalek_ff_group::Scalar` to directly re-export
`curve25519_dalek::Scalar`, as we can without issue.
`dalek_ff_group::RistrettoPoint` also could be replaced with an export of
`curve25519_dalek::RistrettoPoint`, yet the coordinator relies on how we
implemented `Hash` on it for the hell of it, so it isn't worth it at this time.
`dalek_ff_group::EdwardsPoint` can't be replaced with a re-export of
`curve25519_dalek::SubgroupPoint` as it doesn't implement the `zeroize` and
`subtle` traits within a released, non-yanked version. Relevant to
https://github.com/serai-dex/serai/issues/201 and
https://github.com/dalek-cryptography/curve25519-dalek/issues/811#issuecomment-3247732746.

Also updates the `Ristretto` ciphersuite to prefer `Blake2b-512` over
`SHA2-512`. In order to maintain compliance with FROST's IETF standard,
`modular-frost` defines its own ciphersuite for Ristretto which still uses
`SHA2-512`.
2025-09-03 13:50:20 -04:00
Luke Parker
215e41fdb6 Remove deprecated APIs from dalek-ff-group
For backwards compatibility, we now use as a patch (as prior done with
`ciphersuite`).

Removes `crypto-bigint 0.5` from the tree and shapes up what the next release
will look like.
2025-09-03 07:05:50 -04:00
Luke Parker
41c34d7f11 Remove crypto-bigint from the public API of prime-field 2025-09-03 07:05:45 -04:00
Luke Parker
974bc82387 Remove unnecessary to_string for clone 2025-09-03 06:11:32 -04:00
Luke Parker
47ef24a7cc Remove unused patch for parking_lot_core 2025-09-03 06:11:32 -04:00
Luke Parker
a2209dd6ff Misc clippy fixes 2025-09-03 06:10:54 -04:00
Luke Parker
2032cf355f Expose coins::Pallet::transfer_internal as transfer_fn
It is safe to call and assumes no preconditions.
2025-09-03 00:48:17 -04:00
Luke Parker
fe41b09fd4 Properly handle the error in validator-sets 2025-09-02 11:07:45 -04:00
Luke Parker
74bad049a7 Add abstraction for the embedded elliptic curve keys
It's minimal but still pleasant.
2025-09-02 10:42:06 -04:00
Luke Parker
72fefb3d85 Strongly type EmbeddedEllipticCurveKeys
Adds a signed variant to validate knowledge and ownership.

Add SCALE derivations for `EmbeddedEllipticCurveKeys`
2025-09-02 10:42:02 -04:00
Luke Parker
200c1530a4 WIP changes to validator-sets
Actually use the added `Allocations` abstraction

Start using the sessions API in the validator-sets pallet

Get `substrate/validator-sets` approximately compiling
2025-09-02 10:41:58 -04:00
Luke Parker
5736b87b57 Remove final references to scale in coordinator/processor
Slight tweaks to processor
2025-09-02 10:41:55 -04:00
Luke Parker
ada94e8c5d Get all processors to compile again
Requires splitting `serai-cosign` into `serai-cosign` and `serai-cosign-types`
so the processors don't require `serai-client/serai` (not correct yet).
2025-09-02 02:17:10 -04:00
Luke Parker
75240ed327 Update serai-message-queue to the new serai-primitives 2025-09-02 02:17:10 -04:00
Luke Parker
6177cf5c07 Have serai-runtime compile again 2025-09-02 02:17:10 -04:00
Luke Parker
0d38dc96b6 Use serai-primitives, not serai-client, when possible in coordinator/*
Also updates `serai-coordinator-tributary` to prefer `borsh` to SCALE.
2025-09-02 02:17:10 -04:00
Luke Parker
e8094523ff Use borsh instead of SCALE within tendermint-machine, tributary-sdk
Not only does this follow our general practice, but the latest SCALE has a
possibly-lossy truncation in its current implementation for `enum`s, which I'd
like to avoid without simply silencing it.
2025-09-02 02:17:09 -04:00
Luke Parker
53a64bc7e2 Update serai-abi, and dependencies, to patch-polkadot-sdk 2025-09-02 02:17:09 -04:00
Luke Parker
c0e48867e1 Merge branch 'develop' into next 2025-09-01 21:22:28 -04:00
Luke Parker
0066b94d38 Update substrate-wasm-builder from serai/polkadot-sdk to serai/patch-polkadot-sdk
Steps towards allowing us to delete the `serai/polkadot-sdk` repository.
2025-09-01 21:07:11 -04:00
Luke Parker
7d54c02ec6 Update to latest nightly
Replaces #671 due to a lint being triggered.
2025-09-01 16:48:34 -04:00
Mohan
568324f631 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:58:59 -04:00
Mohan
2a02a8dc59 fix(spec): svm version mismatch in docs; document foundryup (#665)
* fix(spec): Change svm version in docs to 0.8.26

* fix(spec): add instructions for using foundryup
2025-09-01 15:57:19 -04:00
Luke Parker
eaa9a0e5a6 Pin actions in the pages workflow 2025-09-01 15:55:17 -04:00
Luke Parker
251996c1b0 Use solc 0.8.26
`next` already does, and it's annoying to have to consistently switch between
the two branches.
2025-09-01 15:55:06 -04:00
Luke Parker
98b9cc82a7 Fix Some(_) which should be Ok(_) 2025-09-01 15:42:47 -04:00
Luke Parker
3c6e889732 Update Cargo.lock after rebase 2025-08-30 19:36:46 -04:00
Luke Parker
354efc0192 Add deallocate function to validator-sets session abstraction 2025-08-30 18:34:20 -04:00
Luke Parker
e20058feae Add a Sessions abstraction for validator-sets storage 2025-08-30 18:34:20 -04:00
Luke Parker
09f0714894 Add a dedicated Allocations struct for managing validator set allocations
Part of the DB abstraction necessary for this spaghetti.
2025-08-30 18:34:15 -04:00
Luke Parker
d3d539553c Restore the coins pallet to the runtime 2025-08-30 18:32:26 -04:00
Luke Parker
b08ae8e6a7 Add a non-canonical SCALE derivations feature
Enables representing IUMT within `StorageValues`. Applied to a variety of
values.

Fixes a bug where `Some([0; 32])` would be considered a valid block anchor.
2025-08-30 18:32:21 -04:00
Luke Parker
35db2924b4 Populate UnbalancedMerkleTrees in headers 2025-08-30 18:32:20 -04:00
Luke Parker
bfff823bf7 Add an UnbalancedMerkleTree primitive
The reasoning for it is documented with itself. The plan is to use it within
our header for committing to the DAG (allowing one header per epoch, yet
logarithmic proofs for any header within the epoch), the transactions
commitment (allowing logarithmic proofs of a transaction within a block,
without padding), and the events commitment (allowing logarithmic proofs of
unique events within a block, despite events not having a unique ID inherent).

This also defines transaction hashes and performs the necessary modifications
for transactions to be unique.
2025-08-30 18:32:16 -04:00
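A hedged sketch of computing such an unbalanced root (the combine function is a placeholder and the actual primitive's layout may differ):

```rust
// A hedged sketch of an unbalanced Merkle root. The combine function is a
// placeholder, NOT Serai's actual hash. An odd trailing node is carried up
// unhashed rather than padded, keeping proofs logarithmic without requiring
// a power-of-two leaf count.
fn hash_branch(left: &[u8; 32], right: &[u8; 32]) -> [u8; 32] {
  let mut res = [0; 32];
  for i in 0 .. 32 {
    res[i] = left[i].rotate_left(1) ^ right[i];
  }
  res
}

fn unbalanced_merkle_root(mut layer: Vec<[u8; 32]>) -> Option<[u8; 32]> {
  if layer.is_empty() {
    return None;
  }
  while layer.len() > 1 {
    let mut next = Vec::with_capacity(layer.len().div_ceil(2));
    let mut pairs = layer.chunks_exact(2);
    for pair in &mut pairs {
      next.push(hash_branch(&pair[0], &pair[1]));
    }
    // Carry an odd trailing node up as-is instead of padding.
    if let [trailing] = pairs.remainder() {
      next.push(*trailing);
    }
    layer = next;
  }
  Some(layer[0])
}
```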
Luke Parker
352af85498 Use borsh entirely in create_db 2025-08-30 18:32:07 -04:00
Luke Parker
ecad89b269 Remove now-consolidated primitives crates 2025-08-30 18:32:06 -04:00
Luke Parker
48f5ed71d7 Skeleton runtime with new types 2025-08-30 18:30:38 -04:00
Luke Parker
ed9cbdd8e0 Have apply return Ok even if calls failed
This ensures fees are paid, and block building isn't interrupted, even for TXs
which error.
2025-08-30 18:27:23 -04:00
Luke Parker
0ac11defcc Serialize BoundedVec not with a u32 length, but the minimum-viable uN where N%8==0
This does break borsh's definition of a Vec EXCEPT if the BoundedVec is
considered an enum. For sufficiently low bounds, this is viable, though it
requires automated code generation to be sane.
2025-08-30 18:27:23 -04:00
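A hedged sketch of the described length prefix (the function shape is illustrative, not Serai's actual serialization code):

```rust
// Encode a BoundedVec's length with the minimum-viable uN (N % 8 == 0) able
// to represent the bound, rather than borsh's usual u32, little-endian as
// borsh uses. Illustrative sketch only.
fn serialize_bounded_vec<const BOUND: usize>(elems: &[u8], out: &mut Vec<u8>) {
  assert!(elems.len() <= BOUND);
  let len = elems.len();
  if BOUND <= usize::from(u8::MAX) {
    out.push(len as u8);
  } else if BOUND <= usize::from(u16::MAX) {
    out.extend(&(len as u16).to_le_bytes());
  } else {
    out.extend(&(len as u32).to_le_bytes());
  }
  out.extend(elems);
}
```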
Luke Parker
24e89316d5 Correct distinction/flow of check/validate/apply 2025-08-30 18:27:23 -04:00
Luke Parker
3f03dac050 Make transaction an enum of Unsigned, Signed 2025-08-30 18:27:23 -04:00
Luke Parker
820b710928 Remove RuntimeCall from Transaction
I believe this was originally here as we needed to return a reference, not an
owned instance, so this caching enabled returning a reference? Regardless, it
isn't valuable now.
2025-08-30 18:27:23 -04:00
Luke Parker
88c7ae3e7d Add traits necessary for serai_abi::Transaction to be usable in-runtime 2025-08-30 18:27:22 -04:00
Luke Parker
dd5e43760d Add the UNIX timestamp (in milliseconds) to the block
This is read from the BABE pre-digest when converting from a SubstrateHeader.
This causes the genesis block to have time 0 and all blocks produced with BABE
to have a time of the slot time. While the slot time is in 6-second intervals
(due to our target block time), defining in milliseconds preserves the ABI for
long-term goals (sub-second blocks).

Usage of the slot time deduplicates this field with BABE, and leaves the only
possible manipulation to propose during a slot or to not propose during a slot.

The actual reason this was implemented this way is because the Header trait is
overly restrictive and doesn't allow definition with new fields. Even if we
wanted to express the timestamp within the SubstrateHeader, we can't without
replacing Header::new and making a variety of changes to the polkadot-sdk
accordingly. Those aren't worth it at this moment compared to the solution
implemented.
2025-08-30 18:27:09 -04:00
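A sketch of the described derivation under the stated 6-second target block time (constant and function names are illustrative):

```rust
// Assumed 6-second slot duration, per the stated target block time.
const SLOT_DURATION_MILLIS: u64 = 6_000;

// BABE slots are measured from the UNIX epoch, so the slot time in
// milliseconds is the slot number times the slot duration. The genesis
// block, lacking a BABE pre-digest, gets time 0.
fn block_time_millis(babe_slot: Option<u64>) -> u64 {
  babe_slot.map_or(0, |slot| slot * SLOT_DURATION_MILLIS)
}
```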
Luke Parker
776e417fd2 Redo primitives, abi
Consolidates all primitives into a single crate. We didn't benefit from its
fragmentation. I'm hesitant to say the new internal organization is better (it
may be just as clunky), but it's at least in a single crate (not spread out
over micro-crates).

The ABI is the most distinct. We now entirely own it. Block header hashes don't
directly commit to any BABE data (avoiding potentially ~4 KB headers upon
session changes), and are hashed as borsh (a more widely used codec than
SCALE). There are still Substrate variants, using SCALE and with the BABE data,
but they're prunable from a protocol design perspective.

Defines a transaction as a Vec of Calls, allowing atomic operations.
2025-08-30 18:26:37 -04:00
Luke Parker
2f8ce15a92 Update deny, rust-src component 2025-08-30 18:25:02 -04:00
Luke Parker
af56304676 Update the git tags
Does no actual migration work. This allows establishing the difference in
dependencies between substrate and polkadot-sdk/substrate.
2025-08-30 18:23:49 -04:00
Luke Parker
62a2c4f20e Update nightly version 2025-08-30 18:22:48 -04:00
Luke Parker
c69841710a Remove unnecessary to_string for clone 2025-08-30 18:08:08 -04:00
Luke Parker
3158590675 Remove unused patch for parking_lot_core 2025-08-30 16:20:29 -04:00
Luke Parker
263d75d380 Pin actions in the pages workflow 2025-08-30 15:48:56 -04:00
Luke Parker
030185c7fc Pin the nightly version used within the no-std tests 2025-08-30 15:40:36 -04:00
Luke Parker
e2dc5db7aa Various feature tweaks and updates 2025-08-29 06:42:37 -04:00
Luke Parker
90bc364f9f Replace Ciphersuite::hash_to_F
The prior-present `Ciphersuite::hash_to_F` was a sin. Implementations took a
DST, yet were not required to securely handle it. It was also biased towards
the requirements of `modular-frost`, as `ciphersuite` was originally written
all those years ago, when `modular-frost` had needs exceeding what `ff` and
`group` satisfied.

Now, the hash is bound to produce an output which can be converted to a scalar
with `ff::FromUniformBytes`. A new `hash_to_F` accepts a single argument, the
value to hash (removing the potential to insecurely handle the DST by removing
the DST entirely). Due to `digest` yielding a `GenericArray`, yet
`FromUniformBytes` taking a `const usize`, the `ciphersuite` crate now defines
a `FromUniformBytes` trait taking an array (then implemented for all satisfiers
of `ff::FromUniformBytes`). In order to get the array type from the
`GenericArray` (the output of the hash), `digest` is updated to the `0.11`
release candidate, which moves to `flexible-array` and solves that problem.

The existing, specific `hash_to_F` functions have been moved to `modular-frost`
as necessary.

`flexible-array` itself is patched to a fork due to
https://github.com/RustCrypto/hybrid-array/issues/131.
2025-08-29 05:21:43 -04:00
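A hedged approximation of the reworked trait shape (names are illustrative, not the crate's exact items):

```rust
// Illustrative stand-in for the crate's FromUniformBytes trait, taking an
// array rather than a GenericArray.
pub trait FromUniformBytes<const N: usize>: Sized {
  fn from_uniform_bytes(bytes: &[u8; N]) -> Self;
}

pub trait Ciphersuite {
  // A scalar field whose elements may be derived from uniform bytes.
  type F: FromUniformBytes<64>;

  // Hash a single input to uniform bytes. No DST parameter exists, so none
  // can be insecurely handled.
  fn hash(input: &[u8]) -> [u8; 64];

  #[allow(non_snake_case)]
  fn hash_to_F(input: &[u8]) -> Self::F {
    Self::F::from_uniform_bytes(&Self::hash(input))
  }
}
```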
Luke Parker
a4811c9a41 Tag dalek-ff-group 0.4.6 2025-08-29 02:07:29 -04:00
Luke Parker
12cfa6b2a5 Differentiate no-std from alloc within tests/no-std
Fixes `no-std` builds for packages which are intended to be `no-std` (without
`alloc`).

Updates a variety of MSRVs to 1.73 due to `flexible-transcript` no longer using
`std-shims` to achieve 1.66 (as `std-shims` requires `alloc`). A future
improvement would be for `std-shims` to have an `alloc` feature and only
provide MSRV shims without it.
2025-08-29 01:23:18 -04:00
Luke Parker
0c71b6fc4d Fix 32-bit, no-std builds of crypto limbs 2025-08-29 01:04:40 -04:00
Luke Parker
ffe1b60a11 Move the contents of the evrf/ folder to the crypto/ folder
It was justified when it had several libraries, which it no longer does thanks
to the upstreaming with monero-oxide.
2025-08-29 00:25:09 -04:00
Luke Parker
5526b8d439 Use SEC1 for the encoding of secq256k1 points, like secp256k1 does 2025-08-28 23:52:05 -04:00
Luke Parker
beac35c119 Introduce the complete point addition formulas to short-weierstrass 2025-08-28 23:16:16 -04:00
Luke Parker
62bb75e09a Move secq256k1 to short-weierstrass 2025-08-28 23:07:22 -04:00
Luke Parker
45bd376c08 Fix handling of prime/composite-order curves within short-weierstrass 2025-08-28 22:31:33 -04:00
Luke Parker
da190759a9 Move embedwards25519 over to short-weierstrass 2025-08-28 22:08:17 -04:00
Luke Parker
f2d399ba1e Add crate for working with short Weierstrass elliptic curves 2025-08-28 08:20:31 -04:00
Luke Parker
220bcbc592 Add prime-field crate
prime-field introduces a macro to generate a prime field, in its entirety,
de-duplicating code across minimal-ed448, embedwards25519, and secq256k1.
2025-08-28 03:36:15 -04:00
Luke Parker
85949f4b04 Update from kayabaNerve/monero-oxide to monero-oxide/monero-oxide 2025-08-28 01:09:18 -04:00
Luke Parker
f8adfb56ad Remove unwrap within debug assertion 2025-08-26 23:15:58 -04:00
Luke Parker
2f833dec77 Add job to competently check MSRVs
The prior workflow (now deleted) required manually specifying the packages to
check and only checked the package could compile under the stated MSRV. It
didn't verify it was actually the _minimum_ supported Rust version. The new
version finds the MSRV from scratch to check if the stated MSRV aligns.

Updates stated MSRVs accordingly.

Also removes many explicit dependencies from secq256k1 for their re-exports via
k256. Not directly relevant, just part of tidying up all the `toml`s.
2025-08-26 14:13:00 -04:00
Luke Parker
e3e41324c9 Update licenses 2025-08-25 10:06:35 -04:00
Luke Parker
6ed7c5d65e Update dependencies always built with optimizations 2025-08-25 09:50:13 -04:00
Luke Parker
9dddfd91c8 Fix clippy, update old dependencies 2025-08-25 09:17:29 -04:00
Luke Parker
c24b694fb2 Correct secq256k1/embedwards25519 Zeroize implementations 2025-08-25 05:06:56 -04:00
Luke Parker
738babf7e9 dkg-evrf crate
monero-oxide relies on ciphersuite, which is in-tree, yet we've made breaking
changes since. This commit adds a patch so
monero-oxide -> patches/ciphersuite -> crypto/ciphersuite, with
patches/ciphersuite resolving the breaking changes.
2025-08-25 04:49:54 -04:00
Luke Parker
33faa53b56 Remove dleq, dkg-promote, dkg-pedpop per #597
Does not move them to a new repository at this time.
2025-08-24 21:40:18 -04:00
Luke Parker
8c366107ae Merge branch 'develop' into next
This resolves the conflicts and gets the workspace `Cargo.toml`s to not be
invalid. It doesn't actually get clippy to pass again yet.

Does move `crypto/dkg/src/evrf` into a new `crypto/dkg/evrf` crate (which does
not yet compile).
2025-08-23 15:05:13 -04:00
Luke Parker
7a790f3a20 ff/alloc when ciphersuite/alloc 2025-08-23 11:00:05 -04:00
Luke Parker
a7c77f8b5f repr(transparent) on dalek_ff_group::FieldElement 2025-08-23 05:17:43 -04:00
Luke Parker
da3095ed15 Remove FieldElement::from_square
The new `FieldElement::from_u256` is sufficient to load an unreduced value. The
caller can perform the square themselves, without us explicitly supporting this
special case.

Updates the monero-oxide version used to one which no longer uses
`FieldElement::from_square` (as their use is why it was added).
2025-08-22 18:42:43 -04:00
Luke Parker
758d422595 Have <ed448::Point as Zeroize>::zeroize yield a well-defined value 2025-08-20 08:14:00 -04:00
Luke Parker
9841061b49 Add missing feature in substrate/client 2025-08-20 06:38:25 -04:00
Luke Parker
4122a0135f Fix dirty Cargo.lock 2025-08-20 05:20:47 -04:00
Luke Parker
b63ef32864 Smash Ciphersuite definitions into their own crates
Uses dalek-ff-group for Ed25519 and Ristretto. Uses minimal-ed448 for Ed448.
Adds ciphersuite-kp256 for Secp256k1 and P-256.
2025-08-20 05:12:36 -04:00
Luke Parker
8be03a8fc2 Fix dirty lockfile 2025-08-20 01:15:56 -04:00
Luke Parker
677a2e5749 Fix zeroization timeline in multiexp, cargo machete 2025-08-20 00:35:56 -04:00
Luke Parker
38bda1d586 dalek_ff_group::FieldElement: FromUniformBytes<64> 2025-08-20 00:23:39 -04:00
Luke Parker
2bc2ca6906 Implement FromUniformBytes<64> for dalek_ff_group::Scalar 2025-08-20 00:06:07 -04:00
Luke Parker
900a6612d7 Use std-shims to reduce flexible-transcript MSRV to 1.66
flexible-transcript already had a shim to support <1.66. This was irrelevant
since flexible-transcript had an MSRV of 1.73. Due to how clunky it was, it has
been removed despite theoretically enabling an even lower MSRV.
2025-08-19 23:43:26 -04:00
Luke Parker
17c1d5cd6b Tweak multiexp to Zeroize points when invoked in constant time, not just scalars 2025-08-19 22:28:59 -04:00
Luke Parker
8a1b56a928 Make the transcript dependency optional for schnorr-signatures
It's only required when aggregating.
2025-08-19 21:50:58 -04:00
Luke Parker
75964cf6da Place Schnorr signature aggregation behind a feature flag 2025-08-19 21:45:59 -04:00
Luke Parker
d407e35cee Fix Ciphersuite feature flagging 2025-08-19 21:42:25 -04:00
Luke Parker
c8ef044acb Version bump std-shims 2025-08-19 21:01:14 -04:00
Luke Parker
ddbc32de4d Update ciphersuite/dkg MSRVs 2025-08-19 18:20:19 -04:00
Luke Parker
e5ccfac19e Replace bespoke LazyLock/OnceLock with spin re-exports
Presumably notably slower on platforms with `std`, yet only when compiled with
old versions of Rust for which the option is this or no support anyway.
2025-08-19 18:10:33 -04:00
Luke Parker
432daae1d1 Polyfill extension traits for div_ceil and io::Error::other 2025-08-19 18:04:29 -04:00
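A sketch of one such polyfill, for `div_ceil` (the extension-trait name is illustrative):

```rust
// Polyfill extension trait for `u64::div_ceil` (stabilized in Rust 1.73);
// the trait and method names here are hypothetical.
pub trait DivCeilPolyfill {
  fn div_ceil_p(self, rhs: Self) -> Self;
}

impl DivCeilPolyfill for u64 {
  fn div_ceil_p(self, rhs: Self) -> Self {
    (self / rhs) + u64::from((self % rhs) != 0)
  }
}
```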
Luke Parker
da3a85efe5 Only drop OnceLock value if initialized 2025-08-19 17:50:04 -04:00
Luke Parker
1e0240123d Shim LazyLock when before 1.70 2025-08-19 17:40:19 -04:00
Luke Parker
f6d4d1b084 Remove unused import, fix dirty Cargo.lock 2025-08-19 16:24:19 -04:00
Luke Parker
1b37dd2951 Shim std::sync::LazyLock for Rust < 1.80
Allows downgrading some crypto crates' MSRV to 1.79 as well.
2025-08-19 16:15:44 -04:00
Luke Parker
f32e0609f1 Add warning to dalek-ff-group 2025-08-19 15:25:40 -04:00
Luke Parker
ca85f9ba0c Remove the poorly-designed reduce_512 API
Unused and unpublished. This was only added in the FCMP++ branch as a quick fix
for performance reasons. Finding a better API is still a tricky question, but
this API is _bad_.
2025-08-19 15:24:49 -04:00
Luke Parker
cfd1cb3a37 Add FieldElement::wide_reduce to dalek-ff-group 2025-08-19 13:48:54 -04:00
Luke Parker
f2c13a0040 Expose Once within std-shims, bump spin to 0.9
This is technically a semver break due to bumping spin to 0.10, with the types
from spin being directly exposed. Long-term, we should not directly expose spin
but instead have our own types which are thin wrappers around spin (clearly
defining our API and allowing upgrading internals without breaking semver).
2025-08-19 13:36:01 -04:00
Luke Parker
961f46bc04 Add const fn to create a dalek-ff-group FieldElement 2025-08-19 13:17:39 -04:00
Luke Parker
2c4de3bab4 Bump version of ff-group-tests 2025-08-19 12:51:16 -04:00
Luke Parker
95c30720d2 Update how x coordinates are handled in bitcoin-serai 2025-08-18 14:52:29 -04:00
Luke Parker
ceede14f5c Fix misc compilation errors 2025-08-18 14:52:29 -04:00
Luke Parker
5e60ea9718 Don't offset nonces yet negate to achieve an even Y coordinate
Replaces an iterative loop with an immediate result, if action is necessary.
2025-08-18 14:52:29 -04:00
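A hedged sketch of the immediate result described, assuming k256's API (bitcoin-serai's actual code may differ):

```rust
use k256::{
  elliptic_curve::{ops::MulByGenerator, point::AffineCoordinates},
  ProjectivePoint, Scalar,
};

// If the nonce commitment has an odd Y coordinate, negate the nonce: the
// negated point has an even Y, so one conditional negation replaces any
// iterative offsetting.
fn even_y_nonce(nonce: Scalar) -> Scalar {
  let point = ProjectivePoint::mul_by_generator(&nonce);
  if bool::from(point.to_affine().y_is_odd()) {
    -nonce
  } else {
    nonce
  }
}
```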
Luke Parker
153f6f2f2f Update to a monero-oxide patched to dkg 0.6 2025-08-18 14:52:29 -04:00
Luke Parker
104c0d4492 Rename ThresholdKeys::secret_share to ThresholdKeys::original_secret_share 2025-08-18 14:52:29 -04:00
Luke Parker
7c8f13ab28 Raise flexible-transcript requirement as required 2025-08-18 14:52:29 -04:00
Luke Parker
cb0deadf9a Version bump flexible-transcript 2025-08-18 14:52:29 -04:00
Luke Parker
cb489f9cef Other version bumps 2025-08-18 14:52:29 -04:00
Luke Parker
cc662cb591 Version bumps, add necessary version specifications 2025-08-18 14:52:29 -04:00
Luke Parker
a8b8844e3f Fix MSRV for simple-request 2025-08-18 14:52:29 -04:00
Luke Parker
82b543ef75 Fix clippy lint for ed448 on optional compilation path 2025-08-18 14:52:29 -04:00
Luke Parker
72e80c1a3d Update everything which uses dkg to the new APIs 2025-08-18 14:52:29 -04:00
Luke Parker
b6edc94bcd Add dealer key generation crate 2025-08-18 14:52:29 -04:00
Luke Parker
cfce2b26e2 Update READMEs, targeting an 80-character line limit 2025-08-18 14:52:29 -04:00
Luke Parker
e87bbcda64 Have modular-frost compile again 2025-08-18 14:52:29 -04:00
Luke Parker
9f84adf8b3 Smash dkg into dkg, dkg-[recovery, promote, musig, pedpop]
promote and pedpop require dleq, which don't support no-std. All three should
be moved outside the Serai repository, per #597, as none are planned for use
and worth covering under our BBP.
2025-08-18 14:52:29 -04:00
Luke Parker
3919cf55ae Extend modular-frost to test with scaled and offset keys
The transcript transcripted the group key _plus_ the offset, when it should've
only transcripted the group key as the declared group key already had the
offset applied. This has been fixed.
2025-08-18 14:52:29 -04:00
Luke Parker
38dd8cb191 Support taking arbitrary linear combinations of signing keys, not just additive offsets 2025-08-18 14:52:29 -04:00
Luke Parker
f2563d39cb Correct crypto MSRVs 2025-08-18 14:52:29 -04:00
Luke Parker
15a9cbef40 git checkout -f next ./crypto
Proceeds to remove the eVRF DKG after, only keeping what's relevant to this
branch alone.
2025-08-18 14:52:29 -04:00
Luke Parker
078d6e51e5 Re-install python3 after removal to solve unmet dependencies 2025-08-15 16:17:31 -04:00
Luke Parker
6c33e18745 Explicitly install python3 to fix build-dependencies 2025-08-15 16:14:10 -04:00
Luke Parker
b743c9a43e Update Rust version
This causes the Serai node to compile and run again.
2025-08-15 15:26:16 -04:00
Luke Parker
0c2f2979a9 Remove monero-serai, migrating to monero-oxide 2025-08-15 11:45:20 -04:00
Luke Parker
971951a1a6 Add overflow-checks even on release, per good practice 2025-08-15 10:56:28 -04:00
Luke Parker
92d9e908cb Version bumps for packages that needed to be published for monero-oxide 2025-08-15 10:56:10 -04:00
Luke Parker
a32b97be88 Move to wasm32v1-none from wasm32-unknown-unknown
Works towards fixing how the Substrate node Docker image no longer works.
2025-08-15 10:55:05 -04:00
Luke Parker
e3809b2ff1 Remove unnecessary edits to Docker config in an attempt to fix the CI 2025-08-12 01:27:28 -04:00
Luke Parker
fd2d8b4f0a Use Rust 1.89 when installing bins via cargo, version pin svm-rs
svm-rs just released a new version requiring 1.89 to compile. This changes the
process to not install _any_ software with 1.85, minimizing how many toolchains
we have in use.
2025-08-12 01:27:28 -04:00
Luke Parker
bc81614894 Attempt Docker 24 again 2025-08-12 01:27:28 -04:00
Luke Parker
8df5aa2e2d Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-12 01:27:28 -04:00
Luke Parker
b000740470 Docker 25 since 24 doesn't have an active tag anymore 2025-08-12 01:27:28 -04:00
Luke Parker
b9f554111d Attempt to use Docker 24
Long-shot premised on an old forum post on how downgrading to Docker 24 solved
their instance of the error we face, though our conditions for it are
presumably different.
2025-08-12 01:27:28 -04:00
Luke Parker
354c408e3e Stop using an older version of Docker 2025-08-12 01:27:28 -04:00
Luke Parker
df3b60376a Restore Debian 12 Bookworm over Debian 11 Bullseye 2025-08-12 01:27:28 -04:00
Luke Parker
8d209c652e Add missing "-4" arguments to wget 2025-08-12 01:27:28 -04:00
Luke Parker
9ddad794b4 Use wget -4 for the same reason as the prior commit 2025-08-12 01:27:28 -04:00
Luke Parker
b934e484cc Replace busybox wget with wget on alpine to attempt to resolve DNS issues
See https://github.com/alpinelinux/docker-alpine/issues/155.
2025-08-12 01:27:28 -04:00
Luke Parker
f8aee9b3c8 Add overflow-checks = true recommendation to monero-serai 2025-08-12 01:27:28 -04:00
Luke Parker
f51d77d26a Fix tweaked Substrate connection code in serai-client tests 2025-08-12 01:27:28 -04:00
Luke Parker
0780deb643 Use three separate commands within the Bitcoin Dockerfile to download the release
Attempts to debug which is failing, as right now, the command as a whole is what fails within the CI.
2025-08-12 01:27:28 -04:00
Luke Parker
75c38560f4 Bookworm -> Bullseye, except for the runtime 2025-08-12 01:27:28 -04:00
Luke Parker
9f1c5268a5 Attempt downgrading Docker from 27 to 26 2025-08-12 01:27:28 -04:00
Luke Parker
35b113768b Attempt downgrading docker from .28 to .27 2025-08-12 01:27:28 -04:00
Luke Parker
f2595c4939 Tweak how the substrate-client tests wait to connect to the Monero node 2025-08-12 01:27:28 -04:00
Luke Parker
8fcfa6d3d5 Add dedicated error for when amounts aren't representable within a u64
Fixes the issue where _inputs_ could still overflow u64::MAX and cause a panic.
2025-08-12 01:27:28 -04:00
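A hedged sketch of the guarded summation (error and function names are illustrative):

```rust
// Sum input amounts with checked arithmetic, so totals past u64::MAX yield a
// dedicated error instead of a panic. Names are hypothetical.
#[derive(Debug)]
enum AmountError {
  NotRepresentable,
}

fn sum_amounts(amounts: &[u64]) -> Result<u64, AmountError> {
  amounts.iter().try_fold(0u64, |sum, amount| {
    sum.checked_add(*amount).ok_or(AmountError::NotRepresentable)
  })
}
```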
Luke Parker
54c9d19726 Have docker install set host 2025-08-12 01:27:28 -04:00
Luke Parker
25324c3cd5 Add uidmap dependency for rootless Docker 2025-08-12 01:27:28 -04:00
Luke Parker
ecb7df85b0 if: runner.os == 'Linux', with single quotes 2025-08-12 01:27:28 -04:00
Luke Parker
68c7acdbef Attempt using rootless Docker in CI via the setup-docker-action
Restores using ubuntu-latest.

Basically, at some point in the last year the existing Docker e2e tests started
failing. I'm unclear if this is an issue with the OS, the docker packages, or
what. This just tries to find a solution.
2025-08-12 01:27:28 -04:00
Luke Parker
8b60feed92 Normalize FROM AS casing in Dockerfiles 2025-08-12 01:27:28 -04:00
Luke Parker
5c895efcd0 Downgrade tests requiring Docker from Ubuntu latest to Ubuntu 22.04
Attempts to resolve containers immediately exiting for some specific test runs.
2025-08-12 01:27:28 -04:00
Luke Parker
60e55656aa deny --hide-inclusion-graph 2025-08-12 01:27:28 -04:00
Luke Parker
9536282418 Update which deb archive to use within the runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8297d0679d Update substrate to one with a properly defined panic handler as of modern Rust 2025-08-12 01:27:28 -04:00
Luke Parker
d9f854b08a Attempt to fix install of clang within runtime Dockerfile 2025-08-12 01:27:28 -04:00
Luke Parker
8aaf7f7dc6 Remove (presumably) unnecessary command to explicitly install python 2025-08-12 01:27:28 -04:00
Luke Parker
ce447558ac Update Rust versions used in orchestration 2025-08-12 01:27:28 -04:00
Luke Parker
fc850da30e Missing --allow-remove-essential flag 2025-08-12 01:27:28 -04:00
Luke Parker
d6f6cf1965 Attempt to force remove shim-signed to resolve 'unmet dependencies' issues with shim-signed 2025-08-12 01:27:28 -04:00
Luke Parker
4438b51881 Expand python packages explicitly installed 2025-08-12 01:27:28 -04:00
Luke Parker
6ae0d9fad7 Install cargo deny with Rust 1.85 and pin its version 2025-08-12 01:27:28 -04:00
Luke Parker
ad08b410a8 Pin cargo-machete to 0.8.0 to prevent other unexpected CI failures 2025-08-12 01:27:28 -04:00
Luke Parker
ec3cfd3ab7 Explicitly install python3 after removing various unnecessary packages 2025-08-12 01:27:28 -04:00
Luke Parker
01eb2daa0b Updated dated version of actions/cache 2025-08-12 01:27:28 -04:00
Luke Parker
885000f970 Add update, upgrade, fix-missing call to Ubuntu build dependencies
Attempts to fix a CI failure for some misconfiguration...
2025-08-12 01:27:28 -04:00
Luke Parker
4be506414b Install cargo machete with Rust 1.85
cargo machete now uses Rust's 2024 edition, and 1.85 was the first to ship it.
2025-08-12 01:27:28 -04:00
Luke Parker
1143d84e1d Remove msbuild from packages to remove when the CI starts
Apparently, it's no longer installed by default.
2025-08-12 01:27:28 -04:00
Luke Parker
336922101f Further harden decoy selection
It risked panicking if a non-monotonic distribution was returned. While the
provided RPC code won't return non-monotonic distributions, users are allowed
to define their own implementations and override the provided method. Said
implementations could omit this required check.
2025-08-12 01:27:28 -04:00
Luke Parker
ffa033d978 Clarify transcripting for Clsag::verify, Mlsag::verify, as with Clsag::sign 2025-08-12 01:27:28 -04:00
Luke Parker
23f986f57a Tweak the Substrate runtime as required by the Rust version bump performed 2025-08-12 01:27:28 -04:00
Luke Parker
bb726b58af Fix #654 2025-08-12 01:27:28 -04:00
Luke Parker
387615705c Fix #643 2025-08-12 01:27:28 -04:00
Luke Parker
c7f825a192 Rename Bulletproof::calculate_bp_clawback to Bulletproof::calculate_clawback 2025-08-12 01:27:28 -04:00
Luke Parker
d363b1c173 Fix #630 2025-08-12 01:27:28 -04:00
Luke Parker
d5077ae966 Respond to 13.1.1.
Uses Zeroizing for username/password in monero-simple-request-rpc.
2025-08-12 01:27:28 -04:00
Luke Parker
188fcc3cb4 Remove potentially-failing unchecked arithmetic operations for ones which error
In response to 9.13.3.

Requires a bump to Rust 1.82 to take advantage of `Option::is_none_or`.
2025-08-12 01:27:28 -04:00
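An illustrative sketch (not the audited code) of both patterns: checked arithmetic surfaced as errors, and `Option::is_none_or` (stable since Rust 1.82) for checks against an optional limit:

```rust
fn scaled(value: u64, factor: u64) -> Result<u64, &'static str> {
  // `checked_mul` returns None on overflow; map that to an error instead of
  // using `*`, which panics in debug builds and wraps in release builds.
  value.checked_mul(factor).ok_or("amount overflowed u64")
}

fn within_bound(len: usize, bound: Option<usize>) -> bool {
  // True if there's no bound, or if the length respects it.
  bound.is_none_or(|bound| len <= bound)
}

fn main() {
  assert_eq!(scaled(2, 3), Ok(6));
  assert!(scaled(u64::MAX, 2).is_err());
  assert!(within_bound(5, None));
  assert!(!within_bound(5, Some(4)));
}
```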
Luke Parker
cbab9486c6 Clarify messages in non-debug assertions 2025-08-12 01:27:28 -04:00
Luke Parker
a5f4c450c6 Response to usage of unwrap in non-test code
This commit replaces all usage of `unwrap` with `expect` within
`networks/monero`, clarifying why the panic risked is unreachable. This commit
also replaces some uses of `unwrap` with solutions which are guaranteed not to
fail.

Notably, compilation on 128-bit systems is prevented, ensuring
`u64::try_from(usize::MAX)` will never panic at runtime.

Slight breaking changes are additionally included as necessary to massage out
some avoidable panics.
2025-08-12 01:27:28 -04:00
Luke Parker
4f65a0b147 Remove Clone from ClsagMultisigMask{Sender, Receiver}
This had ill-defined properties on Clone, as a mask could be sent multiple times
(unintended) and multiple algorithms may receive the same mask from a singular
sender.

Requires removing the Clone bound within modular-frost and expanding the test
helpers accordingly.

This was not raised in the audit, yet was caught upon independent review.
2025-08-12 01:27:28 -04:00
Luke Parker
feb18d64a7 Respond to 2 3
We now use `FrostError::InternalError` instead of a panic to represent the mask
not being set.
2025-08-12 01:27:28 -04:00
Luke Parker
cb1e6535cb Respond to 2 2 2025-08-12 01:27:28 -04:00
Luke Parker
6b8cf6653a Respond to 1.1 A2 (also cited as 2 1)
`read_vec` was unbounded. It now accepts an optional bound. In some places, we
are able to define and provide a bound (Bulletproofs(+)' `L` and `R` vectors).
In others, we cannot (the amount of inputs within a transaction, which is not
subject to any rule in the current consensus other than the total transaction
size limit). Usage of `None` in those locations preserves the existing
behavior.
2025-08-12 01:27:28 -04:00
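A simplified sketch of the bounded read. The real `read_vec` reads a vector of decodable elements; this version reads raw bytes behind a u64 length prefix, but keeps the `Option<usize>` bound semantics described (with `None` preserving the prior, unbounded behavior):

```rust
use std::io::{self, Read};

fn read_vec(reader: &mut impl Read, bound: Option<usize>) -> io::Result<Vec<u8>> {
  let mut len = [0; 8];
  reader.read_exact(&mut len)?;
  let len = usize::try_from(u64::from_le_bytes(len))
    .map_err(|_| io::Error::new(io::ErrorKind::InvalidData, "length exceeds usize"))?;
  // If a bound was provided, reject lengths which exceed it.
  if bound.is_some_and(|bound| len > bound) {
    return Err(io::Error::new(io::ErrorKind::InvalidData, "vector exceeded bound"));
  }
  let mut res = vec![0; len];
  reader.read_exact(&mut res)?;
  Ok(res)
}

fn main() -> io::Result<()> {
  let mut serialized = 3u64.to_le_bytes().to_vec();
  serialized.extend([1, 2, 3]);
  assert_eq!(read_vec(&mut serialized.as_slice(), Some(8))?, vec![1, 2, 3]);
  // The same serialization fails to read under a tighter bound.
  assert!(read_vec(&mut serialized.as_slice(), Some(2)).is_err());
  Ok(())
}
```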
Luke Parker
b426bfcfe8 Respond to 1.1 A1 2025-08-12 01:27:28 -04:00
Luke Parker
21ce50ecf7 Revert "Forward docker stderr to stdout in case stderr is being dropped for some reason"
This was intended for the monero-audit branch.
2025-08-10 20:53:09 -04:00
Luke Parker
a4ceb2e756 Forward docker stderr to stdout in case stderr is being dropped for some reason 2025-08-10 20:50:12 -04:00
Luke Parker
b59b1f59dd Remove ToB report PDF by request 2025-07-18 03:19:10 -04:00
Luke Parker
cc4a65e82a Add Trail of Bits audit of our Ethereum code 2025-07-12 03:29:56 -04:00
Luke Parker
eab5d9e64f Remove Mastodon link from README
Closes #662.
2025-07-12 03:29:21 -04:00
Luke Parker
4e0c58464f Update Router documentation after following B2 (B1 redux) 2025-04-12 10:04:10 -04:00
Luke Parker
205da3fd38 Update the Ethereum processor to the Router messages including their on-chain address
This only updates the syntax. It does not yet actually route the address as
necessary.
2025-04-12 09:57:29 -04:00
Luke Parker
f7e63d4944 Have Router signatures additionally sign the Router's address (B2)
This slightly modifies the gas usage of the contract in a way breaking the
existing vector. A new, much simpler, vector has been provided instead.
2025-04-12 09:55:40 -04:00
Luke Parker
b5608fc3d2 Update dated documentation for verifySignature (B1) 2025-04-12 08:42:45 -04:00
Luke Parker
33018bf6da Explicitly ban the identity point as an Ethereum Schnorr public key (002)
This doesn't have a well-defined affine representation. k256's behavior,
mapping it to (0, 0), means this would've been rejected anyways (so this isn't
a change of any current behavior), but it's best not to rely on such an
implementation detail.
2025-04-12 08:38:06 -04:00
Luke Parker
bef90b2f1a Fix gas estimation discrepancy when gas isn't monotonic 2025-04-12 08:32:11 -04:00
Luke Parker
184c02714a alloy-core 1.0, alloy 0.14, revm 0.22 (001)
This moves to Rust 1.86, as we were prior on Rust 1.81 and the new alloy
dependencies require 1.82.

The revm API changes were notable for us. Instead of relying on a modified call
instruction (with deep introspection into the EVM design), we now use the more
recent and now more prominent Inspector API. This:

1) Lets us perform far less introspection
2) Forces us to rewrite the gas estimation code we just had audited

Thankfully, it itself should be much easier to read/review, and our existing
test suite has extensively validated it.

This resolves 001 which was a concern for if/when this upgrade occurs. By doing
it now, with a dedicated test case ensuring the issue we would have had with
alloy-core 0.8 and `validate=false` isn't actively an issue, we resolve it.
2025-04-12 08:09:09 -04:00
Luke Parker
5a7b815e2e Update nightly version 2025-02-04 07:57:04 -05:00
Luke Parker
22e411981a Resolve clippy errors from recent merges 2025-01-30 05:04:28 -05:00
akildemir
11d48d0685 add Serai JSON-RPC methods (#627)
* add serai rpc methods

* fix machete & dex quote price api

* fix validators api

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:23:03 -05:00
akildemir
e4cc23b72d add economic security pallet tests (#623) 2025-01-30 04:19:12 -05:00
akildemir
52d853c8ba add validator sets pallet tests (#614)
* add validator sets pallet tests

* update tests with new types

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2025-01-30 04:16:19 -05:00
akildemir
9c33a711d7 add in instructions pallet tests (#608)
* add pallet tests

* set mock runtime AllowMint to correct type
2025-01-30 04:13:21 -05:00
Luke Parker
a275023cfc Finish merging in the develop branch 2025-01-30 03:14:24 -05:00
Luke Parker
258c02ff39 Merge branch 'develop' into next
This is an initial resolution of conflicts which does not work.
2025-01-30 00:56:29 -05:00
Luke Parker
3655dc723f Use clearer identity check in equality 2025-01-30 00:13:55 -05:00
Luke Parker
315d4fb356 Correct decoding identity for embedwards25519/secq256k1 2025-01-29 23:01:45 -05:00
Luke Parker
2bc880e372 Downstream the eVRF libraries from FCMP++
Also adds no-std support to secq256k1 and embedwards25519.
2025-01-29 22:29:40 -05:00
Luke Parker
19422de231 Ensure a non-zero fee in the Router OutInstruction gas fuzz test 2025-01-27 15:39:55 -05:00
Luke Parker
fa0dadc9bd Rename Deployer bytecode to initcode 2025-01-27 15:39:06 -05:00
Luke Parker
f004c8726f Remove unused library bytecode from ethereum-schnorr-contract 2025-01-27 15:38:44 -05:00
Luke Parker
835b5bb06f Split tests across a few files, fuzz generate OutInstructions
Tests successful gas estimation even with more complex behaviors.
2025-01-27 13:59:11 -05:00
Luke Parker
0484113254 Fix the ability for a malicious adversary to snipe ERC20s out via re-entrancy from the ERC20 contract 2025-01-27 13:07:35 -05:00
Luke Parker
17cc10b3f7 Test Execute result decoding, reentrancy 2025-01-27 13:01:52 -05:00
Luke Parker
7e01589fba Erc20::approve for DestinationType::Contract
This allows the CREATE code to bork without the Serai router losing access to
the coins in question. It does incur overhead on the deployed contract, which
now not only has to query its balance but also has to call transferFrom, but
it's a safer pattern and not a UX detriment.

This also improves documentation.
2025-01-27 11:58:39 -05:00
Luke Parker
f8c3acae7b Check the Router-deployed contracts' code 2025-01-27 07:48:37 -05:00
Luke Parker
0957460f27 Add supporting security commentary to Router.sol 2025-01-27 07:36:23 -05:00
Luke Parker
ea00ba9ff8 Clarified usage of CREATE
CREATE was originally intended for gas savings. While one sketch did move to
CREATE2, the security concerns around address collisions (requiring all init
codes not be malleable to achieve security) continue to justify this.

To resolve the gas estimation concerns raised in the prior commit, the
createAddress function has been made constant-gas.
2025-01-27 07:36:13 -05:00
Luke Parker
a9625364df Test createAddress
Benchmarks gas usage

Note the estimator needs to be updated as this is now variable-gas to the
state.
2025-01-27 05:37:56 -05:00
Luke Parker
75c6427d7c CREATE uses RLP, not ABI-encoding 2025-01-27 04:24:25 -05:00
Luke Parker
e742a6b0ec Test ERC20 OutInstructions 2025-01-27 02:08:01 -05:00
Luke Parker
5164a710a2 Redo gas estimation via revm
Adds a minimal amount of packages. Does add decent complexity. Avoids having
constants which aren't exact, due to things like the quadratic memory cost, and
the issues with such estimates accordingly.
2025-01-26 22:42:50 -05:00
Luke Parker
27c1dc4646 Test ETH address/code OutInstructions 2025-01-24 18:46:17 -05:00
Luke Parker
3892fa30b7 Test an empty execute 2025-01-24 17:13:36 -05:00
Luke Parker
ed599c8ab5 Have the Batch event encode the amount of results
Necessary to distinguish a bitvec with 1 result from a bitvec with 7 results.
2025-01-24 17:04:25 -05:00
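A sketch of the ambiguity: results packed into bytes lose their exact bit length, so one byte could encode anywhere from 1 to 8 results. Carrying the count alongside the bytes disambiguates (names and bit order here are illustrative):

```rust
fn unpack_results(count: usize, bytes: &[u8]) -> Option<Vec<bool>> {
  // Reject byte strings which couldn't have come from `count` results.
  if bytes.len() != count.div_ceil(8) {
    return None;
  }
  Some((0 .. count).map(|i| ((bytes[i / 8] >> (i % 8)) & 1) == 1).collect())
}

fn main() {
  // The same single byte decodes differently depending on the claimed count.
  assert_eq!(unpack_results(1, &[0b0000_0001]), Some(vec![true]));
  assert_eq!(unpack_results(7, &[0b0000_0001]).unwrap().len(), 7);
  assert_eq!(unpack_results(9, &[0b0000_0001]), None);
}
```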
Luke Parker
29bb5e21ab Take advantage of RangeInclusive for specifying filters' blocks 2025-01-24 07:44:47 -05:00
Luke Parker
604a4b2442 Add execute_tx to fill in missing test cases reliant on it 2025-01-24 07:33:36 -05:00
Luke Parker
977dcad86d Test the Router rejects invalid signatures 2025-01-24 07:22:43 -05:00
Luke Parker
cefc542744 Test SeraiKeyWasNone 2025-01-24 06:58:54 -05:00
Luke Parker
164fe9a14f Test Router's InvalidSeraiKey error 2025-01-24 06:41:24 -05:00
Luke Parker
f948881eba Simplify async code in in_instructions_unordered
Outsources fetching the ERC20 events to top_level_transfers_unordered.
2025-01-24 05:43:04 -05:00
Luke Parker
201b675031 Test ERC20 InInstructions 2025-01-24 03:45:04 -05:00
Luke Parker
3d44766eff Add ERC20 InInstruction test 2025-01-24 03:23:58 -05:00
Luke Parker
a63a86ba79 Test Ether InInstructions 2025-01-23 09:30:54 -05:00
Luke Parker
e922264ebf Add selector collisions to the IERC20 lib 2025-01-23 08:25:59 -05:00
Luke Parker
7e53eff642 Fix the async flow with the Router
It had sequential async calls with complexity O(n), with a variety of redundant
calls. There was also a constant of... 4? 5? for each item. Now, the total
sequence depth is just 3-4.
2025-01-23 06:16:58 -05:00
Luke Parker
669b8b776b Work on testing the Router
Completes the `Executed` enum in the router. Adds an `Escape` struct. Both are
needed for testing purposes.

Documents the gas constants in intent and reasoning.

Adds modernized tests around key rotation and the escape hatch.

Also updates the rest of the codebase which had accumulated errors.
2025-01-23 02:06:06 -05:00
Luke Parker
6508957cbc Make a proper nonReentrant modifier
Prior, execute couldn't be called twice within a single TX. Now, it can.

Also adds a bit more context to the escape hatch events/errors.
2025-01-23 00:04:44 -05:00
Luke Parker
373e794d2c Check the escaped to address has code set
Document choice not to use a confirmation flow there as well.
2025-01-22 22:45:51 -05:00
Luke Parker
c8f3a32fdf Replace custom read/write impls in router with borsh 2025-01-21 03:49:29 -05:00
Luke Parker
f690bf831f Remove old code still marked TODO 2025-01-19 02:36:34 -05:00
Luke Parker
0b30ac175e Restore workspace-wide clippy
Fixes accumulated errors in the Substrate code. Modifies the runtime build to
work with a modern clippy. Removes e2e tests from the workspace.
2025-01-19 02:27:35 -05:00
Luke Parker
47560fa9a9 Test manually implemented serializations in the Router lib 2025-01-19 00:45:26 -05:00
Luke Parker
9d57c4eb4d Downscope dependencies in serai-processor-ethereum-primitives, const-hex decode bytecode in ethereum-schnorr-contract 2025-01-19 00:16:50 -05:00
Luke Parker
642ba00952 Update Deployer README, 80-character line length 2025-01-19 00:03:56 -05:00
Luke Parker
3c9c12d320 Test the Deployer contract 2025-01-18 23:58:38 -05:00
Luke Parker
f6b52b3fd3 Maximum line length of 80 in Deployer.sol 2025-01-18 15:22:58 -05:00
Luke Parker
0d906363a0 Simplify and test deterministically_sign 2025-01-18 15:13:39 -05:00
Luke Parker
8222ce78d8 Correct accumulated errors in the processor 2025-01-18 12:41:57 -05:00
Luke Parker
cb906242e7 2025 nightly
Supersedes #640.
2025-01-18 12:41:25 -05:00
Luke Parker
2a19e9da93 Update to libp2p 0.54
This is the same libp2p Substrate uses as of
https://github.com/paritytech/polkadot-sdk/pull/6248.
2025-01-17 04:50:15 -05:00
Luke Parker
2226dd59cc Comment all dependencies in substrate/node
Causes the Cargo.lock to no longer include the substrate dependencies
(including its copy of libp2p).
2025-01-17 04:09:27 -05:00
Luke Parker
be2098d2e1 Remove Serai from the ConfirmDkgTask 2025-01-15 21:00:50 -05:00
Luke Parker
6b41f32371 Correct handling of InvalidNonce within the coordinator 2025-01-15 20:48:54 -05:00
Luke Parker
19b87c7f5a Add the DKG confirmation flow
Finishes the coordinator redo
2025-01-15 20:29:57 -05:00
Luke Parker
505f1b20a4 Correct re-attempts for the DKG Confirmation protocol
Also spawns the SetKeys task.
2025-01-15 17:49:41 -05:00
Luke Parker
8b52b921f3 Have the Tributary scanner yield DKG confirmation signing protocol data 2025-01-15 15:16:30 -05:00
Luke Parker
f36bbcba25 Flatten the map of preprocesses/shares, send Participant index with DkgParticipation 2025-01-15 14:24:51 -05:00
Luke Parker
167826aa88 Implement SeraiAddress <-> Participant mapping and add RemoveParticipant transactions 2025-01-15 12:51:35 -05:00
Luke Parker
bea4f92b7a Fix parity-db builds for the Coordinator 2025-01-15 12:10:11 -05:00
Luke Parker
7312fa8d3c Spawn PublishSlashReportTask
Updates it so that it'll try for every network instead of returning after any
network fails.

Uses the SlashReport type throughout the codebase.
2025-01-15 12:08:28 -05:00
Luke Parker
92a4cceeeb Spawn PublishBatchTask
Also removes the expectation Batches published via it are sent in an ordered
fashion. That won't be true if the signing protocols complete out-of-order (as
possible when we are signing them in parallel).
2025-01-15 11:21:55 -05:00
Luke Parker
3357181fe2 Handle sign::ProcessorMessage::[Preprocesses, Shares] 2025-01-15 10:47:47 -05:00
Luke Parker
7ce5bdad44 Don't add transactions for topics which have yet to be recognized 2025-01-15 07:01:24 -05:00
Luke Parker
0de3fda921 Further space out requests for cosigns from the network 2025-01-15 05:59:56 -05:00
Luke Parker
cb410cc4e0 Correct how we handle rounding errors within the penalty fn
We explicitly no longer slash stakes but we still set the maximum slash to the
allocated stake + the rewards. Now, the reward slash is bound to the rewards
and the stake slash is bound to the stake. This prevents an improperly rounded
reward slash from effecting a stake slash.
2025-01-15 02:46:31 -05:00
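The bounding described, as illustrative arithmetic (all names assumed):

```rust
// Each component of the penalty is clamped to its own pool, so a reward
// slash rounded up can never spill over into a stake slash.
fn penalty(reward_slash: u64, stake_slash: u64, rewards: u64, stake: u64) -> u64 {
  let reward_slash = reward_slash.min(rewards);
  let stake_slash = stake_slash.min(stake);
  // The maximum slash remains the allocated stake plus the rewards.
  reward_slash.saturating_add(stake_slash)
}

fn main() {
  // An improperly rounded reward slash (101 of 100) no longer touches stake.
  assert_eq!(penalty(101, 0, 100, 1_000), 100);
}
```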
Luke Parker
6c145a5ec3 Disable offline, disruptive slashes
Reasoning commented in codebase
2025-01-14 11:44:13 -05:00
Luke Parker
a7fef2ba7a Redesign Slash/SlashReport types with a function to calculate the penalty 2025-01-14 07:51:39 -05:00
Luke Parker
291ebf5e24 Have serai-task warnings print with the name of the task 2025-01-14 02:52:26 -05:00
Luke Parker
5e0e91c85d Add tasks to publish data onto Serai 2025-01-14 01:58:26 -05:00
Luke Parker
b5a6b0693e Add a proper error type to ContinuallyRan
This isn't necessary. Because we just log the error, we never match off of it,
we don't need any structure beyond String (or now Debug, which still gives us
a way to print the error). This is for the ergonomics of not having to
constantly write `.map_err(|e| format!("{e:?}"))`.
2025-01-12 18:29:08 -05:00
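A minimal sketch, assuming a simplified synchronous trait (the actual `ContinuallyRan` is async and shaped differently): an associated `Error: Debug` lets implementors return their own error types without `.map_err(|e| format!("{e:?}"))` at every call site:

```rust
use core::fmt::Debug;

trait ContinuallyRan {
  type Error: Debug;
  fn run_iteration(&mut self) -> Result<(), Self::Error>;
}

struct ExampleTask;
impl ContinuallyRan for ExampleTask {
  // `io::Error` implements Debug, so it's returnable as-is.
  type Error = std::io::Error;
  fn run_iteration(&mut self) -> Result<(), Self::Error> {
    Ok(())
  }
}

fn run(task: &mut impl ContinuallyRan) {
  // The error is only ever logged, never matched on, so Debug suffices.
  if let Err(e) = task.run_iteration() {
    eprintln!("task errored: {e:?}");
  }
}

fn main() {
  run(&mut ExampleTask);
}
```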
Luke Parker
3cc2abfedc Add a task to publish slash reports 2025-01-12 17:47:48 -05:00
Luke Parker
0ce9aad9b2 Add flow to add transactions onto Tributaries 2025-01-12 07:32:45 -05:00
Luke Parker
e35aa04afb Start handling messages from the processor
Does route ProcessorMessage::CosignedBlock. Rest are stubbed with TODO.
2025-01-12 06:07:55 -05:00
Luke Parker
e7de5125a2 Have processor-messages use CosignIntent/SignedCosign, not the historic cosign format
Has yet to update the processor accordingly.
2025-01-12 05:52:33 -05:00
Luke Parker
158140c3a7 Add a proper error for intake_cosign 2025-01-12 05:49:17 -05:00
Luke Parker
df9a9adaa8 Remove direct dependencies of void, async-trait 2025-01-12 03:48:43 -05:00
Luke Parker
d854807edd Make message_queue::client::Client::send fallible
Allows tasks to report the errors themselves and handle retry in our
standardized way.
2025-01-11 21:57:58 -05:00
Luke Parker
f501d46d44 Correct disabling of Nagle's algorithm 2025-01-11 06:54:43 -05:00
Luke Parker
74106b025f Publish SlashReport onto the Tributary 2025-01-11 06:51:55 -05:00
Luke Parker
e731b546ab Update documentation 2025-01-11 05:13:43 -05:00
Luke Parker
77d60660d2 Move spawn_cosign from main.rs into tributary.rs
Also refines the tasks within tributary.rs a good bit.
2025-01-11 05:12:56 -05:00
Luke Parker
3c664ff05f Re-arrange coordinator/
coordinator/tributary was tributary-chain. This crate has been renamed
tributary-sdk and moved to coordinator/tributary-sdk.

coordinator/src/tributary was our instantiation of a Tributary, the Transaction
type and scan task. This has been moved to coordinator/tributary.

The main reason for this was due to coordinator/main.rs becoming untidy. There
is now a collection of clean, independent APIs present in the codebase.
coordinator/main.rs is to compose them. Sometimes, these compositions are a bit
silly (reading from a channel just to forward the message to a distinct
channel). That's more than fine as the code is still readable and the value
from the cleanliness of the APIs composed far exceeds the nits from having
these odd compositions.

This breaks down a bit as we now define a global database, and have some APIs
interact with multiple other APIs.

coordinator/src/tributary was a self-contained, clean API. The recently added
task present in coordinator/tributary/mod.rs, which bound it to the rest of the
Coordinator, wasn't.

Now, coordinator/src is solely the API compositions, and all self-contained
APIs are their own crates.
2025-01-11 04:14:21 -05:00
Luke Parker
c05b0c9eba Handle Canonical, NewSet from serai-coordinator-substrate 2025-01-11 03:07:15 -05:00
Luke Parker
6d5049cab2 Move the task providing transactions onto the Tributary to the Tributary module
Slims down the main file a bit
2025-01-11 02:13:23 -05:00
Luke Parker
1419ba570a Route from tributary scanner to message-queue 2025-01-11 01:55:36 -05:00
Luke Parker
542bf2170a Provide Cosign/CosignIntent for Tributaries 2025-01-11 01:31:28 -05:00
Luke Parker
378d6b90cf Delete old Tributaries on reboot 2025-01-10 20:10:05 -05:00
Luke Parker
cbe83956aa Flesh out Coordinator main
Lot of TODOs as the APIs are all being routed together.
2025-01-10 02:24:24 -05:00
Luke Parker
091d485fd8 Have the Tributary scanner DB be distinct from the cosign DB
Allows deleting the entire Tributary scanner DB upon retirement.
2025-01-10 02:22:58 -05:00
Luke Parker
2a3eaf4d7e Wrap the entire Libp2p object in an Arc
Makes `Clone` calls significantly cheaper as now only the outer Arc is cloned
(the inner ones have been removed). Also wraps uses of Serai in an Arc as we
shouldn't actually need/want multiple caller connection pools.
2025-01-10 01:26:07 -05:00
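A sketch of the pattern: one outer Arc around the whole object, rather than an Arc per field, so `Clone` bumps a single refcount (field names are illustrative):

```rust
use std::sync::Arc;

struct Libp2pInner {
  // Previously, fields like these were each individually wrapped in an Arc.
  _peers: Vec<String>,
  _config: String,
}

#[derive(Clone)]
struct Libp2p(Arc<Libp2pInner>);

fn main() {
  let p2p = Libp2p(Arc::new(Libp2pInner { _peers: vec![], _config: String::new() }));
  let cloned = p2p.clone(); // A single refcount increment.
  assert_eq!(Arc::strong_count(&cloned.0), 2);
  drop(p2p);
  assert_eq!(Arc::strong_count(&cloned.0), 1);
}
```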
Luke Parker
23122712cb Document validator jailing upon participation failures and slash report determination
These are TODOs. I just wanted to ensure this was written down and each seemed
too small for GH issues.
2025-01-09 19:50:39 -05:00
Luke Parker
47eb793ce9 Slash upon Tendermint evidence
Decoding slash evidence requires specifying the instantiated generic
`TendermintNetwork`. While irrelevant, that generic includes a type satisfying
`tributary::P2p`. It was only possible to route now that we've redone the P2P
API.
2025-01-09 06:58:00 -05:00
Luke Parker
9b0b5fd1e2 Have serai-cosign index finalized blocks' numbers 2025-01-09 06:57:26 -05:00
Luke Parker
893a24a1cc Better document bounds in serai-coordinator-p2p 2025-01-09 06:57:12 -05:00
Luke Parker
b101e2211a Complete serai-coordinator-p2p 2025-01-09 06:23:14 -05:00
Luke Parker
201a444e89 Remove tokio dependency from serai-coordinator-p2p
Re-implements tokio::sync::oneshot with a thin wrapper around async-channel.

Also replaces futures-util with futures-lite.
2025-01-09 02:16:05 -05:00
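A minimal sketch of such a wrapper, assuming the `async-channel` and `futures-lite` crates (the actual wrapper presumably differs): a oneshot is a bounded(1) channel whose halves are consumed on use:

```rust
use async_channel::{bounded, Sender, Receiver};

struct OneshotSender<T>(Sender<T>);
struct OneshotReceiver<T>(Receiver<T>);

// A oneshot channel: capacity 1, with both halves consumed on use.
fn oneshot<T>() -> (OneshotSender<T>, OneshotReceiver<T>) {
  let (send, recv) = bounded(1);
  (OneshotSender(send), OneshotReceiver(recv))
}

impl<T> OneshotSender<T> {
  // Taking `self` by value ensures at most one value is ever sent.
  fn send(self, value: T) -> Result<(), T> {
    self.0.try_send(value).map_err(|e| e.into_inner())
  }
}

impl<T> OneshotReceiver<T> {
  async fn recv(self) -> Option<T> {
    self.0.recv().await.ok()
  }
}

fn main() {
  let (send, recv) = oneshot();
  send.send(42).unwrap();
  assert_eq!(futures_lite::future::block_on(recv.recv()), Some(42));
}
```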
Luke Parker
9833911e06 Promote Request::Heartbeat from an enum variant to a struct 2025-01-09 01:41:42 -05:00
Luke Parker
465e8498c4 Make the coordinator's P2P modules their own crates 2025-01-09 01:26:25 -05:00
Luke Parker
adf20773ac Add libp2p module documentation 2025-01-09 00:40:07 -05:00
Luke Parker
295c1bd044 Document improper handling of session rotation in P2P allow list 2025-01-09 00:16:45 -05:00
Luke Parker
dda6e3e899 Limit each peer to one connection
Prevents dialing the same peer multiple times (successfully).
2025-01-09 00:06:51 -05:00
Luke Parker
75a00f2a1a Add allow_block_list to libp2p
The check in validators prevented connections from non-validators.
Non-validators could still participate in the network if they laundered their
connection through a malicious validator. allow_block_list ensures that peers,
not connections, are explicitly limited to validators.
2025-01-08 23:54:27 -05:00
Luke Parker
6cde2bb6ef Correct and document topic subscription 2025-01-08 23:16:04 -05:00
Luke Parker
20326bba73 Replace KeepAlive with ping
This is more standard and allows measuring latency.
2025-01-08 23:01:36 -05:00
Luke Parker
ce83b41712 Finish mapping Libp2p to the P2p trait API 2025-01-08 19:39:09 -05:00
Luke Parker
b2bd5d3a44 Remove Debug bound on tributary::P2p 2025-01-08 17:40:32 -05:00
Luke Parker
de2d6568a4 Actually implement the Peer abstraction for Libp2p 2025-01-08 17:40:08 -05:00
Luke Parker
fd9b464b35 Add a trait for the P2p network used in the coordinator
Moves all of the Libp2p code to a dedicated directory. Makes the Heartbeat task
abstract over any P2p network.
2025-01-08 17:01:37 -05:00
Luke Parker
376a66b000 Remove async-trait from tendermint-machine, tributary-chain 2025-01-08 16:41:11 -05:00
Luke Parker
2121a9b131 Spawn the task to select validators to dial 2025-01-07 18:17:36 -05:00
Luke Parker
419223c54e Build the swarm
Moves UpdateSharedValidatorsTask to validators.rs. While prior planned to
re-use a validators object across connecting and peer state management, the
current plan is to use an independent validators object for each to minimize
any contention. They should be built infrequently enough, and cheap enough to
update in the majority case (due to quickly checking if an update is needed),
that this is fine.
2025-01-07 18:09:25 -05:00
Luke Parker
a731c0005d Finish routing our own channel abstraction around the Swarm event stream 2025-01-07 16:51:56 -05:00
Luke Parker
f27e4e3202 Move the WIP SwarmTask to its own file 2025-01-07 16:34:19 -05:00
Luke Parker
f55165e016 Add channels to send requests/recv responses 2025-01-07 15:51:15 -05:00
Luke Parker
d9e9887d34 Run the dial task whenever we have a peer disconnect 2025-01-07 15:36:42 -05:00
Luke Parker
82e753db30 Document risk of eclipse in the dial task 2025-01-07 15:35:34 -05:00
Luke Parker
052388285b Remove TaskHandle::close
TaskHandle::close meant run_now may panic if the task was closed. Now, tasks
are only closed when all handles are dropped, causing all handles to point to
running tasks (ensuring run_now won't panic).
2025-01-07 15:26:41 -05:00
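An illustrative sketch (names assumed, using a thread where the actual crate is async): the task loops on a channel and only closes once every handle, each holding a sender, has been dropped, so `run_now` always has a live task to signal:

```rust
use std::sync::mpsc;

#[derive(Clone)]
struct TaskHandle(mpsc::Sender<()>);
impl TaskHandle {
  fn run_now(&self) {
    // Infallible while a handle exists: the loop below only exits once all
    // senders (handles) have been dropped.
    self.0.send(()).expect("task closed despite a live handle");
  }
}

fn spawn_task() -> TaskHandle {
  let (send, recv) = mpsc::channel();
  std::thread::spawn(move || {
    // `recv` errors once all handles are dropped, closing the task.
    while recv.recv().is_ok() {
      // ... run an iteration of the task ...
    }
  });
  TaskHandle(send)
}

fn main() {
  let handle = spawn_task();
  handle.run_now();
  drop(handle); // The task now closes on its own.
}
```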
Luke Parker
47a4e534ef Update serai-processor-signers to VariantSignid::Batch([u8; 32]) 2025-01-07 15:26:23 -05:00
Luke Parker
257f691277 Start filling out message handling in SwarmTask 2025-01-05 01:23:28 -05:00
Luke Parker
c6d0fb477c Inline noise into OnlyValidators
libp2p does support (noise, OnlyValidators) but it'll interpret it as either,
not a chain. This will act as the desired chain.
2025-01-05 00:55:25 -05:00
Luke Parker
96518500b1 Don't hold the shared Validators write lock while making requests to Serai 2025-01-05 00:29:11 -05:00
Luke Parker
2b8f481364 Parallelize requests within Validators::update 2025-01-05 00:17:05 -05:00
Luke Parker
479ca0410a Add commentary on the use of FuturesOrdered 2025-01-04 23:28:54 -05:00
Luke Parker
9a5a661d04 Start on the task to manage the swarm 2025-01-04 23:28:29 -05:00
Luke Parker
3daeea09e6 Only let active Serai validators connect over P2P 2025-01-04 22:21:23 -05:00
Luke Parker
a64e2004ab Dial new peers when we don't have the target amount 2025-01-04 18:04:24 -05:00
Luke Parker
f9f6d40695 Use Serai validator keys as PeerIds 2025-01-04 18:03:37 -05:00
Luke Parker
4836c1676b Don't consider the Serai set in the cosigning protocol
The Serai set SHOULD be banned from setting keys so this SHOULD be unreachable.
It's now explicitly unreachable.
2025-01-04 13:52:17 -05:00
Luke Parker
985261574c Add gossip behavior back to the coordinator 2025-01-03 14:00:20 -05:00
Luke Parker
3f3b0255f8 Tweak heartbeat task to run less often if there's no progress to be made 2025-01-03 13:59:14 -05:00
Luke Parker
5fc8500f8d Add task to heartbeat a tributary to the P2P code 2025-01-03 13:04:27 -05:00
Luke Parker
49c221cca2 Restore request-response code to the coordinator 2025-01-03 13:02:50 -05:00
Luke Parker
906e2fb669 Start cosigning on Cosign or Cosigned, not just on Cosigned 2025-01-03 10:30:39 -05:00
Luke Parker
ce676efb1f cargo update 2025-01-03 07:01:06 -05:00
Luke Parker
0a611cb155 Further flesh out tributary scanning
Renames `label` to `round` since `Label` was renamed to `SigningProtocolRound`.

Adds some more context-less validation to transactions which used to be done
within the custom decode function which was simplified via the usage of borsh.

Documents in processor-messages where the Coordinator sends each of its
messages.
2025-01-03 06:57:28 -05:00
Luke Parker
bcd3f14f4f Start work on cleaning up the coordinator's tributary handling 2025-01-02 09:11:04 -05:00
Luke Parker
6272c40561 Restore block_hash to Batch
It's not only helpful (to easily check where Serai's view of the external
network is) but it's necessary in case of a non-trivial chain fork to determine
which blockchain Serai considers canonical.
2024-12-31 18:10:47 -05:00
Luke Parker
2240a50a0c Rebroadcast cosigns for the currently evaluated session, not the latest intended
If Substrate has a block 500 with a key gen, and a block 600 with a key gen,
and the session starting on 500 never cosigns everything, everyone up-to-date
will want the cosigns for the session starting on block 500. Everyone
up-to-date will also be rebroadcasting the non-existent cosigns for the session
which has yet to start. This wouldn't cause a stall as eventually, each
individual set would cosign the latest notable block, and then that would be
explicitly synced, but it's still not the intended behavior.

We also won't even intake the cosigns for the latest intended session if it
exceeds the session we're currently evaluating. This does mean those behind on
the cosigning protocol wouldn't have rebroadcasted their historical cosigns,
and now will, but that's valuable as we don't actually know if we're behind or
up-to-date (per above posited issue).
2024-12-31 17:17:12 -05:00
Luke Parker
7e2b31e5da Clean the transaction definitions in the coordinator
Moves to borsh for serialization. No longer includes nonces anywhere in the TX.
2024-12-31 12:14:32 -05:00
Luke Parker
8c9441a1a5 Redo coordinator's Substrate scanner 2024-12-31 10:37:19 -05:00
Luke Parker
5a42f66dc2 alloy 0.9 2024-12-30 11:09:09 -05:00
Luke Parker
b584a2beab Remove old DB entry from the scanner
We read from it but never wrote to it.

It was used to check we didn't flag a block as notable after reporting it, but
it was called by the scan task as it scanned a block. We only batch/report
blocks after the scan task has scanned them, so it was very redundant.
2024-12-30 11:07:05 -05:00
Luke Parker
26ccff25a1 Split reporting Batches to the signer from the Batch test 2024-12-30 11:03:52 -05:00
Luke Parker
f0094b3c7c Rename Report task to Batch task 2024-12-30 10:49:35 -05:00
Luke Parker
458f4fe170 Move where we check if we should delay reporting of Batches 2024-12-30 10:18:38 -05:00
Luke Parker
1de8136739 Remove Session from VariantSignId::SlashReport
It's only there to make the VariantSignId unique across Sessions. By localizing
the VariantSignId to a Session, we avoid this, and can better ensure we don't
queue work for historic sessions.
2024-12-30 06:16:03 -05:00
Luke Parker
445c49f030 Have the scanner's report task ensure handovers only occur if Batches are valid
This is incomplete at this time. The logic is fine, but needs to be moved to a
distinct location to handle singular blocks which produce multiple Batches.
2024-12-30 06:11:47 -05:00
Luke Parker
5b74fc8ac1 Merge ExternalKeyForSessionToSignBatch into InfoForBatch 2024-12-30 05:34:13 -05:00
Luke Parker
e67e301fc2 Have the processor verify the published Batches match expectations 2024-12-30 05:21:26 -05:00
Luke Parker
1d50792eed Document serai-db with bounds and intent 2024-12-26 02:35:32 -05:00
Luke Parker
9c92709e62 Delay cosign acknowledgments 2024-12-26 01:04:20 -05:00
Luke Parker
3d15710a43 Only check the cosign is after its start block if faulty
We don't have consensus on the session's last block, so we shouldn't check if
the cosign is before the session ends. What matters is that the network, within
its set, claims it's still active at that block (on its view of the blockchain).
2024-12-26 00:26:48 -05:00
Luke Parker
df06da5552 Only check if the cosign is stale if it isn't faulty
If it is faulty, we want to archive it regardless.
2024-12-26 00:24:48 -05:00
Luke Parker
cef5bc95b0 Revert prior commit
An archive of all GlobalSessions is necessary to check for faults. The storage
cost is also minimal. While it should be avoided if it can be, it can't be
here.
2024-12-26 00:15:49 -05:00
Luke Parker
f336ab1ece Remove GlobalSessions DB entry
If we read the currently-being-evaluated session from the evaluator, we can
avoid paying the storage costs on all sessions ad-infinitum.
2024-12-25 23:57:51 -05:00
Luke Parker
2aebfb21af Remove serai from the cosign evaluator 2024-12-25 23:47:21 -05:00
Luke Parker
56af6c44eb Remove usage of serai from intake_cosign 2024-12-25 21:19:04 -05:00
Luke Parker
4b34be05bf rocksdb 0.23 2024-12-25 19:48:48 -05:00
Luke Parker
5b337c3ce8 Prevent a malicious validator set from overwriting a notable cosign
Also prevents panics from an invalid Serai node (removing the assumption of an
honest Serai node).
2024-12-25 02:11:05 -05:00
Luke Parker
e119fb4c16 Replace Cosigns by extending NetworksLatestCosignedBlock
Cosigns was an archive of every single cosign ever received. By scoping
NetworksLatestCosignedBlock to be by the global session, we have the latest
cosign for each network in a session (valid to replace all prior cosigns by
that network within that session, even for the purposes of fault) and
automatically have the notable cosigns indexed (as they are the latest ones
within their session). This not only saves space yet also allows optimizing
evaluation a bit.
2024-12-25 01:45:37 -05:00
Luke Parker
ef972b2658 Add cosign signature verification 2024-12-25 00:06:46 -05:00
Luke Parker
4de1a5804d Dedicated library for intending and evaluating cosigns
Not only cleans the existing cosign code but enables non-Serai-coordinators to
evaluate cosigns if they gain access to a feed of them (such as over an RPC).
This would let centralized services not only track the finalized chain yet the
cosigned chain without directly running a coordinator.

Still being wrapped up.
2024-12-22 06:41:55 -05:00
Luke Parker
147a6e43d0 Split task from serai-processor-primitives into serai-task 2024-12-19 10:08:13 -05:00
Luke Parker
066aa9eda4 cargo update
Resolves RUSTSEC-2024-0421
2024-12-12 00:45:19 -05:00
Luke Parker
9593a428e3 alloy 0.8 2024-12-11 01:02:58 -05:00
Luke Parker
5b3c5ec02b Basic Ethereum escapeHatch test 2024-12-09 02:00:17 -05:00
Luke Parker
9ccfa8a9f5 Fix deny 2024-12-08 22:01:43 -05:00
Luke Parker
18897978d0 thiserror 2.0, cargo update 2024-12-08 21:55:37 -05:00
Luke Parker
3192370484 Add Serai key confirmation to prevent rotating to an unusable key
Also updates alloy to the latest version
2024-12-08 20:42:37 -05:00
Luke Parker
8013c56195 Add/correct msrv labels 2024-12-08 18:27:15 -05:00
Luke Parker
834c16930b Add a bitmask of OutInstruction events to Executed
Allows explorers to provide clarity on what occurred.
2024-11-02 21:00:01 -04:00
Luke Parker
2920987173 Add a re-entrancy guard to Router.execute 2024-11-02 20:12:48 -04:00
Luke Parker
26230377b0 Define IRouterWithoutCollisions which Router inherits from
This ensures Router implements most of IRouterWithoutCollisions. It solely
leaves us to confirm Router implements the extensions defined in IRouter.
2024-11-02 19:10:39 -04:00
Luke Parker
2f5c0c68d0 Add selector collisions to Router to make it IRouter compatible 2024-11-02 18:13:02 -04:00
Luke Parker
8de42cc2d4 Add IRouter 2024-11-02 13:19:07 -04:00
Luke Parker
cf4123b0f8 Update how signatures are handled by the Router 2024-11-02 10:47:09 -04:00
Luke Parker
6a520a7412 Work on testing the Router 2024-10-31 02:23:59 -04:00
Luke Parker
b2ec58a445 Update serai-ethereum-processor to compile 2024-10-30 21:48:40 -04:00
Luke Parker
8e800885fb Simplify deterministic signing process in serai-processor-ethereum-primitives
This should be easier to specify/do an alternative implementation of.
2024-10-30 21:36:31 -04:00
Luke Parker
2a427382f1 Natspec, slither Deployer, Router 2024-10-30 21:35:43 -04:00
Luke Parker
e9c1235b76 Tweak how features are activated in the coins pallet tests 2024-10-30 17:15:39 -04:00
akildemir
dc1b8dfccd add coins pallet tests (#606)
* add tests

* remove unused crate

* remove serai_abi
2024-10-30 16:05:56 -04:00
Luke Parker
ce1689b325 Expand tests for ethereum-schnorr-contract 2024-10-28 18:08:31 -04:00
Luke Parker
d0201cf2e5 Remove potentially vartime (due to cache side-channel attacks) table access in dalek-ff-group and minimal-ed448 2024-10-27 08:51:19 -04:00
Luke Parker
f3d20e60b3 Remove --no-deps from docs build to fix linking to deps 2024-10-17 21:14:13 -04:00
Luke Parker
dafba81b40 Add wasm32-unknown-unknown target to docs build 2024-10-17 18:45:34 -04:00
Luke Parker
91f8ec53d9 Add build-dependencies into docs build 2024-10-17 18:29:47 -04:00
Luke Parker
fc9a4a08b8 Correct rust-docs component name 2024-10-17 18:12:35 -04:00
Luke Parker
45fadb21ac Correct paths in pages.yml 2024-10-17 18:05:54 -04:00
Luke Parker
28619fbee1 CI fixes
Mainly corrects for https://github.com/alloy-rs/alloy/issues/1510 yet also
corrects a missing machete ignore.
2024-10-17 18:02:57 -04:00
Luke Parker
bbe014c3a7 Have CI build with doc_auto_cfg 2024-10-17 17:48:14 -04:00
Luke Parker
fb3fadb3d3 Publish Rust docs to GH pages 2024-10-17 17:18:58 -04:00
Luke Parker
f481d20773 Correct licensing for .github 2024-10-17 17:17:36 -04:00
Luke Parker
599b2dec8f cargo update
Should fix the recent CI failures re: Ethereum as well.
2024-10-09 00:39:34 -04:00
akildemir
435f1d9ae1 add specific network/coin/balance types (#619)
* add specific network/coin/balance types

* misc fixes

* fix clippy

* misc fixes

* fix pr comments

* Make halting for external networks

* fix encode/decode
2024-10-06 22:16:11 -04:00
Luke Parker
0b61a75afc Add lint against string slicing
These are tricky, as slicing panics if the index doesn't fall on a UTF-8
codepoint boundary.
2024-10-02 21:58:48 -04:00
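A small demonstration of the panic the lint guards against, plus the non-panicking alternative:

```rust
fn main() {
  let s = "héllo";
  // `é` occupies bytes 1..3, so byte index 2 isn't a codepoint boundary.
  // This line would panic at runtime:
  // let _ = &s[.. 2];

  // `str::get` returns None instead of panicking.
  assert_eq!(s.get(.. 2), None);
  assert_eq!(s.get(.. 3), Some("hé"));
}
```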
Luke Parker
2aee21e507 Fix decomposition -> divisor points vartime due to branch prediction/cache rules 2024-09-29 04:19:16 -04:00
Luke Parker
d7ecab605e Update docs gems 2024-09-25 10:37:29 -04:00
Luke Parker
b3e003bd5d cargo +nightly fmt 2024-09-25 10:22:49 -04:00
Luke Parker
251a6e96e8 Constant-time divisors (#617)
* WIP constant-time implementation of the ec-divisors library

* Fix misc logic errors in poly.rs

* Remove accidentally committed test statements

* Fix ConstantTimeEq for CoefficientIndex

* Correct the iterations formula

x**3 / (0 y + x**1) would prior be considered indivisible with iterations = 0.
It is divisible however. The amount of iterations should be the amount of
coefficients within the numerator *excluding the coefficient for y**0 x**0*.

* Poly PartialEq, conditional_select_poly which checks poly structure equivalence

If the first passed argument is smaller than the latter, it's padded to the
necessary length.

Also adds code to trim the remainder as the remainder is the value modulo, so
it's very important it remains concise and workable.

* Fix the line function

It selected the case if both were identity before selecting the case if either
were identity, the latter overwriting the former.

* Final fixes re: ct_get

1) Our quotient structure does need to be of size equal to the numerator
   entirely to prevent out-of-bounds reads on it
2) We need to get from yx_coefficients if of length >=, so if the length is 1
   we can read y_pow=1 from it. If y_pow=0, and its length is 0 so it has no
   inner Vecs, we need to fall back with the guard y_pow != 0.

* Add a trim algorithm to lib.rs to prevent Polys from becoming unbearably gigantic

Our Poly algorithm is incredibly leaky. While it presumably should be improved,
we can take advantage of our known structure while constructing divisors (and
the small modulus) to simply trim out the zero coefficients leaked. This
maintains Polys in a manageable size.

* Move constant-time scalar mul gadget divisor creation from dkg to ec-divisors

Anyone creating a divisor for the scalar mul gadget should use constant-time
code, so this code should at least be in the EC gadgets crate. It's of
non-trivial complexity to deal with otherwise.

* Remove unsafe, cache timing attacks from ec-divisors
2024-09-24 17:27:05 -04:00
Jeffro
805fea52ec Add link for SCALE encoding in doc 2024-09-24 14:17:28 -07:00
j-berman
48db06f901 xmr: fix scan long encrypted amount 2024-09-21 08:33:35 -07:00
Luke Parker
e9d0a5e0ed Remove stray references to monero-wallet-util 2024-09-20 04:28:23 -04:00
Luke Parker
44d05518aa Add a public TransactionKeys struct to monero-wallet
monero-wallet ships an Eventuality, yet it's across the entire transaction. It
can't prove a single output's state with a traditional payment proof. By adding
this new object, another library can obtain the ephemeral randomness used and
do any/every proof they want regarding a transaction's outputs.

Necessary for https://github.com/serai-dex/serai/issues/599.
2024-09-20 04:26:21 -04:00
Luke Parker
23b433fe6c Fix #612 2024-09-20 04:05:17 -04:00
Luke Parker
2e57168a97 Update documentation on Timelocked 2024-09-20 04:01:55 -04:00
Luke Parker
5c6160c398 Kick monero-seed, polyseed, monero-wallet-util to https://github.com/kayabaNerve/monero-wallet-util 2024-09-20 03:24:33 -04:00
Luke Parker
9eee1d971e bitcoin-serai changes from next
Expands the NotEnoughFunds error and enables fetching the entire unsigned
transaction, not just the outputs it'll have.
2024-09-20 03:14:20 -04:00
Luke Parker
e6300847d6 monero-serai changes from 2edc2f3612 2024-09-20 02:42:46 -04:00
Luke Parker
e0a3e7bea6 Change dummy payment ID behavior on 2-output, no change
This reduces the ability to fingerprint from any observer of the blockchain to
just one of the two recipients.
2024-09-20 02:40:18 -04:00
Luke Parker
cbebaa1349 Tighten documentation on Block::number 2024-09-20 02:40:01 -04:00
Luke Parker
2c8af04781 machete, drain > mem::swap for clarity reasons 2024-09-19 23:36:32 -07:00
Luke Parker
a0ed043372 Move old processor/src directory to processor/TODO 2024-09-19 23:36:32 -07:00
Luke Parker
2984d2f8cf Misc comments 2024-09-19 23:36:32 -07:00
Luke Parker
554c5778e4 Don't track deployment block in the Router
This technically has a TOCTOU where we sync an Epoch's metadata (signifying we
did sync to that point), then check if the Router was deployed, yet at that
very moment the node resets to genesis. By ensuring the Router is deployed, we
avoid this (and don't need to track the deployment block in-contract).

Also uses a JoinSet to sync the 32 blocks in parallel.
2024-09-19 23:36:32 -07:00
Luke Parker
7e4c59a0a3 Have the Router track its deployment block
Prevents a consensus split where some nodes would drop transfers if their node
didn't think the Router was deployed, and some would handle them.
2024-09-19 23:36:32 -07:00
Luke Parker
294462641e Don't have the ERC20 collapse the top-level transfer ID to the transaction ID
Uses the ID of the transfer event associated with the top-level transfer.
2024-09-19 23:36:32 -07:00
Luke Parker
ae76749513 Transfer ETH with CREATE, not prior to CREATE
Saves a few thousand gas.
2024-09-19 23:36:32 -07:00
Luke Parker
1e1b821d34 Report a Change Output with every Eventuality to ensure we don't fall out of synchrony 2024-09-19 23:36:32 -07:00
Luke Parker
702b4c860c Add dummy fee values to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bc1bbf9951 Set a fixed fee transferred to the caller for publication
Avoids the risk of the gas used by the contract exceeding the gas presumed to
be used (causing an insolvency).
2024-09-19 23:36:32 -07:00
Luke Parker
ec9211fd84 Remove accidentally included bitcoin feature from processor-bin 2024-09-19 23:36:32 -07:00
Luke Parker
4292660eda Have the Ethereum scheduler create Batches as necessary
Also introduces the fee logic, despite it being stubbed.
2024-09-19 23:36:32 -07:00
Luke Parker
8ea5acbacb Update the Router smart contract to pay fees to the caller
The caller is paid a fixed fee per unit of gas spent. That arguably
incentivizes the publisher to raise the gas used by internal calls, yet this
doesn't effect the user UX as they'll have flatly paid the worst-case fee
already. It does pose a risk where callers are arguably incentivized to cause
transaction failures which consume all the gas, not just increased gas, yet:

1) Modern smart contracts don't error by consuming all the gas
2) This is presumably infeasible
3) Even if it was feasible, the gas fees gained presumably exceed the gas fees
   spent causing the failure

The benefit to only paying the callers for the gas used, not the gas allotted,
is it allows Serai to build up a buffer. While this should be minor, a few
cents on every transaction at best, if we ever do have any costs slip through
the cracks, it ideally is sufficient to handle those.
2024-09-19 23:36:32 -07:00
Luke Parker
1b1aa74770 Correct forge fmt config 2024-09-19 23:36:32 -07:00
Luke Parker
861a8352e5 Update to the latest bitcoin-serai 2024-09-19 23:36:32 -07:00
Luke Parker
e64827b6d7 Mark files in TODO/ with "TODO" to ensure it pops up on search 2024-09-19 23:36:32 -07:00
Luke Parker
c27aaf8658 Merge BlockWithAcknowledgedBatch and BatchWithoutAcknowledgeBatch
Offers a simpler API to the coordinator.
2024-09-19 23:36:32 -07:00
Luke Parker
53567e91c8 Read NetworkId from ScannerFeed trait, not env 2024-09-19 23:36:32 -07:00
Luke Parker
1a08d50e16 Remove unused code in the Ethereum processor 2024-09-19 23:36:32 -07:00
Luke Parker
855e53164e Finish Ethereum ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
1367e41510 Add hooks to the main loop
Lets the Ethereum processor track the first key set as soon as it's set.
2024-09-19 23:36:32 -07:00
Luke Parker
a691be21c8 Call tidy_keys upon queue_key
Prevents the potential case of the substrate task and the scan task writing to
the same storage slot at once.
2024-09-19 23:36:32 -07:00
Luke Parker
673cf8fd47 Pass the latest active key to the Block's scan function
Effectively necessary for networks on which we utilize account abstraction in
order to know what key to associate the received coins with.
2024-09-19 23:36:32 -07:00
Luke Parker
118d81bc90 Finish the Ethereum TX publishing code 2024-09-19 23:36:32 -07:00
Luke Parker
e75c4ec6ed Explicitly add an unspendable script path to the processor's generated keys 2024-09-19 23:36:32 -07:00
Luke Parker
9e628d217f cargo fmt, move ScannerFeed from String to the RPC error 2024-09-19 23:36:32 -07:00
Luke Parker
a717ae9ea7 Have the TransactionPublisher build a TxLegacy from Transaction 2024-09-19 23:36:32 -07:00
Luke Parker
98c3f75fa2 Move the Ethereum Action machine to its own file 2024-09-19 23:36:32 -07:00
Luke Parker
18178f3764 Add note on the returned top-level transfers being unordered 2024-09-19 23:36:32 -07:00
Luke Parker
bdc3bda04a Remove ethereum-serai/serai-processor-ethereum-contracts
contracts was smashed out of ethereum-serai. Both have now been smashed into
individual crates.

Creates a TODO directory with left-over test code yet to be moved.
2024-09-19 23:36:32 -07:00
Luke Parker
433beac93a Ethereum SignableTransaction, Eventuality 2024-09-19 23:36:32 -07:00
Luke Parker
8f2a9301cf Don't have the router drop transactions which may have top-level transfers
The router will now match the top-level transfer so it isn't used as the
justification for the InInstruction it's handling. This allows the theoretical
case where a top-level transfer occurs (to any entity) and an internal call
performs a transfer to Serai.

Also uses a JoinSet for fetching transactions' top-level transfers in the ERC20
crate. This does add a dependency on tokio yet improves performance, and it's
scoped under serai-processor (which is always presumed to be tokio-based).
While we could instead import futures for join_all,
https://github.com/smol-rs/futures-lite/issues/6 summarizes why that wouldn't
be a good idea. While we could prefer async-executor over tokio's JoinSet,
JoinSet doesn't share the same issues as FuturesUnordered. That means our
question is solely if we want the async-executor executor or the tokio
executor, when we've already established the Serai processor is always presumed
to be tokio-based.
2024-09-19 23:36:32 -07:00
Luke Parker
d21034c349 Add calls to get the messages to sign for the router 2024-09-19 23:36:32 -07:00
Luke Parker
381495618c Trim dead code 2024-09-19 23:36:32 -07:00
Luke Parker
ee0efe7cde Don't have the Deployer store the deployment block
Also updates how re-entrancy is handled to a more efficient and portable
mechanism.
2024-09-19 23:36:32 -07:00
Luke Parker
7feb7aed22 Hash the message before the challenge function in the Schnorr contract
Slightly more efficient.
2024-09-19 23:36:32 -07:00
Luke Parker
cc75a92641 Smash out the router library 2024-09-19 23:36:32 -07:00
Luke Parker
a7d5640642 Smash ERC20 into its own library 2024-09-19 23:36:32 -07:00
Luke Parker
ae61f3d359 forge fmt 2024-09-19 23:36:32 -07:00
Luke Parker
4bcea31c2a Break Ethereum Deployer into crate 2024-09-19 23:36:32 -07:00
Luke Parker
eb9bce6862 Remove OutInstruction's data field
It makes sense for networks which support arbitrary data to do as part of their
address. This reduces the ability to perform DoSs, achieves better performance,
and better uses the type system (as now networks we don't support data on don't
have a data field).

Updates the Ethereum address definition in serai-client accordingly
2024-09-19 23:36:32 -07:00
Luke Parker
39be23d807 Remove artifacts for serai-processor-ethereum-contracts 2024-09-19 23:36:32 -07:00
Luke Parker
3f0f4d520d Remove the Sandbox contract
If instead of intaking calls, we intake code, we can deploy a fresh contract
which makes arbitrary calls *without* attempting to build our abstraction
layer over the concept.

This should have the same gas costs, as we still have one contract deployment.
The new contract only has a constructor, so it should have no actual code and
beat the Sandbox in that regard? We do have to call into ourselves to meter the
gas, yet we already had to call into the deployed Sandbox to achieve that.

Also re-defines the OutInstruction to include tokens, implements
OutInstruction-specified gas amounts, bumps the Solidity version, and other
such misc changes.
2024-09-19 23:36:32 -07:00
Luke Parker
80ca2b780a Add tests for the premise of the Schnorr contract to the Schnorr crate 2024-09-19 23:36:32 -07:00
Luke Parker
0813351f1f OUT_DIR > artifacts 2024-09-19 23:36:32 -07:00
Luke Parker
a38d135059 rust-toolchain 1.81 2024-09-19 23:36:32 -07:00
Luke Parker
67f9f76fdf Remove publish = false 2024-09-19 23:36:32 -07:00
Luke Parker
1c5bc2259e Dedicated crate for the Schnorr contract 2024-09-19 23:36:32 -07:00
Luke Parker
bdf89f5350 Add dedicated crate for building Solidity contracts 2024-09-19 23:36:32 -07:00
Luke Parker
239127aae5 Add crate for the Ethereum contracts 2024-09-19 23:36:32 -07:00
Luke Parker
d9543bee40 Move ethereum-serai under the processor
It isn't generally usable and should be directly integrated at this point.
2024-09-19 23:36:32 -07:00
Luke Parker
8746b54a43 Don't use a different address for DAI in test
anvil will let us deploy to the existing address.
2024-09-19 23:36:32 -07:00
Luke Parker
7761798a78 Outline the Ethereum processor
This was only half-finished to begin with, unfortunately...
2024-09-19 23:36:32 -07:00
Luke Parker
72a18bf8bb Smart Contract Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0616085109 Monero Planner
Finishes the Monero processor.
2024-09-19 23:36:32 -07:00
Luke Parker
e23176deeb Change dummy payment ID behavior on 2-output, no change
This reduces the ability to fingerprint from any observer of the blockchain to
just one of the two recipients.
2024-09-19 23:36:32 -07:00
Luke Parker
5551521e58 Tighten documentation on Block::number 2024-09-19 23:36:32 -07:00
Luke Parker
a2d9aeaed7 Stub out Scheduler in the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
e1ad897f7e Allow scheduler's creation of transactions to be async and error
I don't love this, but it's the only way to select decoys without using a local
database. While the prior commit added such a database, the performance of it
presumably wasn't viable, and while TODOs marked the needed improvements, it
was still messy with an immense scope re: any auditing.

The relevant scheduler functions now take `&self` (intentional, as all
mutations should be via the `&mut impl DbTxn` passed). The calls to `&self` are
expected to be completely deterministic (as usual).
2024-09-19 23:36:32 -07:00
Luke Parker
2edc2f3612 Add a database of all Monero outs into the processor
Enables synchronous transaction creation (which requires synchronous decoy
selection).
2024-09-19 23:36:32 -07:00
Luke Parker
e56af7fc51 Monero time_for_block, dust 2024-09-19 23:36:32 -07:00
Luke Parker
947e1067d9 Monero Processor scan, check_for_eventuality_resolutions 2024-09-19 23:36:32 -07:00
Luke Parker
b4e94f3d51 cargo fmt signers/scanner 2024-09-19 23:36:32 -07:00
Luke Parker
1b39138472 Define subaddress indexes to use
(1, 0) is the external address. (2, *) are the internal addresses.
2024-09-19 23:36:32 -07:00
Luke Parker
e78236276a Remove async-trait from processor/
Part of https://github.com/serai-dex/issues/607.
2024-09-19 23:36:32 -07:00
Luke Parker
2c4c33e632 Misc continuances on the Monero processor 2024-09-19 23:36:32 -07:00
Luke Parker
02409c5735 Correct Multisig Rotation to use WINDOW_LENGTH where proper 2024-09-19 23:36:32 -07:00
Luke Parker
f2cf03cedf Monero processor primitives 2024-09-19 23:36:32 -07:00
Luke Parker
0d4c8cf032 Use a local DB channel for sending to the message-queue
The provided message-queue queue function runs until it succeeds. This means
sending to the message-queue will no longer potentially block for an arbitrary
amount of time, as sending messages is just writing them to a DB.
2024-09-19 23:36:32 -07:00
Luke Parker
b6811f9015 serai-processor-bin
Moves the coordinator loop out of serai-bitcoin-processor, completing it.

Fixes a potential race condition in the message-queue regarding multiple
sockets sending messages at once.
2024-09-19 23:36:32 -07:00
Luke Parker
fcd5fb85df Add binary search to find the block to start scanning from 2024-09-19 23:36:32 -07:00
Luke Parker
3ac0265f07 Add section documenting the safety of txindex upon reorganizations 2024-09-19 23:36:32 -07:00
Luke Parker
9b8c8f8231 Misc tidying of serai-db calls 2024-09-19 23:36:32 -07:00
Luke Parker
59fa49f750 Continue filling out main loop
Adds generics to the db_channel macro, fixes the bug where it needed at least
one key.
2024-09-19 23:36:32 -07:00
Luke Parker
723f529659 Note better message structure in messages 2024-09-19 23:36:32 -07:00
Luke Parker
73af09effb Add note to signers on reducing disk IO 2024-09-19 23:36:32 -07:00
Luke Parker
4054e44471 Start on the new processor main loop 2024-09-19 23:36:32 -07:00
Luke Parker
a8159e9070 Bitcoin Key Gen 2024-09-19 23:36:32 -07:00
Luke Parker
b61ba9d1bb Adjust Bitcoin processor layout 2024-09-19 23:36:32 -07:00
Luke Parker
776cbbb9a4 Misc changes in response to prior two commits 2024-09-19 23:36:32 -07:00
Luke Parker
76a3f3ec4b Add an anyone-can-pay output to every Bitcoin transaction
Resolves #284.
2024-09-19 23:36:32 -07:00
Luke Parker
93c7d06684 Implement presumed_origin
Before we yield a block for scanning, we save all of the contained script
public keys. Then, when we want the address credited for creating an output,
we read the script public key of the spent output from the database.

Fixes #559.
2024-09-19 23:36:32 -07:00
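A minimal sketch of the described flow, with a `HashMap` standing in for the database and `(txid, vout)` pairs standing in for outpoints; all names are illustrative:

```rust
use std::collections::HashMap;

type Outpoint = ([u8; 32], u32);
type ScriptPubKey = Vec<u8>;

struct Index {
  scripts: HashMap<Outpoint, ScriptPubKey>,
}

impl Index {
  // Called before a block is yielded for scanning: save the script public
  // key of every output the block creates.
  fn index_block(&mut self, outputs: &[(Outpoint, ScriptPubKey)]) {
    for (outpoint, script) in outputs {
      self.scripts.insert(*outpoint, script.clone());
    }
  }

  // The presumed origin of an output is the script public key of an output
  // spent by its creating transaction, read back from the index.
  fn presumed_origin(&self, spent: &Outpoint) -> Option<ScriptPubKey> {
    self.scripts.get(spent).cloned()
  }
}

fn main() {
  let mut index = Index { scripts: HashMap::new() };
  let outpoint = ([0; 32], 0);
  index.index_block(&[(outpoint, vec![0x51])]);
  assert_eq!(index.presumed_origin(&outpoint), Some(vec![0x51]));
}
```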
Luke Parker
4cb838e248 Bitcoin processor lib.rs -> main.rs 2024-09-19 23:36:32 -07:00
Luke Parker
c988b7cdb0 Bitcoin TransactionPublisher 2024-09-19 23:36:32 -07:00
Luke Parker
017aab2258 Satisfy Scheduler for Bitcoin 2024-09-19 23:36:32 -07:00
Luke Parker
ba3a6f9e91 Bitcoin ScannerFeed 2024-09-19 23:36:32 -07:00
Luke Parker
e36b671f37 Remove bound that WINDOW_LENGTH < CONFIRMATIONS
It's unnecessary and not valuable.
2024-09-19 23:36:32 -07:00
Luke Parker
2d4b775b6e Add bitcoin Block trait impl 2024-09-19 23:36:32 -07:00
Luke Parker
247cc8f0cc Bitcoin Output/Transaction definitions 2024-09-19 23:36:32 -07:00
Luke Parker
0ccf71df1e Remove old signer impls 2024-09-19 23:36:32 -07:00
Luke Parker
8aba71b9c4 Add CosignerTask to signers, completing it 2024-09-19 23:36:32 -07:00
Luke Parker
46c12c0e66 SlashReport signing and signature publication 2024-09-19 23:36:32 -07:00
Luke Parker
3cc7b49492 Strongly type SlashReport, populate cosign/slash report tasks with work 2024-09-19 23:36:32 -07:00
Luke Parker
0078858c1c Tidy messages, publish all Batches to the coordinator
Prior, we published SignedBatches, yet Batches are necessary for auditing
purposes.
2024-09-19 23:36:32 -07:00
Luke Parker
a3cb514400 Have the coordinator task publish Batches 2024-09-19 23:36:32 -07:00
Luke Parker
ed0221d804 Add BatchSignerTask
Uses a wrapper around AlgorithmMachine Schnorrkel to let the message be &[].
2024-09-19 23:36:32 -07:00
Luke Parker
4152bcacb2 Replace scanner's BatchPublisher with a pair of DB channels 2024-09-19 23:36:32 -07:00
Luke Parker
f07ec7bee0 Route the coordinator, fix race conditions in the signers library 2024-09-19 23:36:32 -07:00
Luke Parker
7484eadbbb Expand task management
These extensions are necessary for the signers task management.
2024-09-19 23:36:32 -07:00
Luke Parker
59ff944152 Work on the higher-level signers API 2024-09-19 23:36:32 -07:00
Luke Parker
8f848b1abc Tidy transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
100c80be9f Finish transaction signing task with TX rebroadcast code 2024-09-19 23:36:32 -07:00
Luke Parker
a353f9e2da Further work on transaction signing 2024-09-19 23:36:32 -07:00
Luke Parker
b62fc3a1fa Minor work on the transaction signing task 2024-09-19 23:36:32 -07:00
Luke Parker
8380653855 Add empty serai-processor-signers library
This will replace the signers still in the monolithic Processor binary.
2024-09-19 23:36:32 -07:00
Luke Parker
b50b889918 Split processor into bitcoin-processor, ethereum-processor, monero-processor 2024-09-19 23:36:32 -07:00
Luke Parker
d570c1d277 Move additional_key.rs to serai-processor-view-keys
I don't love this. I wanted to simply add this function to `processor/key-gen`,
but then anyone who wants a view key needs to pull in Bulletproofs which is a
mess of code. They'd also be subject to an AGPL licensed library.

This is so small it should be a primitive elsewhere, yet there is no primitives
library eligible. Maybe serai-client since that has the code to make
transactions to Serai (and will have this as a dependency)? Except then the
processor has to import serai-client when this rewrite removed it as a
dependency.
2024-09-19 23:36:32 -07:00
Luke Parker
2da24506a2 Remove vast swaths of legacy code in the processor 2024-09-19 23:36:32 -07:00
Luke Parker
6e9cb74022 Add non-transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0c1aec29bb Finish routing output flushing
Completes the transaction-chaining scheduler.
2024-09-19 23:36:32 -07:00
Luke Parker
653ead1e8c Finish the tree logic in the transaction-chaining scheduler
Also completes the DB functions, makes Scheduler never instantiated, and
ensures tree roots have change outputs.
2024-09-19 23:36:32 -07:00
Luke Parker
8ff019265f Near-complete version of the tree algorithm in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
0601d47789 Work on the tree logic in the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
ebef38d93b Ensure the transaction-chaining scheduler doesn't accumulate the same output multiple times 2024-09-19 23:36:32 -07:00
Luke Parker
75b4707002 Add input aggregation in the transaction-chaining scheduler
Also handles some other misc in it.
2024-09-19 23:36:32 -07:00
Luke Parker
3c787e005f Fix bug in the scanner regarding forwarded output amounts
We'd report the amount originally received, minus 2x the cost to aggregate,
regardless of the amount successfully forwarded. We should've reduced it to the
amount successfully forwarded, if that was smaller, in case the cost to
forward exceeded the aggregation cost.
2024-09-19 23:36:32 -07:00
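A sketch of the corrected arithmetic, with hypothetical names and amounts: the reported amount must be capped by what was actually forwarded, rather than solely derived from the amount originally received.

```rust
fn reported_amount(original: u64, cost_to_aggregate: u64, forwarded: u64) -> u64 {
  // The prior code effectively reported this, regardless of `forwarded`
  let presumed = original.saturating_sub(2 * cost_to_aggregate);
  // If forwarding cost more than aggregating, `forwarded` is the smaller value
  presumed.min(forwarded)
}

fn main() {
  // Forwarding cost exceeded the aggregation cost, so the forwarded amount rules
  assert_eq!(reported_amount(100, 5, 85), 85);
  // Otherwise, the original-minus-aggregation-costs amount rules
  assert_eq!(reported_amount(100, 5, 95), 90);
}
```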
Luke Parker
f11a6b4ff1 Better document the forwarded output flow 2024-09-19 23:36:32 -07:00
Luke Parker
fadc88d2ad Add scheduler-primitives
The main benefit is that, whatever scheduler is in use, we now have a single
API to receive TXs to sign (which is of value to the TX signer crate we'll
inevitably build).
2024-09-19 23:36:32 -07:00
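A hedged sketch of that single API (names hypothetical, not the scheduler-primitives crate's actual types): every scheduler pushes into one channel of signable TXs, and the signer solely drains it.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};

#[derive(Clone)]
struct SignableChannel<T> {
  queue: Arc<Mutex<VecDeque<T>>>,
}

impl<T> SignableChannel<T> {
  fn new() -> Self {
    Self { queue: Arc::new(Mutex::new(VecDeque::new())) }
  }
  // Called by any scheduler implementation
  fn send(&self, tx: T) {
    self.queue.lock().unwrap().push_back(tx);
  }
  // Called by the signing task, oblivious to which scheduler produced the TX
  fn try_recv(&self) -> Option<T> {
    self.queue.lock().unwrap().pop_front()
  }
}
```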
Luke Parker
c88ebe985e Outline of the transaction-chaining scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
6deb60513c Expand primitives/scanner with niceties needed for the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
bd277e7032 Add processor/scheduler/utxo/primitives
Includes the necessary signing functions and the fee amortization logic.

Moves transaction-chaining to utxo/transaction-chaining.
2024-09-19 23:36:32 -07:00
Luke Parker
fc765bb9e0 Add crate for the transaction-chaining Scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
13b74195f7 Don't have acknowledge_batch immediately run
`acknowledge_batch` can only be run if we know what the Batch should be. If we
don't know what the Batch should be, we have to block until we do.
Specifically, we need the block number associated with the Batch.

Instead of blocking over the Scanner API, the Scanner API now solely queues
actions. A new task intakes those actions once we can. This ensures we can
intake the entire Substrate chain, even if our daemon for the external network
is stalled at its genesis block.

All of this for the block number alone seems ridiculous. To go from the block
hash in the Batch to the block number without this task, we'd at least need the
index task to be up to date (still requiring blocking or an API returning
ephemeral errors).
2024-09-19 23:36:32 -07:00
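A sketch of the queueing pattern described above, with hypothetical types: `acknowledge_batch` never blocks, and a separate task intakes queued acknowledgements once the index task has provided the needed block number.

```rust
use std::collections::{HashMap, VecDeque};

struct Acknowledgement {
  block_hash: [u8; 32],
}

#[derive(Default)]
struct Scanner {
  queued: VecDeque<Acknowledgement>,
  // Filled in by the index task as blocks are indexed
  block_numbers: HashMap<[u8; 32], u64>,
}

impl Scanner {
  // Never blocks, regardless of how far along the external network's daemon is
  fn acknowledge_batch(&mut self, ack: Acknowledgement) {
    self.queued.push_back(ack);
  }

  // Run by a dedicated task
  fn intake_queued(&mut self) {
    while let Some(ack) = self.queued.front() {
      // If the block number isn't known yet, try again on a later iteration
      let Some(&number) = self.block_numbers.get(&ack.block_hash) else { break };
      // ... process the acknowledgement at block `number` ...
      let _ = number;
      self.queued.pop_front();
    }
  }
}
```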
Luke Parker
f21838e0d5 Replace acknowledge_block with acknowledge_batch 2024-09-19 23:36:32 -07:00
Luke Parker
76cbe6cf1e Have acknowledge_block take in the results of the InInstructions executed
If any failed, the scanner now creates a Burn for the return.
2024-09-19 23:36:32 -07:00
Luke Parker
5999f5d65a Route the DB w.r.t. forwarded outputs' information 2024-09-19 23:36:32 -07:00
Luke Parker
d429a0bae6 Remove unused ID -> number lookup 2024-09-19 23:36:32 -07:00
Luke Parker
775824f373 Impl ScanData serialization in the DB 2024-09-19 23:36:32 -07:00
Luke Parker
41a74cb513 Check a queued key has never been queued before
Re-queueing should only be possible with a malicious supermajority, and it
breaks indexing by the key.
2024-09-19 23:36:32 -07:00
Luke Parker
e26da1ec34 Have the Eventuality task drop outputs which aren't ours and aren't worth aggregating
We could drop these entirely, yet there's some degree of utility in being able
to add coins to Serai in this manner.
2024-09-19 23:36:32 -07:00
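A sketch of that policy under an assumed cost model (names hypothetical): outputs which aren't ours are only kept if they exceed the cost to aggregate them.

```rust
fn worth_keeping(ours: bool, amount: u64, cost_to_aggregate: u64) -> bool {
  ours || (amount > cost_to_aggregate)
}

fn main() {
  // An output which isn't ours, worth less than its aggregation cost, is dropped
  assert!(!worth_keeping(false, 10, 25));
  // Yet one worth more than its aggregation cost is kept, adding coins to Serai
  assert!(worth_keeping(false, 100, 25));
}
```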
Luke Parker
7266e7f7ea Add note on why LifetimeStage is monotonic 2024-09-19 23:36:32 -07:00
Luke Parker
a8b9b7bad3 Add sanity checks we haven't prior reported an InInstruction for/accumulated an output 2024-09-19 23:36:32 -07:00
Luke Parker
2ca7fccb08 Pass the lifetime information to the scheduler
Enables it to decide which keys to use for fulfillment/change.
2024-09-19 23:36:32 -07:00
Luke Parker
4f6d91037e Call flush_key 2024-09-19 23:36:32 -07:00
Luke Parker
8db76ed67c Add key management to the scheduler 2024-09-19 23:36:32 -07:00
Luke Parker
920303e1b4 Add helper to intake Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
9f4b28e5ae Clarify output-to-self to output-to-Serai
The only requirement is that it's to an active key which is being reported for.
2024-09-19 23:36:32 -07:00
Luke Parker
f9d02d43c2 Route burns through the scanner 2024-09-19 23:36:32 -07:00
Luke Parker
8ac501028d Add API to publish Batches with
This doesn't have to be abstract; we can generate the message and use the
message-queue API directly. Yet this abstraction should help with testing.
2024-09-19 23:36:32 -07:00
Luke Parker
612c67c537 Cache the cost to aggregate 2024-09-19 23:36:32 -07:00
Luke Parker
04a971a024 Fill in various DB functions 2024-09-19 23:36:32 -07:00
Luke Parker
738636c238 Have Scanner::new spawn tasks 2024-09-19 23:36:32 -07:00
Luke Parker
65f3f48517 Add ReportDb 2024-09-19 23:36:32 -07:00
Luke Parker
7cc07d64d1 Make report.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
fdfe520f9d Add ScanDb 2024-09-19 23:36:32 -07:00
Luke Parker
77ef25416b Make scan.rs a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7c1025dbcb Implement key retirement 2024-09-19 23:36:32 -07:00
Luke Parker
a771fbe1c6 Logs, documentation, misc 2024-09-19 23:36:32 -07:00
Luke Parker
9cebdf7c68 Add sorts for safety even upon non-determinism 2024-09-19 23:36:32 -07:00
Luke Parker
75251f04b4 Use a channel for the InInstructions
It's still unclear how we'll handle refunding failed InInstructions at this
time. Presumably by extending the InInstruction channel with the associated
output ID?
2024-09-19 23:36:32 -07:00
Luke Parker
6196642beb Add a DbChannel between scan and eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
2bddf00222 Don't expose IndexDb throughout the crate 2024-09-19 23:36:32 -07:00
Luke Parker
9ab8ba0215 Add dedicated Eventuality DB and stub missing fns 2024-09-19 23:36:32 -07:00
Luke Parker
33e0c85f34 Make Eventuality a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
1e8f4e6156 Make a dedicated IndexDb 2024-09-19 23:36:32 -07:00
Luke Parker
66f3428051 Make index a folder, not a file 2024-09-19 23:36:32 -07:00
Luke Parker
7e71840822 Add helper methods
Checks fetched blocks are the indexed blocks. Sorts scanned outputs so they
aren't subject to an implicit order which may be non-deterministic (such as if
handled by a threadpool).
2024-09-19 23:36:32 -07:00
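A minimal sketch of the determinism guard (the output representation here is hypothetical): sort by a canonical key, here an output ID, so threadpool scheduling can't influence the resulting order.

```rust
fn canonicalize(mut outputs: Vec<([u8; 32], u64)>) -> Vec<([u8; 32], u64)> {
  outputs.sort_by(|a, b| a.0.cmp(&b.0));
  outputs
}

fn main() {
  let unordered = vec![([2; 32], 5), ([1; 32], 7)];
  assert_eq!(canonicalize(unordered)[0].1, 7);
}
```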
Luke Parker
b65dbacd6a Move ContinuallyRan into primitives
I'm unsure where else it'll be used within the processor, yet it's generally
useful and I don't want to make a dedicated crate yet.
2024-09-19 23:36:32 -07:00
Luke Parker
2fcd9530dd Add a callback to accumulate outputs and return the new Eventualities 2024-09-19 23:36:32 -07:00
Luke Parker
379780a3c9 Flesh out eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
945f31dfc7 Have the scan flag blocks with change/branch/forwarded as notable 2024-09-19 23:36:32 -07:00
Luke Parker
d5d1fc3eea Flesh out report task 2024-09-19 23:36:32 -07:00
Luke Parker
fd12cc0213 Finish scan task 2024-09-19 23:36:32 -07:00
Luke Parker
ce805c8cc8 Correct compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
bc0cc5a754 Decide flow between scan/eventuality/report
Scan now only handles External outputs, with an associated essay going over
why. Scan directly creates the InInstruction (prior planned to be done in
Report), and Eventuality is declared to end up yielding the outputs.

That will require making the Eventuality flow two-stage. One stage to evaluate
existing Eventualities and yield outputs, and one stage to incorporate new
Eventualities before advancing the scan window.
2024-09-19 23:36:32 -07:00
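A sketch of that two-stage flow with stub types (all names hypothetical): stage one evaluates existing Eventualities and yields outputs, stage two incorporates new Eventualities, and only then does the scan window advance.

```rust
struct Output(u64);
struct Block {
  number: u64,
}
struct Eventuality {
  resolves_at: u64,
  yields: u64,
}

fn eventuality_iteration(
  existing: &mut Vec<Eventuality>,
  newly_created: Vec<Eventuality>,
  block: &Block,
  scan_window_start: &mut u64,
) -> Vec<Output> {
  // Stage one: evaluate existing Eventualities, yielding the resolved outputs
  let outputs =
    existing.iter().filter(|e| e.resolves_at == block.number).map(|e| Output(e.yields)).collect();
  // Stage two: incorporate new Eventualities before advancing the scan window
  existing.extend(newly_created);
  *scan_window_start = block.number + 1;
  outputs
}
```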
Luke Parker
f2ee4daf43 Add Eventuality back to processor primitives
Also splits crate into modules.
2024-09-19 23:36:32 -07:00
Luke Parker
4e29678799 Add bounds for the eventuality task 2024-09-19 23:36:32 -07:00
Luke Parker
74d3075dae Document expectations on Eventuality task and correct code determining the block safe to scan/report 2024-09-19 23:36:32 -07:00
Luke Parker
155ad48f4c Handle dust 2024-09-19 23:36:32 -07:00
Luke Parker
951872b026 Differentiate BlockHeader from Block 2024-09-19 23:36:32 -07:00
Luke Parker
2b47feafed Correct misc compilation errors 2024-09-19 23:36:32 -07:00
Luke Parker
a2717d73f0 Flesh out new scanner a bit more
Adds the task to mark blocks safe to scan, and outlines the task to report
blocks.
2024-09-19 23:36:32 -07:00
Luke Parker
8763ef23ed Definition and delineation of tasks within the scanner
Also defines primitives for the processor.
2024-09-19 23:36:32 -07:00
Luke Parker
57a0ba966b Extend serai-db with support for generic keys/values 2024-09-19 23:36:32 -07:00
Luke Parker
e843b4a2a0 Move scanner.rs to scanner/lib.rs 2024-09-19 23:36:32 -07:00
Luke Parker
2f3bd7a02a Cleanup DB handling a bit in key-gen/attempt-manager 2024-09-19 23:36:32 -07:00
Luke Parker
1e8a9ec5bd Smash out the signer
An abstract signer, to be reused for the transactions, the batches, the
cosigns, the slash reports, everything. It has a minimal API itself, intending
to be as clear as possible.
2024-09-19 23:36:32 -07:00
Luke Parker
2f29c91d30 Smash key-gen out of processor
Resolves some bad assumptions made regarding keys being unique or not.
2024-09-19 23:36:32 -07:00
Luke Parker
f3b91bd44f Smash key-gen into independent crate 2024-09-19 23:36:32 -07:00
Luke Parker
e4e4245ee3 One Round DKG (#589)
* Upstream GBP, divisor, circuit abstraction, and EC gadgets from FCMP++

* Initial eVRF implementation

Not quite done yet. It needs to communicate the resulting points and proofs to
extract them from the Pedersen Commitments in order to return those, and then
be tested.

* Add the openings of the PCs to the eVRF as necessary

* Add implementation of secq256k1

* Make DKG Encryption a bit more flexible

No longer requires the use of an EncryptionKeyMessage, and allows pre-defined
keys for encryption.

* Make NUM_BITS an argument for the field macro

* Have the eVRF take a Zeroizing private key

* Initial eVRF-based DKG

* Add embedwards25519 curve

* Inline the eVRF into the DKG library

Due to how we're handling share encryption, we'd either need two circuits or to
dedicate this circuit to the DKG. The latter makes sense at this time.

* Add documentation to the eVRF-based DKG

* Add paragraph claiming robustness

* Update to the new eVRF proof

* Finish routing the eVRF functionality

Still needs errors and serialization, along with a few other TODOs.

* Add initial eVRF DKG test

* Improve eVRF DKG

Updates how we calculate verification shares, improves performance when
extracting multiple sets of keys, and adds more to the test for it.

* Start using a proper error for the eVRF DKG

* Resolve various TODOs

Supports recovering multiple key shares from the eVRF DKG.

Inlines two loops to save 2**16 iterations.

Adds support for creating a constant time representation of scalars < NUM_BITS.

* Ban zero ECDH keys, document non-zero requirements

* Implement eVRF traits, all the way up to the DKG, for secp256k1/ed25519

* Add Ristretto eVRF trait impls

* Support participating multiple times in the eVRF DKG

* Only participate once per key, not once per key share

* Rewrite processor key-gen around the eVRF DKG

Still a WIP.

* Finish routing the new key gen in the processor

Doesn't touch the tests, coordinator, nor Substrate yet.
`cargo +nightly fmt && cargo +nightly-2024-07-01 clippy --all-features -p serai-processor`
does pass.

* Deduplicate and better document in processor key_gen

* Update serai-processor tests to the new key gen

* Correct amount of yx coefficients, get processor key gen test to pass

* Add embedded elliptic curve keys to Substrate

* Update processor key gen tests to the eVRF DKG

* Have set_keys take signature_participants, not removed_participants

Now no one is removed from the DKG. Only `t` people publish the key however.

Uses a BitVec for an efficient encoding of the participants.

* Update the coordinator binary for the new DKG

This does not yet update any tests.

* Add sensible Debug to key_gen::[Processor, Coordinator]Message

* Have the DKG explicitly declare how to interpolate its shares

Removes the hack for MuSig where we multiply keys by the inverse of their
Lagrange interpolation factor (see the interpolation sketch after this commit).

* Replace Interpolation::None with Interpolation::Constant

Allows the MuSig DKG to keep the secret share as the original private key,
enabling deriving FROST nonces consistently regardless of the MuSig context.

* Get coordinator tests to pass

* Update spec to the new DKG

* Get clippy to pass across the repo

* cargo machete

* Add an extra sleep to ensure expected ordering of `Participation`s

* Update orchestration

* Remove bad panic in coordinator

It expected ConfirmationShare to be n-of-n, not t-of-n.

* Improve documentation on functions

* Update TX size limit

We now no longer have to support the ridiculous case of having 49 DKG
participations within a 101-of-150 DKG. It does remain quite high due to
needing to _sign_ so many times. It may be optimal for parties with multiple
key shares to independently send their preprocesses/shares (despite the
overhead that'll cause with signatures and the transaction structure).

* Correct error in the Processor spec document

* Update a few comments in the validator-sets pallet

* Send/Recv Participation one at a time

Sending all, then attempting to receive all in an expected order, wasn't working
even with notable delays between sending messages. This points to the mempool
not working as expected...

* Correct ThresholdKeys serialization in modular-frost test

* Updating existing TX size limit test for the new DKG parameters

* Increase time allowed for the DKG on the GH CI

* Correct construction of signature_participants in serai-client tests

Fault identified by akil.

* Further contextualize DkgConfirmer by ValidatorSet

Caught by a safety check that we wouldn't reuse preprocesses across messages.
That raises the question of whether we were previously reusing preprocesses
(when reusing keys)? Except that'd have caused a variety of signing failures
(suggesting we had some staggered timing avoiding it in practice, but yes,
this was possible in theory).

* Add necessary calls to set_embedded_elliptic_curve_key in coordinator set rotation tests

* Correct shimmed setting of a secq256k1 key

* cargo fmt

* Don't use `[0; 32]` for the embedded keys in the coordinator rotation test

The key_gen function expects the random values already decided.

* Big-endian secq256k1 scalars

Also restores the prior, safer, Encryption::register function.
2024-09-19 21:43:26 -04:00
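For illustration of the interpolation being made explicit, a toy sketch over a small prime field (not the dkg crate's actual field or API): `lagrange_at_zero` is the factor each share is multiplied by when reconstructing the secret at x = 0, and the removed MuSig hack multiplied keys by the inverse of this factor.

```rust
const P: u64 = 2_147_483_647; // 2^31 - 1, a Mersenne prime

fn pow_mod(mut base: u64, mut exp: u64) -> u64 {
  let mut res = 1;
  base %= P;
  while exp > 0 {
    if (exp & 1) == 1 {
      res = (res * base) % P;
    }
    base = (base * base) % P;
    exp >>= 1;
  }
  res
}

// Modular inverse via Fermat's little theorem
fn inv(a: u64) -> u64 {
  pow_mod(a, P - 2)
}

// The Lagrange coefficient, evaluated at 0, for participant `i` among `participants`
fn lagrange_at_zero(i: u64, participants: &[u64]) -> u64 {
  participants
    .iter()
    .filter(|&&j| j != i)
    .fold(1, |acc, &j| (((acc * (j % P)) % P) * inv((j + P - i) % P)) % P)
}

fn main() {
  // Shares at x = 1, 2, 3 of f(x) = 5 + 3x (a degree-1 secret sharing of 5)
  let xs = [1, 2, 3];
  let f = |x: u64| (5 + (3 * x)) % P;
  let secret = xs.iter().map(|&x| (f(x) * lagrange_at_zero(x, &xs)) % P).sum::<u64>() % P;
  assert_eq!(secret, 5);
}
```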
Luke Parker
669b2fef72 Remove test_tweak_keys
What it tests no longer applies since tweak_keys now introduces an unspendable
script path.
2024-09-19 21:43:00 -04:00
Luke Parker
3af430d8de Use the IETF transcript in bitcoin-serai, not RecommendedTranscript 2024-09-19 21:13:08 -04:00
This is more likely to be interoperable in the long term.
2024-09-19 21:13:08 -04:00
Luke Parker
dfb5a053ae Resolve #611 2024-09-19 20:58:33 -04:00
Luke Parker
bdcc061bb4 Add ScannableBlock abstraction in the RPC
Makes scanning synchronous, only erring upon a malicious node or an
unplanned-for hard fork.
2024-09-13 04:38:49 -04:00
Luke Parker
2c7148d636 Add machete exception for monero-clsag to monero-wallet 2024-09-13 02:39:43 -04:00
Luke Parker
6b270bc6aa Remove async-trait from monero-rpc 2024-09-13 02:36:53 -04:00
Luke Parker
875c669a7a Remove monero-serai multisig for just monero-[clsag, wallet] multisig 2024-09-12 18:41:35 -04:00
Luke Parker
0d399ecb28 Remove unused error in monero-address 2024-09-12 18:41:35 -04:00
Luke Parker
88440807e1 Monero v0.18.3.4 (#605)
* Monero v0.18.3.4

* Correct `check_weight_and_fee` call

* Restore empty test files so CI isn't borked
2024-09-06 01:43:31 -04:00
Luke Parker
c1a9256cc5 dockertest 0.5, correct errors from prior update commit 2024-09-05 23:31:45 -04:00
Luke Parker
0d5756ffcf cargo update, upgrade alloy
Removes a dated proc-macro-crate patch.
2024-09-05 17:03:23 -04:00
Luke Parker
ac7b98daac Remove tokio dependency from tendermint-machine
Indirects it via a minimal wrapper which can be trivially patched.
2024-09-05 16:30:27 -04:00
Luke Parker
efc7d70ab1 Clarify when wallet2 will decrypt payment IDs with citations 2024-09-05 15:50:36 -04:00
Luke Parker
4e834873d3 Lints from latest nightly
We can't adopt it due to some issue with building the runtime, but these are
good to have.
2024-09-01 16:33:44 -04:00
akildemir
a506d74d69 move economic security into its own pallet (#596)
* move economic security into its own pallet

* fix deny

* Update Cargo.toml, .github for the new crates

* Remove unused import

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-31 18:55:42 -04:00
Boog900
394db44b30 Monero: fix signature hash for V1 txs (#598)
* fix signature hash for V1 txs

* fix CI
2024-08-23 20:34:54 -04:00
akildemir
a2df54dd6a merge genesis complete block with genesis ended 2024-08-15 08:15:40 -07:00
akildemir
efc45c391b update emissions pallet author email 2024-08-15 08:12:47 -07:00
akildemir
cccc1fc7e6 Implement block emissions (#551)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* implement block emissions

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to latest develop

* fix rotation test

* fix licensing

* add fast-epoch feature

* only create the pool when adding liquidity first time

* add initial reward era test

* test whole pre-economic-security emissions

* fix clippy

* add swap-to-staked-sri feature

* rebase changes

* fix tests

* Remove accidentally committed ETH ABI files

* fix some pr comments

* Finish up fixing pr comments

* exclude SRI from is_allowed check

* Misc changes

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-08-14 23:12:04 -04:00
akildemir
bf1c493d9a add missing prevotes (#590)
* add missing prevotes

* remove the TODO

* add missing current step checks

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-08-14 15:00:48 -04:00
Luke Parker
3de1e4dee2 Remove stray file in docs/ 2024-08-05 06:52:15 -04:00
Luke Parker
2591b5ade9 Update Gemfile.lock to silence a rexml disclosure 2024-08-03 02:57:56 -04:00
akildemir
e6620963c7 update author email 2024-08-02 01:50:33 -07:00
Luke Parker
d5205ce231 Update dependencies
Resolves a yanked version of bytemuck.
2024-08-01 04:06:09 -04:00
Luke Parker
0f6878567f Remove a pair of unused structs/deps
Caught by the most recent nightly.
2024-08-01 01:36:10 -04:00
Luke Parker
880565cb81 Rust 1.80
Preserves the fn accessors within the Monero crates so that we can use statics
in some cfgs yet not all (in order to provide support for more low-memory
devices) with the exception of `H` (which truly should be cached).
2024-07-26 19:28:10 -07:00
Luke Parker
6f34c2ff77 Remove unused git allowance for monero-rs 2024-07-19 23:51:05 -04:00
akildemir
1493f49416 Implement genesis liquidity protocol (#545)
* add genesis liquidity implementation

* add missing deposit event

* fix CI issues

* minor fixes

* make math safer

* fix fmt

* make remove liquidity an authorized call

* implement setting initial values for coins

* add genesis liquidity test & misc fixes

* update to latest develop

* fix rotation test

* Finish merging develop

* Remove accidentally committed ETH files

* fix pr comments

* further bug fixes

* fix last pr comments

* tidy up

* Misc

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-07-18 19:30:19 -04:00
Luke Parker
2ccb0cd90d Correct version of ruby update is run with
Hopefully finally resolves the site build failures.
2024-07-18 16:47:59 -04:00
Luke Parker
b33a6487aa Rename DKG specified in FROST from FROST to PedPoP 2024-07-18 16:41:31 -04:00
Luke Parker
491500057b Update Ruby version used in GH workflow 2024-07-18 16:09:01 -04:00
Luke Parker
d9f85fab26 Update lockfiles
Resolves a dependabot alert about the Ruby used to generate the docs site.
2024-07-18 15:18:08 -04:00
Luke Parker
7d2d739042 Rename the coins folder to networks (#583)
* Rename the coins folder to networks

Ethereum isn't a coin. It's a network.

Resolves #357.

* More renames of coins -> networks in orchestration

* Correct paths in tests/

* cargo fmt
2024-07-18 15:16:45 -04:00
akildemir
40cc180853 add transaction and crypto unit tests 2024-07-17 16:26:31 -07:00
Luke Parker
2aac6f6998 Improve usage of constants in coordinator p2p 2024-07-17 06:54:54 -04:00
Luke Parker
149c2a4437 Use non-pruned nodes in verify-chain 2024-07-17 06:54:26 -04:00
Luke Parker
e772b8a5f7 #560 take two, now that #560 has been reverted (#561)
* Clear upons upon round, not block

* Cache the proposal for a round

* Rebase onto develop, which reverted this PR, and re-apply this PR

* Set participation upon participation instead of constantly recalculating

* Cache message instances

* Add missing txn commit

Identified by @akildemir.

* Correct clippy lint identified upon rebase

* Fix tendermint chain sync (#581)

* fix p2p Reqres protocol

* stabilize tributary chain sync

* fix pr comments

---------

Co-authored-by: akildemir <34187742+akildemir@users.noreply.github.com>
2024-07-16 19:42:15 -04:00
Luke Parker
c0200df75a Add missing feature flag to dalek-ff-group 2024-07-15 21:50:43 -04:00
Luke Parker
9955ef54a5 Apply bitcoin fee per vsize, not per weight unit
This enables more precision.
2024-07-15 17:37:04 -07:00
Luke Parker
8e7e61adbd Respect maximum amount of outs per request 2024-07-14 20:28:10 -04:00
Luke Parker
0cb24dde02 cargo update
Resolves failing deny.
2024-07-14 20:27:36 -04:00
Luke Parker
97bfb183e8 Correct typo in coordinator
Identified by akil a while ago.
2024-07-14 19:35:45 -04:00
Luke Parker
85fc31fd82 Have monero-wallet use Transaction<Pruned>, not Transaction 2024-07-14 19:30:50 -04:00
Luke Parker
7b8bcae396 Add support for pruned transactions to monero-serai 2024-07-13 00:29:02 -04:00
Luke Parker
70fe52437c Have RPC tests run sequentially
Also corrects links pointing to branches to point to commits.
2024-07-12 22:09:46 -04:00
Luke Parker
ba657e23d1 Have a public monero-rpc type be properly formatted
It was public as the raw RPC response. It's more polite to handle the
formatting in the RPC, and allows us to return a better structure.
2024-07-12 04:14:05 -04:00
Luke Parker
32c24917c4 Correct tests which should've failed to expect failures now that they fail 2024-07-12 03:09:48 -04:00
Luke Parker
4ba961b2cb Cite source for obscure wallet protocol rules 2024-07-12 02:19:21 -04:00
Luke Parker
c59be46e2f Optimize Monero BPs 2024-07-12 02:18:57 -04:00
Luke Parker
2c165e19ae Bitcoin 27.1 2024-07-12 02:18:43 -04:00
Luke Parker
ee10692b23 Fix handling of output distribution
We previously didn't handle that the output distribution only starts after a
specific block.
2024-07-11 18:06:51 -04:00
Luke Parker
7a68b065e0 Redo the Bulletproofs impl
Uses the IP-impl from the FCMP++ work.
2024-07-10 21:05:23 -04:00
Luke Parker
3ddf1eec0c Fix no-std builds for monero-wallet 2024-07-09 02:17:57 -04:00
Luke Parker
84f0e6c26e Add additional documentation 2024-07-08 20:33:00 -04:00
Luke Parker
5bb3256d1f Support subaddresses as change outputs 2024-07-08 20:00:21 -04:00
Luke Parker
774424b70b Differentiate Rpc from DecoyRpc
Enables using a locally backed decoy DB.
2024-07-08 18:14:56 -04:00
Luke Parker
ed662568e2 Clean decoy selection code 2024-07-08 02:51:06 -04:00
Luke Parker
b744ac9a76 Clean decoy selection 2024-07-08 02:38:01 -04:00
Luke Parker
d7f7f69738 Remove the DecoySelection trait 2024-07-08 00:30:42 -04:00
Luke Parker
a2c3aba82b Clean the Monero lib for auditing (#577)
* Remove unsafe creation of dalek_ff_group::EdwardsPoint in BP+

* Rename Bulletproofs to Bulletproof, since they are a single Bulletproof

Also bifurcates prove with prove_plus, and adds a few documentation items.

* Make CLSAG signing private

Also adds a bit more documentation and does a bit more tidying.

* Remove the distribution cache

It's a notable bandwidth/performance improvement, yet it's not ready. We need a
dedicated Distribution struct which is managed by the wallet and passed in.
While we can do that now, it's not currently worth the effort.

* Tidy Borromean/MLSAG a tad

* Remove experimental feature from monero-serai

* Move amount_decryption into EncryptedAmount::decrypt

* Various RingCT doc comments

* Begin crate smashing

* Further documentation, start shoring up API boundaries of existing crates

* Document and clean clsag

* Add a dedicated send/recv CLSAG mask struct

Abstracts the types used internally.

Also moves the tests from monero-serai to monero-clsag.

* Smash out monero-bulletproofs

Removes usage of dalek-ff-group/multiexp for curve25519-dalek.

Makes compiling in the generators an optional feature.

Adds a structured batch verifier which should be notably more performant.

Documentation and clean up still necessary.

* Correct no-std builds for monero-clsag and monero-bulletproofs

* Tidy and document monero-bulletproofs

I still don't like the impl of the original Bulletproofs...

* Error if missing documentation

* Smash out MLSAG

* Smash out Borromean

* Tidy up monero-serai as a meta crate

* Smash out RPC, wallet

* Document the RPC

* Improve docs a bit

* Move Protocol to monero-wallet

* Incomplete work on using Option to remove panic cases

* Finish documenting monero-serai

* Remove TODO on reading pseudo_outs for AggregateMlsagBorromean

* Only read transactions with one Input::Gen or all Input::ToKey

Also adds a helper to fetch a transaction's prefix.

* Smash out polyseed

* Smash out seed

* Get the repo to compile again

* Smash out Monero addresses

* Document cargo features

Credit to @hinto-janai for adding such sections to their work on documenting
monero-serai in #568.

* Fix deserializing v2 miner transactions

* Rewrite monero-wallet's send code

I have yet to redo the multisig code and the builder. This should be much
cleaner, albeit slower due to redoing work.

This compiles with clippy --all-features. I have to finish the multisig/builder
for --all-targets to work (and start updating the rest of Serai).

* Add SignableTransaction Read/Write

* Restore Monero multisig TX code

* Correct invalid RPC type def in monero-rpc

* Update monero-wallet tests to compile

Some are _consistently_ failing due to the inputs we attempt to spend being too
young. I'm unsure what's up with that. Most seem to pass _consistently_,
implying it's not a random issue but rather some configuration/env aspect.

* Clean and document monero-address

* Sync rest of repo with monero-serai changes

* Represent height/block number as a u32

* Diversify ViewPair/Scanner into ViewPair/GuaranteedViewPair and Scanner/GuaranteedScanner

Also cleans the Scanner impl.

* Remove non-small-order view key bound

Guaranteed addresses are in fact guaranteed even with this, as prefixing key
images causes a zeroed ECDH to not zero the shared key.

* Finish documenting monero-serai

* Correct imports for no-std

* Remove possible panic in monero-serai on systems < 32 bits

This was done by requiring the system's usize be able to represent a certain number.

* Restore the reserialize chain binary

* fmt, machete, GH CI

* Correct misc TODOs in monero-serai

* Have Monero test runner evaluate an Eventuality for all signed TXs

* Fix a pair of bugs in the decoy tests

Unfortunately, this test is still failing.

* Fix remaining bugs in monero-wallet tests

* Reject torsioned spend keys to ensure we can spend the outputs we scan

* Tidy inlined epee code in the RPC

* Correct the accidental swap of stagenet/testnet address bytes

* Remove unused dep from processor

* Handle Monero fee logic properly in the processor

* Document v2 TX/RCT output relation assumed when scanning

* Adjust how we mine the initial blocks due to some CI test failures

* Fix weight estimation for RctType::ClsagBulletproof TXs

* Again increase the amount of blocks we mine prior to running tests

* Correct the if check about when to mine blocks on start

Finally fixes the lack of decoy candidates failures in CI.

* Run Monero on Debian, even for internal testnets

Change made due to a segfault incurred when locally testing.

https://github.com/monero-project/monero/issues/9141 for the upstream.

* Don't attempt running tests on the verify-chain binary

Adds a minimum XMR fee to the processor and runs fmt.

* Increase minimum Monero fee in processor

I'm truly unsure why this is required right now.

* Distinguish fee from necessary_fee in monero-wallet

If there's no change, the fee is the difference of the inputs to the outputs.
The prior code wouldn't check that this amount is greater than or equal to the
necessary fee, and returning the would-be change amount as the fee isn't
necessarily helpful.

Now the fee is validated in such cases and the necessary fee is returned,
enabling operating off of that (see the sketch after this commit).

* Restore minimum Monero fee from develop
2024-07-07 06:57:18 -04:00
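A sketch of the described validation, with hypothetical names: without change, the implicit fee is inputs minus outputs, which must cover the necessary fee, and the necessary fee (not the implicit fee) is what's returned.

```rust
fn validate_fee(inputs: u64, outputs: u64, necessary_fee: u64) -> Result<u64, &'static str> {
  let implicit_fee = inputs.checked_sub(outputs).ok_or("outputs exceed inputs")?;
  if implicit_fee < necessary_fee {
    return Err("implicit fee less than the necessary fee");
  }
  // Return the necessary fee, not the (potentially larger) implicit fee
  Ok(necessary_fee)
}

fn main() {
  assert_eq!(validate_fee(100, 90, 8), Ok(8));
  assert!(validate_fee(100, 95, 8).is_err());
}
```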
Luke Parker
703c6a2358 Typo corrections meant to be added in the prior commit 2024-07-04 02:13:34 -04:00
Luke Parker
52bb918cc9 Ensure TotalAllocatedStake is set for the first set 2024-07-04 02:11:04 -04:00
GitHub Actions
ba244e8090 Update nightly 2024-07-02 00:43:14 -04:00
akildemir
3e99d68cfe fix total allocated stake update in wrong time (#518)
* fix total allocated stake update in wrong time

* Restore mid-set increases

* Correct typo I introduced

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-24 07:41:25 -04:00
akildemir
4d9c2df38c Add coordinator rotation test (#535)
* add node side unit test

* complete rotation test for all networks

* set up the fast-epoch docker file

* fix pr comments

* add coordinator side rotation test

* bug fixes

* Remove EPOCH_INTERVAL

* Minor nits

* Add note on origin of publish_tx function in tests/coordinator

* Correct ThresholdParams assert_eq

* fmt

* Correct detection of handover completion

* Restore key gen message match from develop

It was modified in response to the handover completion bug, which has now been
resolved.

* bug fixes

* Correct invalid constant

* Typo fixes

* remove selecting participant to remove at random

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-06-21 08:39:17 -04:00
Luke Parker
8ab6f9c36e alloy 0.1 2024-06-19 12:39:47 -04:00
Luke Parker
253cf3253d Correct hash for 1.79.0-slim-bookworm docker image 2024-06-13 19:00:01 -04:00
Luke Parker
03445b3020 Update httparse, as 1.9.2 was yanked 2024-06-13 16:49:58 -04:00
Luke Parker
9af111b4aa Rust 1.79, cargo update 2024-06-13 15:57:08 -04:00
Luke Parker
41ce5b1738 Use the serai_abi::Call in the actual Transaction type
We prior required they had the same encoding, yet this ensures they do by
making them one and the same. This does require a large, ugly From/TryInto
block which is deemed preferable for moving this more and more into syntax
(from semantics).

Further improvements (notably re: Extra) is possible, and this already lets us
strip some members from the Call enum.
2024-06-03 23:38:22 -04:00
Luke Parker
2a05cf3225 June 2024 nightly update
Replaces #571.
2024-06-01 21:46:49 -04:00
Luke Parker
f4147c39b2 bitcoin 0.32.1 2024-05-31 01:02:43 -04:00
rlking
cd69f3b9d6 Check if wasm was built by container exit code and state instead of local mountpoint (#570)
* Check if the serai wasm was built successfully by verifying the build container's status code and state, instead of checking the volume mountpoint locally

* Use a log statement for which wasm is used

* Minor typo fix

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-05-25 20:33:23 -04:00
Luke Parker
1d2beb3ee4 Ethereum relayer server
Causes the send test to pass for the processor.
2024-05-22 18:50:11 -04:00
Luke Parker
ac709b2945 Correct processor docker tests encoding of Bitcoin addresses in OutInstructions 2024-05-21 08:49:57 -04:00
Luke Parker
a473800c26 More aggressive cargo update
Adds a few deps which are fine. Patches an old parking_lot(_core) version.
2024-05-21 08:07:32 -04:00
Luke Parker
09aac20293 Set the BufReader capacity to 0
Fixes issues with bitcoin.

We only use a BufReader as it's the only way to use a std::io::Read generic as
a bitcoin::io::Read object.
2024-05-21 07:06:13 -04:00
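A sketch of why capacity 0 behaves well here, using only std (the bridge to bitcoin::io is omitted): BufReader's read bypasses its internal buffer whenever the destination is at least the buffer's capacity, which with a capacity of 0 is always, so the wrapper never reads ahead past what the caller asked for.

```rust
use std::io::{BufReader, Cursor, Read};

fn main() {
  let underlying = Cursor::new(vec![1u8, 2, 3, 4]);
  let mut wrapper = BufReader::with_capacity(0, underlying);

  let mut byte = [0u8; 1];
  wrapper.read_exact(&mut byte).unwrap();
  assert_eq!(byte, [1]);

  // Nothing was buffered, so the underlying reader still has the remaining bytes
  let mut rest = vec![];
  wrapper.into_inner().read_to_end(&mut rest).unwrap();
  assert_eq!(rest, vec![2, 3, 4]);
}
```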
Luke Parker
f93214012d Use ScriptBuf over Address where possible 2024-05-21 06:44:59 -04:00
Luke Parker
400319cd29 cargo update
Also updates our gems
2024-05-21 06:09:04 -04:00
Luke Parker
a0a7d63dad bitcoin 0.32 2024-05-21 05:27:01 -04:00
Luke Parker
fb7d12ee6e Short-circuit test_no_deadlock_in_multisig_completed if preconditions not met 2024-05-21 03:20:44 -04:00
Luke Parker
11ec9e3535 Ethereum processor docker tests, barring send
We need the TX publication relay thingy for send to work (though that is the
point the test fails at).
2024-05-21 00:29:33 -04:00
Luke Parker
ae8a27b876 Add our own alloy meta module to deduplicate alloy prefixes 2024-05-14 01:42:18 -04:00
Luke Parker
af79586488 Fill out Ethereum functions in the processor Docker tests 2024-05-14 01:33:55 -04:00
Luke Parker
d27d93480a Get processor signer/wallet tests working for Ethereum
They are handicapped by the fact Ethereum self-sends don't show up as outputs,
yet that's fundamental (unless we add a *harmful* fallback function).
2024-05-11 00:11:14 -04:00
Luke Parker
02c4417a46 Update no_deadlock_in_multisig test to set the initial key in the DB 2024-05-10 15:57:05 -04:00
Luke Parker
79a79db399 Update dockertest specification 2024-05-10 15:50:07 -04:00
Luke Parker
0c9dd5048e Processor scanner tests for Ethereum 2024-05-10 14:06:43 -04:00
Luke Parker
5501de1f3a Update to the latest alloy
Also makes various tweaks as necessary.
2024-05-10 14:06:43 -04:00
GitHub Actions
21123590bb Update nightly 2024-05-01 01:10:58 -04:00
Luke Parker
bc1dec7991 Move TRANSACTION_MESSAGE to 1 2024-04-28 04:04:53 -04:00
Luke Parker
cef63a631a Add a dev ethereum Docker setup
Also adds untested Dockerfiles for reth, lighthouse, and nimbus.
2024-04-24 09:30:54 -04:00
Luke Parker
d57fef8999 Slight documentation tweaks 2024-04-24 03:55:23 -04:00
Luke Parker
d1474e9188 Route top-level transfers through to the processor 2024-04-24 03:38:31 -04:00
Luke Parker
b39c751403 Reduce target peers a bit 2024-04-23 12:59:45 -04:00
Luke Parker
cc7202e0bf Correct recv to try_recv when exhausting channel 2024-04-23 12:40:21 -04:00
Luke Parker
19e68f7f75 Correct selection of to-try peers to prevent infinite loops when to-try < target 2024-04-23 12:04:30 -04:00
Luke Parker
d94c9a4a5e Use a constant for the target amount of peers 2024-04-23 11:59:51 -04:00
Luke Parker
43dc036660 Use a HashSet for which networks to try peer finding for
Prevents a flood of retries from individually failed attempts within a batch of
peer connection attempts.
2024-04-23 10:55:56 -04:00
Luke Parker
95591218bb Remove cbor 2024-04-23 07:01:07 -04:00
Luke Parker
7dd587a864 Inline broadcast_raw now that it doesn't have multiple callers 2024-04-23 06:44:21 -04:00
Luke Parker
023275bcb6 Properly diversify ReqResMessageKind/GossipMessageKind 2024-04-23 06:37:41 -04:00
Luke Parker
8cef9eff6f Move keep alive, heartbeat, block to request/response 2024-04-23 05:44:58 -04:00
Luke Parker
b5e22dca8f Correct no-std Monero after moving from ToString to Display 2024-04-23 05:25:08 -04:00
Luke Parker
a41329c027 Update clippy now that redundant imports has been reverted 2024-04-23 04:31:27 -04:00
Luke Parker
a25e6330bd Remove DLEq proofs from CLSAG multisig
1) Removes the key image DLEq on the Monero side of things, as the produced
   signature share serves as a DLEq for it.
2) Removes the nonce DLEqs from modular-frost as they're unnecessary for
   monero-serai. Updates documentation accordingly.

Without the proof the nonces are internally consistent, the produced signatures
from modular-frost can be argued as a batch-verifiable CP93 DLEq (R0, R1, s),
or as a GSP for the CP93 DLEq statement (which naturally produces (R0, R1, s)).

The lack of proving the nonces consistent does make the process weaker, yet
it's also unnecessary for the class of protocols this is intended to service.
To provide DLEqs for the nonces would be to provide PoKs for the nonce
commitments (in the traditional Schnorr case).
2024-04-21 23:01:32 -04:00
Luke Parker
558a2bfa46 Slight tweaks to BP+ 2024-04-21 21:51:44 -04:00
Luke Parker
c73acb3d62 Log on new tendermint message debug -> trace 2024-04-21 19:28:21 -04:00
Luke Parker
933b17aa91 Revert coordinator/tributary to fd4f247917
#560 is causing notable CI failures, with its logs including slashes at 10x
the prior rate.
2024-04-21 10:16:12 -04:00
Luke Parker
5fa7e3d450 Line for prior commit 2024-04-21 08:55:29 -04:00
Luke Parker
749d783b1e Comment the insanely aggressive timeout future trace log 2024-04-21 08:53:35 -04:00
Luke Parker
5a3ea80943 Add missing continue to prevent dialing a node we're connected to 2024-04-21 08:36:52 -04:00
Luke Parker
fddbebc7c0 Replace expect with debug log 2024-04-21 08:02:34 -04:00
Luke Parker
e01848aa9e Correct boolean NOT on is_fresh_dial 2024-04-21 07:30:31 -04:00
Luke Parker
320b5627b5 Retry if initial dials fail, not just upon disconnect 2024-04-21 07:26:16 -04:00
Luke Parker
be7780e69d Restart coordinator peer finding upon disconnections 2024-04-21 07:02:49 -04:00
Luke Parker
0ddbaefb38 Correct timing around when we verify precommit signatures 2024-04-21 06:12:01 -04:00
Luke Parker
0f0db14f05 Ethereum Integration (#557)
* Clean up Ethereum

* Consistent contract address for deployed contracts

* Flesh out Router a bit

* Add a Deployer for DoS-less deployment

* Implement Router-finding

* Use CREATE2 helper present in ethers

* Move from CREATE2 to CREATE

Bit more streamlined for our use case.

* Document ethereum-serai

* Tidy tests a bit

* Test updateSeraiKey

* Use encodePacked for updateSeraiKey

* Take in the block hash to read state during

* Add a Sandbox contract to the Ethereum integration

* Add retrieval of transfers from Ethereum

* Add inInstruction function to the Router

* Augment our handling of InInstructions events with a check the transfer event also exists

* Have the Deployer error upon failed deployments

* Add --via-ir

* Make get_transaction test-only

We only used it to get transactions to confirm the resolution of Eventualities.
Eventualities need to be modularized. By introducing the dedicated
confirm_completion function, we remove the need for a non-test get_transaction
AND begin this modularization (by no longer explicitly grabbing a transaction
to check with).

* Modularize Eventuality

Almost fully-deprecates the Transaction trait for Completion. Replaces
Transaction ID with Claim.

* Modularize the Scheduler behind a trait

* Add an extremely basic account Scheduler

* Add nonce uses, key rotation to the account scheduler

* Only report the account Scheduler empty after transferring keys

Also ban payments to the branch/change/forward addresses.

* Make fns reliant on state test-only

* Start of an Ethereum integration for the processor

* Add a session to the Router to prevent updateSeraiKey replaying

This would only happen if an old key was rotated to again, which would require
n-of-n collusion (already ridiculous and a valid fault attributable event). It
just clarifies the formal arguments.

* Add a RouterCommand + SignMachine for producing it to coins/ethereum

* Ethereum which compiles

* Have branch/change/forward return an option

Also defines a UtxoNetwork extension trait for MAX_INPUTS.

* Make external_address exclusively a test fn

* Move the "account" scheduler to "smart contract"

* Remove ABI artifact

* Move refund/forward Plan creation into the Processor

We create forward Plans in the scan path, and need to know their exact fees in
the scan path. This requires adding a somewhat wonky shim_forward_plan method
so we can obtain a Plan equivalent to the actual forward Plan for fee reasons,
yet don't expect it to be the actual forward Plan (which may be distinct if
the Plan pulls from the global state, such as with a nonce).

Also properly types a Scheduler addendum such that the SC scheduler isn't
cramming the nonce to use into the N::Output type.

* Flesh out the Ethereum integration more

* Two commits ago, into the **Scheduler, not Processor

* Remove misc TODOs in SC Scheduler

* Add constructor to RouterCommandMachine

* RouterCommand read, pairing with the prior added write

* Further add serialization methods

* Have the Router's key included with the InInstruction

This does not use the key at the time of the event. This uses the key at the
end of the block for the event. It's much simpler than getting the full event
streams for each and checking when they interlace.

This does not read the state. Every block, this makes a request for every
single key update and simply chooses the last one (see the sketch after this
commit). This allows pruning state, only keeping the event tree. Ideally, we'd
also introduce a cache to reduce the cost of the filter (small in events
yielded, long in blocks searched).

Since Serai doesn't have any forwarding TXs, nor Branches, nor change, all of
our Plans should solely have payments out, and there's no expectation of a Plan
being made under one key broken by it being received by another key.

* Add read/write to InInstruction

* Abstract the ABI for Call/OutInstruction in ethereum-serai

* Fill out signable_transaction for Ethereum

* Move ethereum-serai to alloy

Resolves #331.

* Use the opaque sol macro instead of generated files

* Move the processor over to the now-alloy-based ethereum-serai

* Use the ecrecover provided by alloy

* Have the SC use nonce for rotation, not session (an independent nonce which wasn't synchronized)

* Always use the latest keys for SC scheduled plans

* get_eventuality_completions for Ethereum

* Finish fleshing out the processor Ethereum integration as needed for serai-processor tests

This doesn't support any actual deployments, not even the ones simulated by
serai-processor-docker-tests.

* Add alloy-simple-request-transport to the GH workflows

* cargo update

* Clarify a few comments and make one check more robust

* Use a string for 27.0 in .github

* Remove optional from no-longer-optional dependencies in processor

* Add alloy to git deny exception

* Fix no longer optional specification in processor's binaries feature

* Use a version of foundry from 2024

* Correct fetching Bitcoin TXs in the processor docker tests

* Update rustls to resolve RUSTSEC warnings

* Use the monthly nightly foundry, not the deleted daily nightly
2024-04-21 06:02:12 -04:00
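A hedged sketch of the "last key update of the block" selection (representation hypothetical; the real flow filters the Router's key-update events over the RPC): rather than reading contract state, take the final update within the block, falling back to the prior key.

```rust
fn key_as_of_block_end(key_updates_in_block: &[[u8; 32]], prior_key: [u8; 32]) -> [u8; 32] {
  key_updates_in_block.last().copied().unwrap_or(prior_key)
}
```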
Luke Parker
43083dfd49 Remove redundant log from tendermint lib 2024-04-21 05:32:41 -04:00
Luke Parker
523d2ac911 Rewrite tendermint's message handling loop to much more clearly match the paper (#560)
* Rewrite tendermint's message handling loop to much more clearly match the paper

No longer checks relevant branches upon messages, yet all branches upon any
state change. This is slower, yet easier to review and likely without one or
two rare edge cases.

When reviewing, please see page 5 of https://arxiv.org/pdf/1807.04938.pdf.
Lines from the specified algorithm can be found in the code by searching for
"// L".

* Sane rebroadcasting of consensus messages

Instead of broadcasting the last n messages on the Tributary side of things, we
now have the machine rebroadcast the message tape for the current block.

* Only rebroadcast messages which didn't error in some way

* Only rebroadcast our own messages for tendermint
2024-04-21 05:30:31 -04:00
Luke Parker
fd4f247917 Correct log which didn't work as intended 2024-04-20 19:54:16 -04:00
Luke Parker
ac9e356af4 Correct log targets in tendermint-machine 2024-04-20 19:15:15 -04:00
Luke Parker
bba7d2a356 Better logs in tendermint-machine 2024-04-20 18:13:44 -04:00
Luke Parker
4c349ae605 Redo how tendermint-machine checks if messages were prior sent
Instead of saving, for every sent message, if it was sent or not, we track the
latest block/round participated in. These two keys are comprehensive to all
prior block/rounds. We then use three keys for the latest round's
proposal/prevote/precommit, enabling tracking current state as necessary to
prevent equivocations with just 5 keys.

The storage of the latest three messages also enables proper rebroadcasting of
the current round (not implemented in this commit).
2024-04-20 18:10:51 -04:00
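A sketch of that five-key scheme with a map standing in for the DB (key and value encodings here are hypothetical): two keys track the latest block/round participated in, and three hold the latest round's proposal/prevote/precommit.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct SentTracker {
  db: HashMap<&'static str, Vec<u8>>,
}

impl SentTracker {
  // Keys 1-2: the latest block/round participated in. Any earlier
  // (block, round) is comprehensively known to have been participated in.
  fn participated(&mut self, block: u64, round: u32) {
    self.db.insert("latest_block", block.to_le_bytes().to_vec());
    self.db.insert("latest_round", round.to_le_bytes().to_vec());
  }

  // Keys 3-5: the latest round's proposal/prevote/precommit. These prevent
  // equivocation and enable rebroadcasting the current round.
  fn set_message(&mut self, step: &'static str, msg: Vec<u8>) {
    debug_assert!(matches!(step, "proposal" | "prevote" | "precommit"));
    self.db.insert(step, msg);
  }
}
```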
Luke Parker
a4428761f7 Bitcoin 27.0 2024-04-19 08:00:17 -04:00
Luke Parker
940e9553fd Add missing crates to GH workflows 2024-04-19 06:12:33 -04:00
Luke Parker
593aefd229 Extend time in sync test 2024-04-18 02:51:38 -04:00
Luke Parker
5830c2463d fmt 2024-04-18 02:03:28 -04:00
Luke Parker
bcc88c3e86 Don't broadcast added blocks
Online validators should inherently have them. Offline validators will receive
from the sync protocol.

This does somewhat eliminate the class of nodes who would follow the blockchain
(without validating it), yet that's fine for the performance benefit.
2024-04-18 01:48:11 -04:00
Luke Parker
fea16df567 Only reply to heartbeats after a certain distance 2024-04-18 01:39:34 -04:00
Luke Parker
4960c3222e Ensure we don't reply to stale heartbeats 2024-04-18 01:24:38 -04:00
Luke Parker
6b4df4f2c0 Only have some nodes respond to latent heartbeats
Also only respond if they're more than 2 blocks behind to minimize redundant
sending of blocks.
2024-04-17 21:54:10 -04:00
Luke Parker
dac46c8d7d Correct comment in VS pallet 2024-04-12 20:38:31 -04:00
expiredhotdog
db2e8376df use multiscalar_mul for CLSAG (#553)
* use multiscalar_mul for CLSAG

* use multiscalar_mul for CLSAG signing

* use OnceLock for basepoint precomputation
2024-04-12 19:52:56 -04:00
Luke Parker
33dd412e67 Add bootnode code prior used in testnet-internal (#554)
* Add bootnode code prior used in testnet-internal

Also performs the devnet/testnet differentation done since the testnet branch.

* Fixes

* fmt
2024-04-12 00:38:40 -04:00
Luke Parker
fcad402186 cargo update
Resolves deny error caused by h2.
2024-04-10 06:34:01 -04:00
Boog900
ab4d79628d fix CLSAG verification.
We were not setting c1 to the last calculated c during verification, instead keeping it set to the one provided in the signature.
2024-04-10 05:59:06 -04:00
984 changed files with 60488 additions and 100786 deletions

View File

@@ -1,6 +1,6 @@
 MIT License
-Copyright (c) 2022-2023 Luke Parker
+Copyright (c) 2022-2025 Luke Parker
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: 24.0.1
+    default: "30.0"
 runs:
   using: "composite"
   steps:
     - name: Bitcoin Daemon Cache
       id: cache-bitcoind
-      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: bitcoin.tar.gz
         key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
@@ -37,4 +37,4 @@ runs:
     - name: Bitcoin Regtest Daemon
       shell: bash
-      run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/bitcoin/run.sh -daemon
+      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/bitcoin/run.sh -txindex -daemon

View File

@@ -7,13 +7,20 @@ runs:
     - name: Remove unused packages
       shell: bash
       run: |
-        sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
+        # Ensure the repositories are synced
+        sudo apt update -y
+        # Actually perform the removals
+        sudo apt remove -y "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
         sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
         sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
+        sudo apt remove -y --allow-remove-essential -f shim-signed *python3*
+        # This removal command requires the prior removals due to unmet dependencies otherwise
         sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
-        sudo apt autoremove -y
-        sudo apt clean
-        docker system prune -a --volumes
+        # Reinstall python3 as a general dependency of a functional operating system
+        sudo apt install -y python3 --fix-missing
       if: runner.os == 'Linux'
     - name: Remove unused packages
@@ -31,19 +38,45 @@ runs:
       shell: bash
       run: |
         if [ "$RUNNER_OS" == "Linux" ]; then
-          sudo apt install -y ca-certificates protobuf-compiler
+          sudo apt install -y ca-certificates protobuf-compiler libclang-dev
         elif [ "$RUNNER_OS" == "Windows" ]; then
           choco install protoc
         elif [ "$RUNNER_OS" == "macOS" ]; then
-          brew install protobuf
+          brew install protobuf llvm
+          HOMEBREW_ROOT_PATH=/opt/homebrew # Apple Silicon
+          if [ $(uname -m) = "x86_64" ]; then HOMEBREW_ROOT_PATH=/usr/local; fi # Intel
+          ls $HOMEBREW_ROOT_PATH/opt/llvm/lib | grep "libclang.dylib" # Make sure this installed `libclang`
+          echo "DYLD_LIBRARY_PATH=$HOMEBREW_ROOT_PATH/opt/llvm/lib:$DYLD_LIBRARY_PATH" >> "$GITHUB_ENV"
         fi
     - name: Install solc
       shell: bash
       run: |
-        cargo install svm-rs
-        svm install 0.8.25
-        svm use 0.8.25
+        cargo +1.91.1 install svm-rs --version =0.5.21
+        svm install 0.8.29
+        svm use 0.8.29
-    # - name: Cache Rust
-    #   uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43
+    - name: Remove preinstalled Docker
+      shell: bash
+      run: |
+        docker system prune -a --volumes
+        sudo apt remove -y *docker*
+        # Install uidmap which will be required for the explicitly installed Docker
+        sudo apt install uidmap
+      if: runner.os == 'Linux'
+    - name: Update system dependencies
+      shell: bash
+      run: |
+        sudo apt update -y
+        sudo apt upgrade -y
+        sudo apt autoremove -y
+        sudo apt clean
+      if: runner.os == 'Linux'
+    - name: Install rootless Docker
+      uses: docker/setup-docker-action@e61617a16c407a86262fb923c35a616ddbe070b3 # 4.6.0
+      with:
+        rootless: true
+        set-host: true
+      if: runner.os == 'Linux'

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.1
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Wallet RPC Cache
       id: cache-monero-wallet-rpc
-      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: monero-wallet-rpc
         key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}

View File

@@ -5,14 +5,14 @@ inputs:
   version:
     description: "Version to download and run"
     required: false
-    default: v0.18.3.1
+    default: v0.18.4.4
 runs:
   using: "composite"
   steps:
     - name: Monero Daemon Cache
       id: cache-monerod
-      uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
+      uses: actions/cache@0400d5f644dc74513175e3cd8d07132dd4860809 # 4.2.4
       with:
         path: /usr/bin/monerod
         key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
@@ -43,4 +43,4 @@ runs:
     - name: Monero Regtest Daemon
       shell: bash
-      run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/monero/run.sh --detach
+      run: PATH=$PATH:/usr/bin ./orchestration/dev/networks/monero/run.sh --detach

View File

@@ -5,12 +5,12 @@ inputs:
   monero-version:
     description: "Monero version to download and run as a regtest node"
     required: false
-    default: v0.18.3.1
+    default: v0.18.4.4
   bitcoin-version:
     description: "Bitcoin version to download and run as a regtest node"
     required: false
-    default: 24.0.1
+    default: "30.0"
 runs:
   using: "composite"
@@ -19,9 +19,9 @@ runs:
       uses: ./.github/actions/build-dependencies
     - name: Install Foundry
-      uses: foundry-rs/foundry-toolchain@cb603ca0abb544f301eaed59ac0baf579aa6aecf
+      uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
       with:
-        version: nightly-09fe3e041369a816365a020f715ad6f94dbce9f2
+        version: v1.5.0
         cache: false
     - name: Run a Monero Regtest Node

View File

@@ -1 +1 @@
-nightly-2024-02-07
+nightly-2025-12-01

View File

@@ -17,7 +17,7 @@ jobs:
   test-common:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -27,5 +27,8 @@ jobs:
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p std-shims \
             -p zalloc \
+            -p patchable-async-sleep \
             -p serai-db \
-            -p serai-env
+            -p serai-env \
+            -p serai-task \
+            -p simple-request

View File

@@ -7,7 +7,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "coordinator/**"
       - "orchestration/**"
@@ -18,7 +18,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "coordinator/**"
       - "orchestration/**"
@@ -31,10 +31,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
      - name: Install Build Dependencies
        uses: ./.github/actions/build-dependencies
      - name: Run coordinator Docker tests
-        run: cd tests/coordinator && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-coordinator-tests

View File

@@ -19,7 +19,7 @@ jobs:
   test-crypto:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -32,9 +32,17 @@ jobs:
             -p dalek-ff-group \
             -p minimal-ed448 \
             -p ciphersuite \
+            -p ciphersuite-kp256 \
             -p multiexp \
             -p schnorr-signatures \
-            -p dleq \
+            -p prime-field \
+            -p short-weierstrass \
+            -p secq256k1 \
+            -p embedwards25519 \
             -p dkg \
+            -p dkg-recovery \
+            -p dkg-dealer \
+            -p dkg-musig \
+            -p dkg-evrf \
             -p modular-frost \
             -p frost-schnorrkel

View File

@@ -9,16 +9,10 @@ jobs:
     name: Run cargo deny
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-      - name: Advisory Cache
-        uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo install --locked cargo-deny
+        run: cargo +1.91.1 install cargo-deny --version =0.18.7
       - name: Run cargo deny
-        run: cargo deny -L error --all-features check
+        run: cargo deny -L error --all-features check --hide-inclusion-graph


@@ -13,10 +13,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run Full Stack Docker tests
-        run: cd tests/full-stack && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-full-stack-tests


@@ -11,11 +11,11 @@ jobs:
   clippy:
     strategy:
       matrix:
-        os: [ubuntu-latest, macos-13, macos-14, windows-latest]
+        os: [ubuntu-latest, macos-15-intel, macos-latest, windows-latest]
     runs-on: ${{ matrix.os }}
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Get nightly version to use
         id: nightly
@@ -26,7 +26,7 @@ jobs:
         uses: ./.github/actions/build-dependencies
       - name: Install nightly rust
-        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
+        run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c clippy
       - name: Run Clippy
         run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
@@ -43,24 +43,18 @@ jobs:
   deny:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
-      - name: Advisory Cache
-        uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
-        with:
-          path: ~/.cargo/advisory-db
-          key: rust-advisory-db
       - name: Install cargo deny
-        run: cargo install --locked cargo-deny
+        run: cargo +1.91.1 install cargo-deny --version =0.18.7
       - name: Run cargo deny
-        run: cargo deny -L error --all-features check
+        run: cargo deny -L error --all-features check --hide-inclusion-graph
   fmt:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Get nightly version to use
         id: nightly
@@ -73,11 +67,137 @@ jobs:
       - name: Run rustfmt
         run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
+      - name: Install Foundry
+        uses: foundry-rs/foundry-toolchain@50d5a8956f2e319df19e6b57539d7e2acb9f8c1e # 1.5.0
+        with:
+          version: v1.5.0
+          cache: false
+      - name: Run forge fmt
+        run: FOUNDRY_FMT_SORT_INPUTS=false FOUNDRY_FMT_LINE_LENGTH=100 FOUNDRY_FMT_TAB_WIDTH=2 FOUNDRY_FMT_BRACKET_SPACING=true FOUNDRY_FMT_INT_TYPES=preserve forge fmt --check $(find . -iname "*.sol")
   machete:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Verify all dependencies are in use
         run: |
-          cargo install cargo-machete
-          cargo machete
+          cargo +1.91.1 install cargo-machete --version =0.9.1
+          cargo +1.91.1 machete
msrv:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Verify claimed `rust-version`
shell: bash
run: |
cargo +1.91.1 install cargo-msrv --version =0.18.4
function check_msrv {
# We `cd` into the directory passed as the first argument, but will return to the
# directory called from.
return_to=$(pwd)
echo "Checking $1"
cd $1
# We then find the existing `rust-version` using `grep` (for the right line) and then a
# regex (to strip to just the major and minor version).
existing=$(cat ./Cargo.toml | grep "rust-version" | grep -Eo "[0-9]+\.[0-9]+")
# We then backup the `Cargo.toml`, allowing us to restore it after, saving time on future
# MSRV checks (as they'll benefit from immediately exiting if the queried version is less
# than the declared MSRV).
mv ./Cargo.toml ./Cargo.toml.bak
# We then use an inverted (`-v`) grep to remove the existing `rust-version` from the
# `Cargo.toml`, as required because else earlier versions of Rust won't even attempt to
# compile this crate.
cat ./Cargo.toml.bak | grep -v "rust-version" > Cargo.toml
# We then find the actual `rust-version` using `cargo-msrv` (again stripping to just the
# major and minor version).
actual=$(cargo msrv find --output-format minimal | grep -Eo "^[0-9]+\.[0-9]+")
# Finally, we compare the two.
echo "Declared rust-version: $existing"
echo "Actual rust-version: $actual"
[ $existing == $actual ]
result=$?
# Restore the original `Cargo.toml`.
rm Cargo.toml
mv ./Cargo.toml.bak ./Cargo.toml
# Return to the directory called from and return the result.
cd $return_to
return $result
}
# Check each member of the workspace
function check_workspace {
# Get the members array from the workspace's `Cargo.toml`
cargo_toml_lines=$(cat ./Cargo.toml | wc -l)
# Keep all lines after the start of the array, then keep all lines before the next "]"
members=$(cat Cargo.toml | grep "members\ \=\ \[" -m1 -A$cargo_toml_lines | grep "]" -m1 -B$cargo_toml_lines)
# Parse out comments and whitespace, including comments postfixed on the same line as an entry
# We accomplish the latter by pruning all characters after the entry's ","
members=$(echo "$members" | grep -Ev "^[[:space:]]*(#|$)" | awk -F',' '{print $1","}')
# Replace the first line, which was "members = [" and is now "members = [,", with "["
members=$(echo "$members" | sed "1s/.*/\[/")
# Correct the last line, which was malleated to "],"
members=$(echo "$members" | sed "$(echo "$members" | wc -l)s/\]\,/\]/")
# Don't check the following
# Most of these are binaries, with the exception of the Substrate runtime which has a
# bespoke build pipeline
members=$(echo "$members" | grep -v "networks/ethereum/relayer\"")
members=$(echo "$members" | grep -v "message-queue\"")
members=$(echo "$members" | grep -v "processor/bin\"")
members=$(echo "$members" | grep -v "processor/bitcoin\"")
members=$(echo "$members" | grep -v "processor/ethereum\"")
members=$(echo "$members" | grep -v "processor/monero\"")
members=$(echo "$members" | grep -v "coordinator\"")
members=$(echo "$members" | grep -v "substrate/runtime\"")
members=$(echo "$members" | grep -v "substrate/node\"")
members=$(echo "$members" | grep -v "orchestration\"")
# Don't check the tests
members=$(echo "$members" | grep -v "mini\"")
members=$(echo "$members" | grep -v "tests/")
# Remove the trailing comma by replacing the last line's "," with ""
members=$(echo "$members" | sed "$(($(echo "$members" | wc -l) - 1))s/\,//")
echo $members | jq -r ".[]" | while read -r member; do
check_msrv $member
correct=$?
if [ $correct -ne 0 ]; then
return $correct
fi
done
}
check_workspace
slither:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Slither
run: |
python3 -m pip install slither-analyzer==0.11.3
slither ./networks/ethereum/schnorr/contracts/Schnorr.sol
slither --include-paths ./networks/ethereum/schnorr/contracts ./networks/ethereum/schnorr/contracts/tests/Schnorr.sol
slither processor/ethereum/deployer/contracts/Deployer.sol
slither processor/ethereum/erc20/contracts/IERC20.sol
cp networks/ethereum/schnorr/contracts/Schnorr.sol processor/ethereum/router/contracts/
cp processor/ethereum/erc20/contracts/IERC20.sol processor/ethereum/router/contracts/
cd processor/ethereum/router/contracts
slither Router.sol


@@ -27,10 +27,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run message-queue Docker tests
-        run: cd tests/message-queue && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-message-queue-tests


@@ -17,7 +17,7 @@ jobs:
   test-common:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies


@@ -1,56 +0,0 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "coins/monero/**"
- "processor/**"
pull_request:
paths:
- "coins/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.2.0]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
# Runs with the binaries feature so the binaries build
# https://github.com/rust-lang/cargo/issues/8396
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --features binaries --test '*'
- name: Run Integration Tests
# Don't run if the tests workflow also will
if: ${{ matrix.version != 'v0.18.2.0' }}
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'


@@ -9,7 +9,7 @@ jobs:
   name: Update nightly
   runs-on: ubuntu-latest
   steps:
-    - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+    - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       with:
         submodules: "recursive"


@@ -1,4 +1,4 @@
-name: coins/ Tests
+name: networks/ Tests
 on:
   push:
@@ -7,21 +7,21 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
   pull_request:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
   workflow_dispatch:
 jobs:
-  test-coins:
+  test-networks:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Test Dependencies
         uses: ./.github/actions/test-dependencies
@@ -30,6 +30,7 @@ jobs:
         run: |
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p bitcoin-serai \
-            -p ethereum-serai \
-            -p monero-generators \
-            -p monero-serai
+            -p build-solidity-contracts \
+            -p ethereum-schnorr-contract \
+            -p alloy-simple-request-transport \
+            -p serai-ethereum-relayer \


@@ -7,14 +7,14 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "tests/no-std/**"
   pull_request:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "tests/no-std/**"
   workflow_dispatch:
@@ -23,13 +23,23 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
+      - name: Get nightly version to use
+        id: nightly
+        shell: bash
+        run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
       - name: Install RISC-V Toolchain
-        run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
+        run: |
+          sudo apt update
+          sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib
+          rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal --component rust-src --target riscv32imac-unknown-none-elf
       - name: Verify no-std builds
-        run: cd tests/no-std && CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf
+        run: |
+          CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core -p serai-no-std-tests
+          CFLAGS=-I/usr/include cargo +${{ steps.nightly.outputs.version }} build --target riscv32imac-unknown-none-elf -Z build-std=core,alloc -p serai-no-std-tests --features "alloc"


@@ -1,6 +1,7 @@
 # MIT License
 #
 # Copyright (c) 2022 just-the-docs
+# Copyright (c) 2022-2024 Luke Parker
 #
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
@@ -20,31 +21,21 @@
 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 # SOFTWARE.
-# This workflow uses actions that are not certified by GitHub.
-# They are provided by a third-party and are governed by
-# separate terms of service, privacy policy, and support
-# documentation.
-# Sample workflow for building and deploying a Jekyll site to GitHub Pages
-name: Deploy Jekyll site to Pages
+name: Deploy Rust docs and Jekyll site to Pages
 on:
   push:
     branches:
       - "develop"
-    paths:
-      - "docs/**"
-  # Allows you to run this workflow manually from the Actions tab
   workflow_dispatch:
-# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
 permissions:
   contents: read
   pages: write
   id-token: write
-# Allow one concurrent deployment
+# Only allow one concurrent deployment
 concurrency:
   group: "pages"
   cancel-in-progress: true
@@ -53,27 +44,37 @@ jobs:
   # Build job
   build:
     runs-on: ubuntu-latest
-    defaults:
-      run:
-        working-directory: docs
     steps:
       - name: Checkout
-        uses: actions/checkout@v3
+        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Setup Ruby
-        uses: ruby/setup-ruby@v1
+        uses: ruby/setup-ruby@8aeb6ff8030dd539317f8e1769a044873b56ea71 # 1.268.0
        with:
          bundler-cache: true
          cache-version: 0
          working-directory: "${{ github.workspace }}/docs"
       - name: Setup Pages
         id: pages
-        uses: actions/configure-pages@v3
+        uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # 5.0.0
       - name: Build with Jekyll
-        run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
+        run: cd ${{ github.workspace }}/docs && bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
         env:
           JEKYLL_ENV: production
+      - name: Get nightly version to use
+        id: nightly
+        shell: bash
+        run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
+      - name: Build Dependencies
+        uses: ./.github/actions/build-dependencies
+      - name: Build Rust docs
+        run: |
+          rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32v1-none -c rust-docs
+          RUSTDOCFLAGS="--cfg docsrs" cargo +${{ steps.nightly.outputs.version }} doc --workspace --no-deps --all-features
+          mv target/doc docs/_site/rust
       - name: Upload artifact
-        uses: actions/upload-pages-artifact@v1
+        uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # 4.0.0
        with:
          path: "docs/_site/"
@@ -87,4 +88,4 @@ jobs:
     steps:
       - name: Deploy to GitHub Pages
         id: deployment
-        uses: actions/deploy-pages@v2
+        uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # 4.0.5


@@ -7,7 +7,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "processor/**"
       - "orchestration/**"
@@ -18,7 +18,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "processor/**"
       - "orchestration/**"
@@ -31,10 +31,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run processor Docker tests
-        run: cd tests/processor && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-processor-tests


@@ -27,10 +27,10 @@ jobs:
   build:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Install Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run Reproducible Runtime tests
-        run: cd tests/reproducible-runtime && GITHUB_CI=true RUST_BACKTRACE=1 cargo test
+        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-reproducible-runtime-tests -- --nocapture


@@ -7,7 +7,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "processor/**"
       - "coordinator/**"
@@ -17,7 +17,7 @@ on:
     paths:
       - "common/**"
       - "crypto/**"
-      - "coins/**"
+      - "networks/**"
       - "message-queue/**"
       - "processor/**"
       - "coordinator/**"
@@ -29,7 +29,7 @@ jobs:
   test-infra:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -39,16 +39,42 @@ jobs:
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p serai-message-queue \
             -p serai-processor-messages \
-            -p serai-processor \
+            -p serai-processor-key-gen \
+            -p serai-processor-view-keys \
+            -p serai-processor-frost-attempt-manager \
+            -p serai-processor-primitives \
+            -p serai-processor-scanner \
+            -p serai-processor-scheduler-primitives \
+            -p serai-processor-utxo-scheduler-primitives \
+            -p serai-processor-utxo-scheduler \
+            -p serai-processor-transaction-chaining-scheduler \
+            -p serai-processor-smart-contract-scheduler \
+            -p serai-processor-signers \
+            -p serai-processor-bin \
+            -p serai-bitcoin-processor \
+            -p serai-processor-ethereum-primitives \
+            -p serai-processor-ethereum-test-primitives \
+            -p serai-processor-ethereum-deployer \
+            -p serai-processor-ethereum-router \
+            -p serai-processor-ethereum-erc20 \
+            -p serai-ethereum-processor \
+            -p serai-monero-processor \
             -p tendermint-machine \
-            -p tributary-chain \
+            -p tributary-sdk \
+            -p serai-cosign-types \
+            -p serai-cosign \
+            -p serai-coordinator-substrate \
+            -p serai-coordinator-tributary \
+            -p serai-coordinator-p2p \
+            -p serai-coordinator-libp2p-p2p \
             -p serai-coordinator \
-            -p serai-orchestrator \
             -p serai-docker-tests
   test-substrate:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
@@ -57,24 +83,33 @@ jobs:
         run: |
           GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
             -p serai-primitives \
-            -p serai-coins-primitives \
+            -p serai-abi \
+            -p substrate-median \
+            -p serai-core-pallet \
             -p serai-coins-pallet \
-            -p serai-dex-pallet \
-            -p serai-validator-sets-primitives \
             -p serai-validator-sets-pallet \
-            -p serai-in-instructions-primitives \
-            -p serai-in-instructions-pallet \
             -p serai-signals-pallet \
+            -p serai-dex-pallet \
+            -p serai-genesis-liquidity-pallet \
+            -p serai-economic-security-pallet \
+            -p serai-emissions-pallet \
+            -p serai-in-instructions-pallet \
             -p serai-runtime \
             -p serai-node
+            -p serai-substrate-tests
   test-serai-client:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
+      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # 6.0.0
       - name: Build Dependencies
         uses: ./.github/actions/build-dependencies
       - name: Run Tests
-        run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client
+        run: |
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-serai
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-bitcoin
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-ethereum
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client-monero
+          GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

.gitignore (8 lines changed)

@@ -1,7 +1,13 @@
 target
+# Don't commit any `Cargo.lock` which aren't the workspace's
+Cargo.lock
+!/Cargo.lock
+# Don't commit any `Dockerfile`, as they're auto-generated, except the only one which isn't
 Dockerfile
+Dockerfile.fast-epoch
 !orchestration/runtime/Dockerfile
 .test-logs
 .vscode

Cargo.lock (9,402 lines changed; diff suppressed because it is too large)


@@ -1,23 +1,12 @@
 [workspace]
 resolver = "2"
 members = [
-  # Version patches
-  "patches/zstd",
-  "patches/rocksdb",
-  "patches/proc-macro-crate",
-  # std patches
-  "patches/matches",
-  "patches/is-terminal",
-  # Rewrites/redirects
-  "patches/option-ext",
-  "patches/directories-next",
   "common/std-shims",
   "common/zalloc",
+  "common/patchable-async-sleep",
   "common/db",
   "common/env",
+  "common/task",
   "common/request",
   "crypto/transcript",
@@ -26,48 +15,90 @@ members = [
   "crypto/dalek-ff-group",
   "crypto/ed448",
   "crypto/ciphersuite",
+  "crypto/ciphersuite/kp256",
   "crypto/multiexp",
   "crypto/schnorr",
-  "crypto/dleq",
+  "crypto/prime-field",
+  "crypto/short-weierstrass",
+  "crypto/secq256k1",
+  "crypto/embedwards25519",
   "crypto/dkg",
+  "crypto/dkg/recovery",
+  "crypto/dkg/dealer",
+  "crypto/dkg/musig",
+  "crypto/dkg/evrf",
   "crypto/frost",
   "crypto/schnorrkel",
-  "coins/bitcoin",
-  "coins/ethereum",
-  "coins/monero/generators",
-  "coins/monero",
+  "networks/bitcoin",
+  "networks/ethereum/build-contracts",
+  "networks/ethereum/schnorr",
+  "networks/ethereum/alloy-simple-request-transport",
+  "networks/ethereum/relayer",
   "message-queue",
   "processor/messages",
-  "processor",
-  "coordinator/tributary/tendermint",
+  "processor/key-gen",
+  "processor/view-keys",
+  "processor/frost-attempt-manager",
+  "processor/primitives",
+  "processor/scanner",
+  "processor/scheduler/primitives",
+  "processor/scheduler/utxo/primitives",
+  "processor/scheduler/utxo/standard",
+  "processor/scheduler/utxo/transaction-chaining",
+  "processor/scheduler/smart-contract",
+  "processor/signers",
+  "processor/bin",
+  "processor/bitcoin",
+  "processor/ethereum/primitives",
+  "processor/ethereum/test-primitives",
+  "processor/ethereum/deployer",
+  "processor/ethereum/erc20",
+  "processor/ethereum/router",
+  "processor/ethereum",
+  "processor/monero",
+  "coordinator/tributary-sdk/tendermint",
+  "coordinator/tributary-sdk",
+  "coordinator/cosign/types",
+  "coordinator/cosign",
+  "coordinator/substrate",
   "coordinator/tributary",
+  "coordinator/p2p",
+  "coordinator/p2p/libp2p",
   "coordinator",
   "substrate/primitives",
-  "substrate/coins/primitives",
-  "substrate/coins/pallet",
-  "substrate/in-instructions/primitives",
-  "substrate/in-instructions/pallet",
-  "substrate/validator-sets/primitives",
-  "substrate/validator-sets/pallet",
-  "substrate/signals/primitives",
-  "substrate/signals/pallet",
   "substrate/abi",
+  "substrate/median",
+  "substrate/core",
+  "substrate/coins",
+  "substrate/validator-sets",
+  "substrate/signals",
+  "substrate/dex",
+  "substrate/genesis-liquidity",
+  "substrate/economic-security",
+  "substrate/emissions",
+  "substrate/in-instructions",
   "substrate/runtime",
   "substrate/node",
+  "substrate/client/serai",
+  "substrate/client/bitcoin",
+  "substrate/client/ethereum",
+  "substrate/client/monero",
   "substrate/client",
   "orchestration",
@@ -78,51 +109,96 @@ members = [
   "tests/docker",
   "tests/message-queue",
-  "tests/processor",
-  "tests/coordinator",
-  "tests/full-stack",
+  # TODO "tests/processor",
+  # TODO "tests/coordinator",
+  "tests/substrate",
+  # TODO "tests/full-stack",
   "tests/reproducible-runtime",
 ]
+[profile.dev.package]
 # Always compile Monero (and a variety of dependencies) with optimizations due
 # to the extensive operations required for Bulletproofs
-[profile.dev.package]
 subtle = { opt-level = 3 }
+curve25519-dalek = { opt-level = 3 }
+sha3 = { opt-level = 3 }
+blake2 = { opt-level = 3 }
 ff = { opt-level = 3 }
 group = { opt-level = 3 }
 crypto-bigint = { opt-level = 3 }
-curve25519-dalek = { opt-level = 3 }
 dalek-ff-group = { opt-level = 3 }
+minimal-ed448 = { opt-level = 3 }
 multiexp = { opt-level = 3 }
-monero-serai = { opt-level = 3 }
+monero-io = { opt-level = 3 }
+monero-primitives = { opt-level = 3 }
+monero-ed25519 = { opt-level = 3 }
+monero-mlsag = { opt-level = 3 }
+monero-clsag = { opt-level = 3 }
+monero-borromean = { opt-level = 3 }
+monero-bulletproofs-generators = { opt-level = 3 }
+monero-bulletproofs = { opt-level = 3 }
+monero-oxide = { opt-level = 3 }
+# Always compile the eVRF DKG tree with optimizations as well
+secp256k1 = { opt-level = 3 }
+secq256k1 = { opt-level = 3 }
+embedwards25519 = { opt-level = 3 }
+generalized-bulletproofs = { opt-level = 3 }
+generalized-bulletproofs-circuit-abstraction = { opt-level = 3 }
+generalized-bulletproofs-ec-gadgets = { opt-level = 3 }
+# revm also effectively requires being built with optimizations
+revm = { opt-level = 3 }
+revm-bytecode = { opt-level = 3 }
+revm-context = { opt-level = 3 }
+revm-context-interface = { opt-level = 3 }
+revm-database = { opt-level = 3 }
+revm-database-interface = { opt-level = 3 }
+revm-handler = { opt-level = 3 }
+revm-inspector = { opt-level = 3 }
+revm-interpreter = { opt-level = 3 }
+revm-precompile = { opt-level = 3 }
+revm-primitives = { opt-level = 3 }
+revm-state = { opt-level = 3 }
 [profile.release]
 panic = "unwind"
+overflow-checks = true
 [patch.crates-io]
+# Point to empty crates for crates unused within our tree
+alloy-eip2124 = { path = "patches/ethereum/alloy-eip2124" }
+ark-ff-3 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.3" }
+ark-ff-4 = { package = "ark-ff", path = "patches/ethereum/ark-ff-0.4" }
+c-kzg = { path = "patches/ethereum/c-kzg" }
+secp256k1-30 = { package = "secp256k1", path = "patches/ethereum/secp256k1-30" }
+# Dependencies from monero-oxide which originate from within our own tree, potentially shimmed to account for deviations since publishing
+std-shims = { path = "patches/std-shims" }
+simple-request = { path = "patches/simple-request" }
+multiexp = { path = "crypto/multiexp" }
+flexible-transcript = { path = "crypto/transcript" }
+ciphersuite = { path = "patches/ciphersuite" }
+dalek-ff-group = { path = "patches/dalek-ff-group" }
+minimal-ed448 = { path = "crypto/ed448" }
+modular-frost = { path = "crypto/frost" }
+# Patch due to `std` now including the required functionality
+is_terminal_polyfill = { path = "./patches/is_terminal_polyfill" }
+# This has a non-deprecated `std` alternative since Rust's 2024 edition
+home = { path = "patches/home" }
+# Updates to the latest version
+darling = { path = "patches/darling" }
+thiserror = { path = "patches/thiserror" }
 # https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
 lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
-# Needed due to dockertest's usage of `Rc`s when we need `Arc`s
-dockertest = { git = "https://github.com/kayabaNerve/dockertest-rs", branch = "arc" }
-# wasmtime pulls in an old version for this
-zstd = { path = "patches/zstd" }
-# Needed for WAL compression
-rocksdb = { path = "patches/rocksdb" }
-# proc-macro-crate 2 binds to an old version of toml for msrv so we patch to 3
-proc-macro-crate = { path = "patches/proc-macro-crate" }
-# is-terminal now has an std-based solution with an equivalent API
-is-terminal = { path = "patches/is-terminal" }
-# So does matches
-matches = { path = "patches/matches" }
 # directories-next was created because directories was unmaintained
 # directories-next is now unmaintained while directories is maintained
 # The directories author pulls in ridiculously pointless crates and prefers
@@ -131,8 +207,19 @@ matches = { path = "patches/matches" }
 option-ext = { path = "patches/option-ext" }
 directories-next = { path = "patches/directories-next" }
+# Patch from a fork back to upstream
+parity-bip39 = { path = "patches/parity-bip39" }
+# Patch to include `FromUniformBytes<64>` over `Scalar`
+k256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
+p256 = { git = "https://github.com/kayabaNerve/elliptic-curves", rev = "4994c9ab163781a88cd4a49beae812a89a44e8c3" }
 [workspace.lints.clippy]
+incompatible_msrv = "allow" # Manually verified with a GitHub workflow
+manual_is_multiple_of = "allow"
 unwrap_or_default = "allow"
+map_unwrap_or = "allow"
+needless_continue = "allow"
 borrow_as_ptr = "deny"
 cast_lossless = "deny"
 cast_possible_truncation = "deny"
@@ -160,11 +247,9 @@ manual_instant_elapsed = "deny"
 manual_let_else = "deny"
 manual_ok_or = "deny"
 manual_string_new = "deny"
-map_unwrap_or = "deny"
 match_bool = "deny"
 match_same_arms = "deny"
 missing_fields_in_debug = "deny"
-needless_continue = "deny"
 needless_pass_by_value = "deny"
 ptr_cast_constness = "deny"
 range_minus_one = "deny"
@@ -172,7 +257,8 @@ range_plus_one = "deny"
 redundant_closure_for_method_calls = "deny"
 redundant_else = "deny"
 string_add_assign = "deny"
-unchecked_duration_subtraction = "deny"
+string_slice = "deny"
+unchecked_time_subtraction = "deny"
 uninlined_format_args = "deny"
 unnecessary_box_returns = "deny"
 unnecessary_join = "deny"
@@ -181,3 +267,6 @@ unnested_or_patterns = "deny"
 unused_async = "deny"
 unused_self = "deny"
 zero_sized_map_values = "deny"
+[workspace.lints.rust]
+unused = "allow" # TODO: https://github.com/rust-lang/rust/issues/147648


@@ -5,4 +5,4 @@ a full copy of the AGPL-3.0 License is included in the root of this repository
 as a reference text. This copy should be provided with any distribution of a
 crate licensed under the AGPL-3.0, as per its terms.
-The GitHub actions (`.github/actions`) are licensed under the MIT license.
+The GitHub actions/workflows (`.github`) are licensed under the MIT license.


@@ -24,7 +24,7 @@ wallet.
 infrastructure, to our IETF-compliant FROST implementation, to a DLEq proof as
 needed for Bitcoin-Monero atomic swaps.
-- `coins`: Various coin libraries intended for usage in Serai yet also by the
+- `networks`: Various libraries intended for usage in Serai yet also by the
 wider community. This means they will always support the functionality Serai
 needs, yet won't disadvantage other use cases when possible.
@@ -59,7 +59,6 @@ issued at the discretion of the Immunefi program managers.
 - [Website](https://serai.exchange/): https://serai.exchange/
 - [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
 - [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
-- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
 - [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
 - [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
 - [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/


@@ -1,6 +0,0 @@
# Cypher Stack /coins/bitcoin Audit, August 2023
This audit was over the /coins/bitcoin folder. It is encompassing up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.


@@ -0,0 +1,7 @@
# Cypher Stack /networks/bitcoin Audit, August 2023
This audit was over the `/networks/bitcoin` folder (at the time located at
`/coins/bitcoin`). It is encompassing up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.


@@ -0,0 +1,14 @@
# Trail of Bits Ethereum Contracts Audit, June 2025
This audit included:
- Our Schnorr contract and associated library (/networks/ethereum/schnorr)
- Our Ethereum primitives library (/processor/ethereum/primitives)
- Our Deployer contract and associated library (/processor/ethereum/deployer)
- Our ERC20 library (/processor/ethereum/erc20)
- Our Router contract and associated library (/processor/ethereum/router)
It is encompassing up to commit 4e0c58464fc4673623938335f06e2e9ea96ca8dd.
Please see
https://github.com/trailofbits/publications/blob/30c4fa3ebf39ff8e4d23ba9567344ec9691697b5/reviews/2025-04-serai-dex-security-review.pdf
for the actual report.


@@ -0,0 +1,50 @@
# eVRF DKG
In 2024, the [eVRF paper](https://eprint.iacr.org/2024/397) was published to
the IACR preprint server. Within it was a one-round unbiased DKG and a
one-round unbiased threshold DKG. Unfortunately, both simply describe
communication of the secret shares as 'Alice sends $s_b$ to Bob'. This causes,
in practice, the need for an additional round of communication where
all participants confirm they received their secret shares.
Within Serai, it was posited to use the same premises as the DDH eVRF itself to
achieve a verifiable encryption scheme. This allows the secret shares to be
posted to any 'bulletin board' (such as a blockchain) and for all observers to
confirm:
- A participant participated
- The secret shares sent can be received by the intended recipient so long as
they can access the bulletin board (as sketched below)
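As a minimal sketch of what this buys an observer (hypothetical names; the
actual implementation differs), acceptance of a participation reduces to two
publicly verifiable checks:

// A minimal sketch, assuming hypothetical names. `Entry` stands in for a
// single participation posted to the bulletin board.
trait Entry {
  // Whether the participant's eVRF proof verifies (the participant did
  // participate)
  fn evrf_proof_verifies(&self) -> bool;
  // Whether the verifiable encryption of each secret share verifies (the
  // intended recipient can decrypt their share from the bulletin board alone)
  fn share_ciphertexts_verify(&self) -> bool;
}

// An observer accepts a participation only if both properties hold
fn observer_accepts<E: Entry>(entry: &E) -> bool {
  entry.evrf_proof_verifies() && entry.share_ciphertexts_verify()
}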
Additionally, Serai desired a robust scheme (albeit with a biased key as the
output, which is fine for our purposes). Accordingly, our implementation
instantiates the threshold eVRF DKG from the eVRF paper, with our own proposal
for verifiable encryption, with the caller allowed to decide the set of
participants. They may:
- Select everyone, collapsing to the non-threshold unbiased DKG from the eVRF
paper
- Select a pre-determined set, collapsing to the threshold unbiased DKG from
the eVRF paper
- Select a post-determined set (with any solution for the Common Subset
problem), achieving a robust threshold biased DKG (as sketched below)
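As a minimal sketch (hypothetical names; the actual `dkg` crate's API
differs), the three options may be modeled as:

// A minimal sketch, assuming hypothetical names, solely illustrating the
// three ways a caller may fix the set of participants
type Participant = u16;

enum ParticipantSelection {
  // Everyone, collapsing to the non-threshold unbiased DKG from the eVRF
  // paper
  All,
  // A pre-determined set, collapsing to the threshold unbiased DKG from the
  // eVRF paper
  PreDetermined(Vec<Participant>),
  // A post-determined set, fixed by any solution to the Common Subset
  // problem once participations are on the bulletin board, achieving a
  // robust threshold DKG with a biased key
  PostDetermined,
}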
Note that the eVRF paper proposes using the eVRF to sample coefficients yet
this is unnecessary when the resulting key will be biased. Any proof of
knowledge for the coefficients, as necessary for their extraction within the
security proofs, would be sufficient.
MAGIC Grants contracted HashCloak to formalize Serai's proposal for a DKG and
provide proofs for its security. This resulted in
[this paper](<./Security Proofs.pdf>).
Our implementation itself is then built on top of the audited
[`generalized-bulletproofs`](https://github.com/kayabaNerve/monero-oxide/tree/generalized-bulletproofs/audits/crypto/generalized-bulletproofs)
and
[`generalized-bulletproofs-ec-gadgets`](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/fcmps).
Note we do not use the originally premised DDH eVRF, instead using the one
premised on elliptic curve divisors, the methodology of which is commented on
[here](https://github.com/monero-oxide/monero-oxide/tree/fcmp%2B%2B/audits/divisors).
Our implementation itself is unaudited at this time however.

Binary file not shown.


@@ -1,166 +0,0 @@
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
ProjectivePoint,
};
use bitcoin::key::XOnlyPublicKey;
/// Get the x coordinate of a non-infinity, even point. Panics on invalid input.
pub fn x(key: &ProjectivePoint) -> [u8; 32] {
let encoded = key.to_encoded_point(true);
assert_eq!(encoded.tag(), Tag::CompressedEvenY, "x coordinate of odd key");
(*encoded.x().expect("point at infinity")).into()
}
/// Convert a non-infinity even point to a XOnlyPublicKey. Panics on invalid input.
pub fn x_only(key: &ProjectivePoint) -> XOnlyPublicKey {
XOnlyPublicKey::from_slice(&x(key)).expect("x_only was passed a point which was infinity or odd")
}
/// Make a point even by adding the generator until it is even.
///
/// Returns the even point and the amount of additions required.
#[cfg(any(feature = "std", feature = "hazmat"))]
pub fn make_even(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
let mut c = 0;
while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
key += ProjectivePoint::GENERATOR;
c += 1;
}
(key, c)
}
#[cfg(feature = "std")]
mod frost_crypto {
use core::fmt::Debug;
use std_shims::{vec::Vec, io};
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use bitcoin::hashes::{HashEngine, Hash, sha256::Hash as Sha256};
use transcript::Transcript;
use k256::{elliptic_curve::ops::Reduce, U256, Scalar};
use frost::{
curve::{Ciphersuite, Secp256k1},
Participant, ThresholdKeys, ThresholdView, FrostError,
algorithm::{Hram as HramTrait, Algorithm, Schnorr as FrostSchnorr},
};
use super::*;
/// A BIP-340 compatible HRAm for use with the modular-frost Schnorr Algorithm.
///
/// If passed an odd nonce, it will have the generator added until it is even.
///
/// If the key is odd, this will panic.
#[derive(Clone, Copy, Debug)]
pub struct Hram;
#[allow(non_snake_case)]
impl HramTrait<Secp256k1> for Hram {
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
// Convert the nonce to be even
let (R, _) = make_even(*R);
const TAG_HASH: Sha256 = Sha256::const_hash(b"BIP0340/challenge");
let mut data = Sha256::engine();
data.input(TAG_HASH.as_ref());
data.input(TAG_HASH.as_ref());
data.input(&x(&R));
data.input(&x(A));
data.input(m);
Scalar::reduce(U256::from_be_slice(Sha256::from_engine(data).as_ref()))
}
}
/// BIP-340 Schnorr signature algorithm.
///
/// This must be used with a ThresholdKeys whose group key is even. If it is odd, this will panic.
#[derive(Clone)]
pub struct Schnorr<T: Sync + Clone + Debug + Transcript>(FrostSchnorr<Secp256k1, T, Hram>);
impl<T: Sync + Clone + Debug + Transcript> Schnorr<T> {
/// Construct a Schnorr algorithm continuing the specified transcript.
pub fn new(transcript: T) -> Schnorr<T> {
Schnorr(FrostSchnorr::new(transcript))
}
}
impl<T: Sync + Clone + Debug + Transcript> Algorithm<Secp256k1> for Schnorr<T> {
type Transcript = T;
type Addendum = ();
type Signature = [u8; 64];
fn transcript(&mut self) -> &mut Self::Transcript {
self.0.transcript()
}
fn nonces(&self) -> Vec<Vec<ProjectivePoint>> {
self.0.nonces()
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Secp256k1>,
) {
self.0.preprocess_addendum(rng, keys)
}
fn read_addendum<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Addendum> {
self.0.read_addendum(reader)
}
fn process_addendum(
&mut self,
view: &ThresholdView<Secp256k1>,
i: Participant,
addendum: (),
) -> Result<(), FrostError> {
self.0.process_addendum(view, i, addendum)
}
fn sign_share(
&mut self,
params: &ThresholdView<Secp256k1>,
nonce_sums: &[Vec<<Secp256k1 as Ciphersuite>::G>],
nonces: Vec<Zeroizing<<Secp256k1 as Ciphersuite>::F>>,
msg: &[u8],
) -> <Secp256k1 as Ciphersuite>::F {
self.0.sign_share(params, nonce_sums, nonces, msg)
}
#[must_use]
fn verify(
&self,
group_key: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
sum: Scalar,
) -> Option<Self::Signature> {
self.0.verify(group_key, nonces, sum).map(|mut sig| {
// Make the R of the final signature even
let offset;
(sig.R, offset) = make_even(sig.R);
// s = r + cx. Since we added to the r, add to s
sig.s += Scalar::from(offset);
// Convert to a Bitcoin signature by dropping the byte for the point's sign bit
sig.serialize()[1 ..].try_into().unwrap()
})
}
fn verify_share(
&self,
verification_share: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
share: Scalar,
) -> Result<Vec<(Scalar, ProjectivePoint)>, ()> {
self.0.verify_share(verification_share, nonces, share)
}
}
}
#[cfg(feature = "std")]
pub use frost_crypto::*;


@@ -1,7 +0,0 @@
# Solidity build outputs
cache
artifacts
# Auto-generated ABI files
src/abi/schnorr.rs
src/abi/router.rs


@@ -1,45 +0,0 @@
[package]
name = "ethereum-serai"
version = "0.1.0"
description = "An Ethereum library supporting Schnorr signing and on-chain verification"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
edition = "2021"
publish = false
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false }
eyre = { version = "0.6", default-features = false }
sha3 = { version = "0.10", default-features = false, features = ["std"] }
group = { version = "0.13", default-features = false }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
ethers-core = { version = "2", default-features = false }
ethers-providers = { version = "2", default-features = false }
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[build-dependencies]
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
serde = { version = "1", default-features = false, features = ["std"] }
serde_json = { version = "1", default-features = false, features = ["std"] }
sha2 = { version = "0.10", default-features = false, features = ["std"] }
tokio = { version = "1", features = ["macros"] }


@@ -1,9 +0,0 @@
# Ethereum
This package contains Ethereum-related functionality, specifically deploying and
interacting with Serai contracts.
### Dependencies
- solc
- [Foundry](https://github.com/foundry-rs/foundry)


@@ -1,42 +0,0 @@
use std::process::Command;
use ethers_contract::Abigen;
fn main() {
println!("cargo:rerun-if-changed=contracts/*");
println!("cargo:rerun-if-changed=artifacts/*");
for line in String::from_utf8(Command::new("solc").args(["--version"]).output().unwrap().stdout)
.unwrap()
.lines()
{
if let Some(version) = line.strip_prefix("Version: ") {
let version = version.split('+').next().unwrap();
assert_eq!(version, "0.8.25");
}
}
#[rustfmt::skip]
let args = [
"--base-path", ".",
"-o", "./artifacts", "--overwrite",
"--bin", "--abi",
"--optimize",
"./contracts/Schnorr.sol", "./contracts/Router.sol",
];
assert!(Command::new("solc").args(args).status().unwrap().success());
Abigen::new("Schnorr", "./artifacts/Schnorr.abi")
.unwrap()
.generate()
.unwrap()
.write_to_file("./src/abi/schnorr.rs")
.unwrap();
Abigen::new("Router", "./artifacts/Router.abi")
.unwrap()
.generate()
.unwrap()
.write_to_file("./src/abi/router.rs")
.unwrap();
}


@@ -1,90 +0,0 @@
// SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
import "./Schnorr.sol";
contract Router is Schnorr {
// Contract initializer
// TODO: Replace with a MuSig of the genesis validators
address public initializer;
// Nonce is incremented for each batch of transactions executed
uint256 public nonce;
// fixed parity for the public keys used in this contract
uint8 constant public KEY_PARITY = 27;
// current public key's x-coordinate
// note: this key must always use the fixed parity defined above
bytes32 public seraiKey;
struct OutInstruction {
address to;
uint256 value;
bytes data;
}
struct Signature {
bytes32 c;
bytes32 s;
}
// success is a uint256 representing a bitfield of transaction successes
event Executed(uint256 nonce, bytes32 batch, uint256 success);
// error types
error NotInitializer();
error AlreadyInitialized();
error InvalidKey();
error TooManyTransactions();
constructor() {
initializer = msg.sender;
}
// initSeraiKey can be called by the contract initializer to set the first
// public key, only if the public key has yet to be set.
function initSeraiKey(bytes32 _seraiKey) external {
if (msg.sender != initializer) revert NotInitializer();
if (seraiKey != 0) revert AlreadyInitialized();
if (_seraiKey == bytes32(0)) revert InvalidKey();
seraiKey = _seraiKey;
}
// updateSeraiKey validates the given Schnorr signature against the current public key,
// and if successful, updates the contract's public key to the given one.
function updateSeraiKey(
bytes32 _seraiKey,
Signature memory sig
) public {
if (_seraiKey == bytes32(0)) revert InvalidKey();
bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _seraiKey));
if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
seraiKey = _seraiKey;
}
// execute accepts a list of transactions to execute as well as a Schnorr signature.
// if signature verification passes, the given transactions are executed.
// if signature verification fails, this function will revert.
function execute(
OutInstruction[] calldata transactions,
Signature memory sig
) public {
if (transactions.length > 256) revert TooManyTransactions();
bytes32 message = keccak256(abi.encode("execute", nonce, transactions));
// This prevents re-entrancy from causing double spends yet does allow
// out-of-order execution via re-entrancy
nonce++;
if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
uint256 successes;
for(uint256 i = 0; i < transactions.length; i++) {
(bool success, ) = transactions[i].to.call{value: transactions[i].value, gas: 200_000}(transactions[i].data);
assembly {
successes := or(successes, shl(i, success))
}
}
emit Executed(nonce, message, successes);
}
}


@@ -1,39 +0,0 @@
// SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
// see https://github.com/noot/schnorr-verify for implementation details
contract Schnorr {
// secp256k1 group order
uint256 constant public Q =
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
error InvalidSOrA();
error InvalidSignature();
// parity := public key y-coord parity (27 or 28)
// px := public key x-coord
// message := 32-byte hash of the message
// c := schnorr signature challenge
// s := schnorr signature
function verify(
uint8 parity,
bytes32 px,
bytes32 message,
bytes32 c,
bytes32 s
) public view returns (bool) {
// ecrecover = (m, v, r, s);
bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(px), Q));
if (sa == 0) revert InvalidSOrA();
// the ecrecover precompile implementation checks that the `r` and `s`
// inputs are non-zero (in this case, `px` and `ca`), thus we don't need to
// check if they're zero.
address R = ecrecover(sa, parity, px, ca);
if (R == address(0)) revert InvalidSignature();
return c == keccak256(
abi.encodePacked(R, uint8(parity), px, block.chainid, message)
);
}
}


@@ -1,6 +0,0 @@
#[rustfmt::skip]
#[allow(clippy::all)]
pub(crate) mod schnorr;
#[rustfmt::skip]
#[allow(clippy::all)]
pub(crate) mod router;


@@ -1,91 +0,0 @@
use sha3::{Digest, Keccak256};
use group::ff::PrimeField;
use k256::{
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint,
},
ProjectivePoint, Scalar, U256,
};
use frost::{
algorithm::{Hram, SchnorrSignature},
curve::Secp256k1,
};
pub(crate) fn keccak256(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
pub(crate) fn address(point: &ProjectivePoint) -> [u8; 20] {
let encoded_point = point.to_encoded_point(false);
// Last 20 bytes of the hash of the concatenated x and y coordinates
// We obtain the concatenated x and y coordinates via the uncompressed encoding of the point
keccak256(&encoded_point.as_ref()[1 .. 65])[12 ..].try_into().unwrap()
}
#[allow(non_snake_case)]
pub struct PublicKey {
pub A: ProjectivePoint,
pub px: Scalar,
pub parity: u8,
}
impl PublicKey {
#[allow(non_snake_case)]
pub fn new(A: ProjectivePoint) -> Option<PublicKey> {
let affine = A.to_affine();
let parity = u8::from(bool::from(affine.y_is_odd())) + 27;
if parity != 27 {
None?;
}
let x_coord = affine.x();
let x_coord_scalar = <Scalar as Reduce<U256>>::reduce_bytes(&x_coord);
// Return None if a reduction would occur
if x_coord_scalar.to_repr() != x_coord {
None?;
}
Some(PublicKey { A, px: x_coord_scalar, parity })
}
}
#[derive(Clone, Default)]
pub struct EthereumHram {}
impl Hram<Secp256k1> for EthereumHram {
#[allow(non_snake_case)]
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
let a_encoded_point = A.to_encoded_point(true);
let mut a_encoded = a_encoded_point.as_ref().to_owned();
a_encoded[0] += 25; // Ethereum uses 27/28 for point parity
assert!((a_encoded[0] == 27) || (a_encoded[0] == 28));
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.extend(m);
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
pub struct Signature {
pub(crate) c: Scalar,
pub(crate) s: Scalar,
}
impl Signature {
pub fn new(
public_key: &PublicKey,
chain_id: U256,
m: &[u8],
signature: SchnorrSignature<Secp256k1>,
) -> Option<Signature> {
let c = EthereumHram::hram(
&signature.R,
&public_key.A,
&[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
);
if !signature.verify(public_key.A, c) {
None?;
}
Some(Signature { c, s: signature.s })
}
}


@@ -1,16 +0,0 @@
use thiserror::Error;
pub mod crypto;
pub(crate) mod abi;
pub mod schnorr;
pub mod router;
#[cfg(test)]
mod tests;
#[derive(Error, Debug)]
pub enum Error {
#[error("failed to verify Schnorr signature")]
InvalidSignature,
}


@@ -1,30 +0,0 @@
pub use crate::abi::router::*;
/*
use crate::crypto::{ProcessedSignature, PublicKey};
use ethers::{contract::ContractFactory, prelude::*, solc::artifacts::contract::ContractBytecode};
use eyre::Result;
use std::{convert::From, fs::File, sync::Arc};
pub async fn router_update_public_key<M: Middleware + 'static>(
contract: &Router<M>,
public_key: &PublicKey,
signature: &ProcessedSignature,
) -> std::result::Result<Option<TransactionReceipt>, eyre::ErrReport> {
let tx = contract.update_public_key(public_key.px.to_bytes().into(), signature.into());
let pending_tx = tx.send().await?;
let receipt = pending_tx.await?;
Ok(receipt)
}
pub async fn router_execute<M: Middleware + 'static>(
contract: &Router<M>,
txs: Vec<Rtransaction>,
signature: &ProcessedSignature,
) -> std::result::Result<Option<TransactionReceipt>, eyre::ErrReport> {
let tx = contract.execute(txs, signature.into()).send();
let pending_tx = tx.send().await?;
let receipt = pending_tx.await?;
Ok(receipt)
}
*/


@@ -1,34 +0,0 @@
use eyre::{eyre, Result};
use group::ff::PrimeField;
use ethers_providers::{Provider, Http};
use crate::{
Error,
crypto::{keccak256, PublicKey, Signature},
};
pub use crate::abi::schnorr::*;
pub async fn call_verify(
contract: &Schnorr<Provider<Http>>,
public_key: &PublicKey,
message: &[u8],
signature: &Signature,
) -> Result<()> {
if contract
.verify(
public_key.parity,
public_key.px.to_repr().into(),
keccak256(message),
signature.c.to_repr().into(),
signature.s.to_repr().into(),
)
.call()
.await?
{
Ok(())
} else {
Err(eyre!(Error::InvalidSignature))
}
}


@@ -1,132 +0,0 @@
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey},
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint},
U256, Scalar, AffinePoint, ProjectivePoint,
};
use frost::{
curve::Secp256k1,
algorithm::{Hram, IetfSchnorr},
tests::{algorithm_machines, sign},
};
use crate::{crypto::*, tests::key_gen};
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub(crate) fn ecrecover(message: Scalar, v: u8, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
if r.is_zero().into() || s.is_zero().into() || !((v == 27) || (v == 28)) {
return None;
}
#[allow(non_snake_case)]
let R = AffinePoint::decompress(&r.to_bytes(), (v - 27).into());
#[allow(non_snake_case)]
if let Some(R) = Option::<AffinePoint>::from(R) {
#[allow(non_snake_case)]
let R = ProjectivePoint::from(R);
let r = r.invert().unwrap();
let u1 = ProjectivePoint::GENERATOR * (-message * r);
let u2 = R * (s * r);
let key: ProjectivePoint = u1 + u2;
if !bool::from(key.is_identity()) {
return Some(address(&key));
}
}
None
}
#[test]
fn test_ecrecover() {
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
// Sign the message
const MESSAGE: &[u8] = b"Hello, World!";
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
// Sanity check the signature verifies
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
// Perform the ecrecover
assert_eq!(
ecrecover(
hash_to_scalar(MESSAGE),
u8::from(recovery_id.unwrap().is_y_odd()) + 27,
*sig.r(),
*sig.s()
)
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
// Run the sign test with the EthereumHram
#[test]
fn test_signing() {
let (keys, _) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig =
sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE);
}
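// Massage a Schnorr signature into ecrecover arguments such that ecrecover returns address(R).
// The ecrecover above computes ((-m * r^-1) * G) + ((s * r^-1) * R'), where R' is the point with
// x-coordinate r and the parity specified by v. Passing r = px and v = the key's parity makes
// R' = A, so m = -(s_sig * px) and s = -(c * px) yield (s_sig * G) - (c * A), which equals R by
// the Schnorr verification equation s_sig * G = R + c * A.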
#[allow(non_snake_case)]
pub fn preprocess_signature_for_ecrecover(
R: ProjectivePoint,
public_key: &PublicKey,
chain_id: U256,
m: &[u8],
s: Scalar,
) -> (u8, Scalar, Scalar) {
let c = EthereumHram::hram(
&R,
&public_key.A,
&[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
);
let sa = -(s * public_key.px);
let ca = -(c * public_key.px);
(public_key.parity, sa, ca)
}
#[test]
fn test_ecrecover_hack() {
let (keys, public_key) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let chain_id = U256::ONE;
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let (parity, sa, ca) =
preprocess_signature_for_ecrecover(sig.R, &public_key, chain_id, MESSAGE, sig.s);
let q = ecrecover(sa, parity, public_key.px, ca).unwrap();
assert_eq!(q, address(&sig.R));
}


@@ -1,92 +0,0 @@
use std::{sync::Arc, time::Duration, fs::File, collections::HashMap};
use rand_core::OsRng;
use group::ff::PrimeField;
use k256::{Scalar, ProjectivePoint};
use frost::{curve::Secp256k1, Participant, ThresholdKeys, tests::key_gen as frost_key_gen};
use ethers_core::{
types::{H160, Signature as EthersSignature},
abi::Abi,
};
use ethers_contract::ContractFactory;
use ethers_providers::{Middleware, Provider, Http};
use crate::crypto::PublicKey;
mod crypto;
mod schnorr;
mod router;
pub fn key_gen() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, PublicKey) {
let mut keys = frost_key_gen::<_, Secp256k1>(&mut OsRng);
let mut group_key = keys[&Participant::new(1).unwrap()].group_key();
let mut offset = Scalar::ZERO;
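// Offset the group key until it satisfies PublicKey::new's requirements of an even y-coordinate
// and an x-coordinate which is already a reduced scalar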
while PublicKey::new(group_key).is_none() {
offset += Scalar::ONE;
group_key += ProjectivePoint::GENERATOR;
}
for keys in keys.values_mut() {
*keys = keys.offset(offset);
}
let public_key = PublicKey::new(group_key).unwrap();
(keys, public_key)
}
// TODO: Replace with a contract deployment from an unknown account, so the environment solely has
// to fund the deployer, not create/pass a wallet
// TODO: Deterministic deployments across chains
pub async fn deploy_contract(
chain_id: u32,
client: Arc<Provider<Http>>,
wallet: &k256::ecdsa::SigningKey,
name: &str,
) -> eyre::Result<H160> {
let abi: Abi =
serde_json::from_reader(File::open(format!("./artifacts/{name}.abi")).unwrap()).unwrap();
let hex_bin_buf = std::fs::read_to_string(format!("./artifacts/{name}.bin")).unwrap();
let hex_bin =
if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf };
let bin = hex::decode(hex_bin).unwrap();
let factory = ContractFactory::new(abi, bin.into(), client.clone());
let mut deployment_tx = factory.deploy(())?.tx;
deployment_tx.set_chain_id(chain_id);
deployment_tx.set_gas(1_000_000);
let (max_fee_per_gas, max_priority_fee_per_gas) = client.estimate_eip1559_fees(None).await?;
deployment_tx.as_eip1559_mut().unwrap().max_fee_per_gas = Some(max_fee_per_gas);
deployment_tx.as_eip1559_mut().unwrap().max_priority_fee_per_gas = Some(max_priority_fee_per_gas);
let sig_hash = deployment_tx.sighash();
let (sig, rid) = wallet.sign_prehash_recoverable(sig_hash.as_ref()).unwrap();
// EIP-155 v
let mut v = u64::from(rid.to_byte());
assert!((v == 0) || (v == 1));
v += u64::from((chain_id * 2) + 35);
let r = sig.r().to_repr();
let r_ref: &[u8] = r.as_ref();
let s = sig.s().to_repr();
let s_ref: &[u8] = s.as_ref();
let deployment_tx =
deployment_tx.rlp_signed(&EthersSignature { r: r_ref.into(), s: s_ref.into(), v });
let pending_tx = client.send_raw_transaction(deployment_tx).await?;
let mut receipt;
while {
receipt = client.get_transaction_receipt(pending_tx.tx_hash()).await?;
receipt.is_none()
} {
tokio::time::sleep(Duration::from_secs(6)).await;
}
let receipt = receipt.unwrap();
assert!(receipt.status == Some(1.into()));
Ok(receipt.contract_address.unwrap())
}


@@ -1,109 +0,0 @@
use std::{convert::TryFrom, sync::Arc, collections::HashMap};
use rand_core::OsRng;
use group::ff::PrimeField;
use frost::{
curve::Secp256k1,
Participant, ThresholdKeys,
algorithm::IetfSchnorr,
tests::{algorithm_machines, sign},
};
use ethers_core::{
types::{H160, U256, Bytes},
abi::AbiEncode,
utils::{Anvil, AnvilInstance},
};
use ethers_providers::{Middleware, Provider, Http};
use crate::{
crypto::{keccak256, PublicKey, EthereumHram, Signature},
router::{self, *},
tests::{key_gen, deploy_contract},
};
async fn setup_test() -> (
u32,
AnvilInstance,
Router<Provider<Http>>,
HashMap<Participant, ThresholdKeys<Secp256k1>>,
PublicKey,
) {
let anvil = Anvil::new().spawn();
let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
let contract_address =
deploy_contract(chain_id, client.clone(), &wallet, "Router").await.unwrap();
let contract = Router::new(contract_address, client.clone());
let (keys, public_key) = key_gen();
// Set the key to the threshold keys
let tx = contract.init_serai_key(public_key.px.to_repr().into()).gas(100_000);
let pending_tx = tx.send().await.unwrap();
let receipt = pending_tx.await.unwrap().unwrap();
assert!(receipt.status == Some(1.into()));
(chain_id, anvil, contract, keys, public_key)
}
#[tokio::test]
async fn test_deploy_contract() {
setup_test().await;
}
pub fn hash_and_sign(
keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
public_key: &PublicKey,
chain_id: U256,
message: &[u8],
) -> Signature {
let hashed_message = keccak256(message);
let mut chain_id_bytes = [0; 32];
chain_id.to_big_endian(&mut chain_id_bytes);
let full_message = &[chain_id_bytes.as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, keys),
full_message,
);
Signature::new(public_key, k256::U256::from_words(chain_id.0), message, sig).unwrap()
}
#[tokio::test]
async fn test_router_execute() {
let (chain_id, _anvil, contract, keys, public_key) = setup_test().await;
let to = H160([0u8; 20]);
let value = U256([0u64; 4]);
let data = Bytes::from([0]);
let tx = OutInstruction { to, value, data: data.clone() };
let nonce_call = contract.nonce();
let nonce = nonce_call.call().await.unwrap();
let encoded =
("execute".to_string(), nonce, vec![router::OutInstruction { to, value, data }]).encode();
let sig = hash_and_sign(&keys, &public_key, chain_id.into(), &encoded);
let tx = contract
.execute(vec![tx], router::Signature { c: sig.c.to_repr().into(), s: sig.s.to_repr().into() })
.gas(300_000);
let pending_tx = tx.send().await.unwrap();
let receipt = dbg!(pending_tx.await.unwrap().unwrap());
assert!(receipt.status == Some(1.into()));
println!("gas used: {:?}", receipt.cumulative_gas_used);
println!("logs: {:?}", receipt.logs);
}


@@ -1,67 +0,0 @@
use std::{convert::TryFrom, sync::Arc};
use rand_core::OsRng;
use ::k256::{elliptic_curve::bigint::ArrayEncoding, U256, Scalar};
use ethers_core::utils::{keccak256, Anvil, AnvilInstance};
use ethers_providers::{Middleware, Provider, Http};
use frost::{
curve::Secp256k1,
algorithm::IetfSchnorr,
tests::{algorithm_machines, sign},
};
use crate::{
crypto::*,
schnorr::*,
tests::{key_gen, deploy_contract},
};
async fn setup_test() -> (u32, AnvilInstance, Schnorr<Provider<Http>>) {
let anvil = Anvil::new().spawn();
let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
let contract_address =
deploy_contract(chain_id, client.clone(), &wallet, "Schnorr").await.unwrap();
let contract = Schnorr::new(contract_address, client.clone());
(chain_id, anvil, contract)
}
#[tokio::test]
async fn test_deploy_contract() {
setup_test().await;
}
#[tokio::test]
async fn test_ecrecover_hack() {
let (chain_id, _anvil, contract) = setup_test().await;
let chain_id = U256::from(chain_id);
let (keys, public_key) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let sig = Signature::new(&public_key, chain_id, MESSAGE, sig).unwrap();
call_verify(&contract, &public_key, MESSAGE, &sig).await.unwrap();
// Test an invalid signature fails
let mut sig = sig;
sig.s += Scalar::ONE;
assert!(call_verify(&contract, &public_key, MESSAGE, &sig).await.is_err());
}


@@ -1,113 +0,0 @@
[package]
name = "monero-serai"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false }
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
sha3 = { version = "0.10", default-features = false }
pbkdf2 = { version = "0.12", features = ["simple"], default-features = false }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.4", default-features = false, features = ["batch"] }
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.4", default-features = false, features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["ed25519"], optional = true }
monero-generators = { path = "generators", version = "0.4", default-features = false }
async-lock = { version = "3", default-features = false, optional = true }
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive", "alloc"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "2", default-features = false, features = ["check"] }
# Used for the provided HTTP RPC
digest_auth = { version = "0.3", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls"], optional = true }
tokio = { version = "1", default-features = false, optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
monero-generators = { path = "generators", version = "0.4", default-features = false }
[dev-dependencies]
tokio = { version = "1", features = ["sync", "macros"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand/std",
"rand_chacha/std",
"rand_distr/std",
"sha3/std",
"pbkdf2/std",
"multiexp/std",
"transcript/std",
"dleq/std",
"monero-generators/std",
"async-lock?/std",
"hex/std",
"serde/std",
"serde_json/std",
"base58-monero/std",
]
cache-distribution = ["async-lock"]
http-rpc = ["digest_auth", "simple-request", "tokio"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio/rt-multi-thread", "tokio/macros", "http-rpc"]
experimental = []
default = ["std", "http-rpc"]


@@ -1,49 +0,0 @@
# monero-serai
A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig, orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet does not withhold functionality from other users.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There are no active development efforts around
them, however.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable if a targeted attack occurs to detect whether the
receiving wallet is monero-serai or wallet2. It also should be generally safe
for usage with remote nodes.
It won't hide from remote nodes that it's monero-serai, however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope for
monero-serai.


@@ -1,67 +0,0 @@
use std::{
io::Write,
env,
path::Path,
fs::{File, remove_file},
};
use dalek_ff_group::EdwardsPoint;
use monero_generators::bulletproofs_generators;
fn serialize(generators_string: &mut String, points: &[EdwardsPoint]) {
for generator in points {
generators_string.extend(
format!(
"
dalek_ff_group::EdwardsPoint(
curve25519_dalek::edwards::CompressedEdwardsY({:?}).decompress().unwrap()
),
",
generator.compress().to_bytes()
)
.chars(),
);
}
}
fn generators(prefix: &'static str, path: &str) {
let generators = bulletproofs_generators(prefix.as_bytes());
#[allow(non_snake_case)]
let mut G_str = String::new();
serialize(&mut G_str, &generators.G);
#[allow(non_snake_case)]
let mut H_str = String::new();
serialize(&mut H_str, &generators.H);
let path = Path::new(&env::var("OUT_DIR").unwrap()).join(path);
let _ = remove_file(&path);
File::create(&path)
.unwrap()
.write_all(
format!(
"
pub(crate) static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: vec![
{G_str}
],
H: vec![
{H_str}
],
}})
}}
",
)
.as_bytes(),
)
.unwrap();
}
fn main() {
println!("cargo:rerun-if-changed=build.rs");
generators("bulletproof", "generators.rs");
generators("bulletproof_plus", "generators_plus.rs");
}


@@ -1,34 +0,0 @@
[package]
name = "monero-generators"
version = "0.4.0"
description = "Monero's hash_to_point and generators"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero/generators"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false }
subtle = { version = "^2.4", default-features = false }
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.4", default-features = false }
[dev-dependencies]
hex = "0.4"
[features]
std = ["std-shims/std", "subtle/std", "sha3/std", "dalek-ff-group/std"]
default = ["std"]


@@ -1,7 +0,0 @@
# Monero Generators
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no-std when the `std` feature is disabled.
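As a minimal usage sketch, based solely on the API shown in this diff (no other functions are
assumed):

use monero_generators::{H, hash_to_point, decompress_point, bulletproofs_generators};

fn main() {
  // Map 32 bytes to a point, as Monero's `hash_to_ec` does
  let point = hash_to_point([0xff; 32]);
  // `decompress_point` solely accepts canonical encodings, which re-compression yields
  assert!(decompress_point(point.compress().to_bytes()).is_some());
  // The generator used for amounts within Pedersen commitments
  let _h = H();
  // The Bulletproofs generator vectors, as Monero derives them
  let gens = bulletproofs_generators(b"bulletproof");
  assert_eq!(gens.G.len(), gens.H.len());
}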


@@ -1,60 +0,0 @@
use subtle::ConditionallySelectable;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use group::ff::{Field, PrimeField};
use dalek_ff_group::FieldElement;
use crate::hash;
/// Decompress a canonically-encoded Ed25519 point.
/// This does not check if the point is within the prime-order subgroup.
pub fn decompress_point(bytes: [u8; 32]) -> Option<EdwardsPoint> {
CompressedEdwardsY(bytes)
.decompress()
// Ban points which are either unreduced or -0
.filter(|point| point.compress().to_bytes() == bytes)
}
/// Monero's hash-to-point function, named `hash_to_ec` within Monero.
pub fn hash_to_point(bytes: [u8; 32]) -> EdwardsPoint {
#[allow(non_snake_case)]
let A = FieldElement::from(486662u64);
let v = FieldElement::from_square(hash(&bytes)).double();
let w = v + FieldElement::ONE;
let x = w.square() + (-A.square() * v);
// This isn't the complete X, merely its initial value
// We don't calculate the full X, solely Y, letting dalek reconstruct X
// While inefficient, it works within API boundaries and reduces the amount of work done here
#[allow(non_snake_case)]
let X = {
let u = w;
let v = x;
let v3 = v * v * v;
let uv3 = u * v3;
let v7 = v3 * v3 * v;
let uv7 = u * v7;
uv3 * uv7.pow((-FieldElement::from(5u8)) * FieldElement::from(8u8).invert().unwrap())
};
let x = X.square() * x;
let y = w - x;
let non_zero_0 = !y.is_zero();
let y_if_non_zero_0 = w + x;
let sign = non_zero_0 & (!y_if_non_zero_0.is_zero());
let mut z = -A;
z *= FieldElement::conditional_select(&v, &FieldElement::from(1u8), sign);
#[allow(non_snake_case)]
let Z = z + w;
#[allow(non_snake_case)]
let mut Y = z - w;
Y *= Z.invert().unwrap();
let mut bytes = Y.to_repr();
bytes[31] |= sign.unwrap_u8() << 7;
decompress_point(bytes).unwrap().mul_by_cofactor()
}


@@ -1,79 +0,0 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::{sync::OnceLock, vec::Vec};
use sha3::{Digest, Keccak256};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint};
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
use varint::write_varint;
mod hash_to_point;
pub use hash_to_point::{hash_to_point, decompress_point};
#[cfg(test)]
mod tests;
fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
decompress_point(hash(&EdwardsPoint::generator().to_bytes())).unwrap().mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 0 .. 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
const MAX_M: usize = 16;
const N: usize = 64;
const MAX_MN: usize = MAX_M * N;
/// Container struct for Bulletproofs(+) generators.
#[allow(non_snake_case)]
pub struct Generators {
pub G: Vec<EdwardsPoint>,
pub H: Vec<EdwardsPoint>,
}
/// Generate generators as needed for Bulletproofs(+), as Monero does.
pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
let mut res = Generators { G: Vec::with_capacity(MAX_MN), H: Vec::with_capacity(MAX_MN) };
for i in 0 .. MAX_MN {
let i = 2 * i;
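// Even indices are hashed into the H vector, odd indices into the G vector, as Monero does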
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();
write_varint(&i.try_into().unwrap(), &mut even).unwrap();
write_varint(&(i + 1).try_into().unwrap(), &mut odd).unwrap();
res.H.push(EdwardsPoint(hash_to_point(hash(&even))));
res.G.push(EdwardsPoint(hash_to_point(hash(&odd))));
}
res
}


@@ -1,38 +0,0 @@
use crate::{decompress_point, hash_to_point};
#[test]
fn crypto_tests() {
// tests.txt file copied from monero repo
// https://github.com/monero-project/monero/
// blob/ac02af92867590ca80b2779a7bbeafa99ff94dcb/tests/crypto/tests.txt
let reader = include_str!("./tests.txt");
for line in reader.lines() {
let mut words = line.split_whitespace();
let command = words.next().unwrap();
match command {
"check_key" => {
let key = words.next().unwrap();
let expected = match words.next().unwrap() {
"true" => true,
"false" => false,
_ => unreachable!("invalid result"),
};
let actual = decompress_point(hex::decode(key).unwrap().try_into().unwrap());
assert_eq!(actual.is_some(), expected);
}
"hash_to_ec" => {
let bytes = words.next().unwrap();
let expected = words.next().unwrap();
let actual = hash_to_point(hex::decode(bytes).unwrap().try_into().unwrap());
assert_eq!(hex::encode(actual.compress().to_bytes()), expected);
}
_ => unreachable!("unknown command"),
}
}
}


@@ -1 +0,0 @@
mod hash_to_point;


@@ -1,628 +0,0 @@
check_key c2cb3cf3840aa9893e00ec77093d3d44dba7da840b51c48462072d58d8efd183 false
check_key bd85a61bae0c101d826cbed54b1290f941d26e70607a07fc6f0ad611eb8f70a6 true
check_key 328f81cad4eba24ab2bad7c0e56b1e2e7346e625bcb06ae649aef3ffa0b8bef3 false
check_key 6016a5463b9e5a58c3410d3f892b76278883473c3f0b69459172d3de49e85abe true
check_key 4c71282b2add07cdc6898a2622553f1ca4eb851e5cb121181628be5f3814c5b1 false
check_key 69393c25c3b50e177f81f20f852dd604e768eb30052e23108b3cfa1a73f2736e true
check_key 3d5a89b676cb84c2be3428d20a660dc6a37cae13912e127888a5132e8bac2163 true
check_key 78cd665deb28cebc6208f307734c56fccdf5fa7e2933fadfcdd2b6246e9ae95c false
check_key e03b2414e260580f86ee294cd4c636a5b153e617f704e81dad248fbf715b2ee4 true
check_key 28c3503ce82d7cdc8e0d96c4553bcf0352bbcfc73925495dbe541e7e1df105fc false
check_key 06855c3c3e0d03fec354059bda319b39916bdc10b6581e3f41b335ee7b014fd5 false
check_key 556381485df0d7d5a268ab5ecfb2984b060acc63471183fcf538bf273b0c0cb5 true
check_key c7f76d82ac64b1e7fdc32761ff00d6f0f7ada4cf223aa5a11187e3a02e1d5319 true
check_key cfa85d8bdb6f633fcf031adee3a299ac42eeb6bd707744049f652f6322f5aa47 true
check_key 91e9b63ced2b08979fee713365464cc3417c4f238f9bdd3396efbb3c58e195ee true
check_key 7b56e76fe94bd30b3b2f2c4ba5fe4c504821753a8965eb1cbcf8896e2d6aba19 true
check_key 7338df494bc416cf5edcc02069e067f39cb269ce67bd9faba956021ce3b3de3a false
check_key f9a1f27b1618342a558379f4815fa5039a8fe9d98a09f45c1af857ba99231dc1 false
check_key b2a1f37718180d4448a7fcb5f788048b1a7132dde1cfd25f0b9b01776a21c687 true
check_key 0d3a0f9443a8b24510ad1e76a8117cca03bce416edfe35e3c2a2c2712454f8dc false
check_key d8d3d806a76f120c4027dc9c9d741ad32e06861b9cfbc4ce39289c04e251bb3c false
check_key 1e9e3ba7bc536cd113606842835d1f05b4b9e65875742f3a35bfb2d63164b5d5 true
check_key 5c52d0087997a2cdf1d01ed0560d94b4bfd328cb741cb9a8d46ff50374b35a57 true
check_key bb669d4d7ffc4b91a14defedcdbd96b330108b01adc63aa685e2165284c0033b false
check_key d2709ae751a0a6fd796c98456fa95a7b64b75a3434f1caa3496eeaf5c14109b4 true
check_key e0c238cba781684e655b10a7d4af04ab7ff2e7022182d7ed2279d6adf36b3e7a false
check_key 34ebb4bf871572cee5c6935716fab8c8ec28feef4f039763d8f039b84a50bf4c false
check_key 4730d4f38ec3f3b83e32e6335d2506df4ee39858848842c5a0184417fcc639e4 true
check_key d42cf7fdf5e17e0a8a7f88505a2b7a3d297113bd93d3c20fa87e11509ec905a2 true
check_key b757c95059cefabb0080d3a8ebca82e46efecfd29881be3121857f9d915e388c false
check_key bbe777aaf04d02b96c0632f4b1c6f35f1c7bcbc5f22af192f92c077709a2b50b false
check_key 73518522aabd28566f858c33fccb34b7a4de0e283f6f783f625604ee647afad9 true
check_key f230622c4a8f6e516590466bd10f86b64fbef61695f6a054d37604e0b024d5af false
check_key bc6b9a8379fd6c369f7c3bd9ddce58db6b78f27a41d798bb865c3920824d0943 false
check_key 45a4f87c25898cd6be105fa1602b85c4d862782adaac8b85c996c4a2bcd8af47 true
check_key eb4ad3561d21c4311affbd7cc2c7ff5fd509f72f88ba67dc097a75c31fdbd990 false
check_key 2f34f4630c09a23b7ecc19f02b4190a26df69e07e13de8069ae5ff80d23762fc true
check_key 2ea4e4fb5085eb5c8adee0d5ab7d35c67d74d343bd816cd13924536cffc2527c true
check_key 5d35467ee6705a0d35818aa9ae94e4603c3e5500bfc4cf4c4f77a7160a597aa6 true
check_key 8ff42bc76796e20c99b6e879369bd4b46a256db1366416291de9166e39d5a093 true
check_key 0262ba718850df6c621e8a24cd9e4831c047e38818a89e15c7a06a489a4558e1 false
check_key 58b29b2ba238b534b08fb46f05f430e61cb77dc251b0bb50afec1b6061fd9247 false
check_key 153170e3dc2b0e1b368fc0d0e31053e872f094cdace9a2846367f0d9245a109b false
check_key 40419d309d07522d493bb047ca9b5fb6c401aae226eefae6fd395f5bb9114200 true
check_key 713068818d256ef69c78cd6082492013fbd48de3c9e7e076415dd0a692994504 true
check_key a7218ee08e50781b0c87312d5e0031467e863c10081668e3792d96cbcee4e474 true
check_key 356ce516b00e674ef1729c75b0a68090e7265cef675bbf32bf809495b67e9342 false
check_key 52a5c053293675e3efd2c585047002ea6d77931cbf38f541b9070d319dc0d237 false
check_key 77c0080bf157e069b18c4c604cc9505c5ec6f0f9930e087592d70507ca1b5534 false
check_key e733bc41f880a4cfb1ca6f397916504130807289cacfca10b15f5b8d058ed1bf false
check_key c4f1d3c884908a574ecea8be10e02277de35ef84a1d10f105f2be996f285161f true
check_key aed677f7f69e146aa0863606ac580fc0bbdc22a88c4b4386abaa4bdfff66bcc9 false
check_key 6ad0edf59769599af8caa986f502afc67aecbebb8107aaf5e7d3ae51d5cf8dd8 false
check_key 64a0a70e99be1f775c222ee9cd6f1bee6f632cb9417899af398ff9aff70661c6 true
check_key c63afaa03bb5c4ed7bc77aac175dbfb73f904440b2e3056a65850ac1bd261332 false
check_key a4e89cd2471c26951513b1cfbdcf053a86575e095af52495276aa56ede8ce344 false
check_key 2ce935d97f7c3ddb973de685d20f58ee39938fe557216328045ec2b83f3132be true
check_key 3e3d38b1fca93c1559ac030d586616354c668aa76245a09e3fa6de55ac730973 true
check_key 8b81b9681f76a4254007fd07ed1ded25fc675973ccb23afd06074805194733a4 false
check_key 26d1c15dfc371489439e29bcef2afcf7ed01fac24960fdc2e7c20847a8067588 true
check_key 85c1199b5a4591fc4cc36d23660648c1b9cfbb0e9c47199fa3eea33299a3dcec false
check_key 60830ba5449c1f04ac54675dfc7cac7510106c4b7549852551f8fe65971123e2 false
check_key 3e43c28c024597b3b836e4bc16905047cbf6e841b80e0b8cd6a325049070c2a5 false
check_key 474792c16a0032343a6f28f4cb564747c3b1ea0b6a6b9a42f7c71d7cc3dd3b44 true
check_key c8ec5e67cb5786673085191881950a3ca20dde88f46851b01dd91c695cfbad16 true
check_key 861c4b24b24a87b8559e0bb665f84dcc506c147a909f335ae4573b92299f042f false
check_key 2c9e0fe3e4983d79f86c8c36928528f1bc90d94352ce427032cdef6906d84d0b true
check_key 9293742822c2dff63fdc1bf6645c864fd527cea2ddba6d4f3048d202fc340c9a true
check_key 3956422ad380ef19cb9fe360ef09cc7aaec7163eea4114392a7a0b2e2671914e true
check_key 5ae8e72cadda85e525922fec11bd53a261cf26ee230fe85a1187f831b1b2c258 false
check_key 973feca43a0baf450c30ace5dc19015e19400f0898316e28d9f3c631da31f99a true
check_key dd946c91a2077f45c5c16939e53859d9beabaf065e7b1b993d5e5cd385f8716e true
check_key b3928f2d67e47f6bd6da81f72e64908d8ff391af5689f0202c4c6fec7666ffe8 true
check_key 313382e82083697d7f9d256c3b3800b099b56c3ef33cacdccbd40a65622e25fc false
check_key 7d65380c12144802d39ed9306eed79fe165854273700437c0b4b50559800c058 true
check_key 4db5c20a49422fd27739c9ca80e2271a8a125dfcead22cb8f035d0e1b7b163be true
check_key dd76a9f565ef0e44d1531349ec4c5f7c3c387c2f5823e693b4952f4b0b70808c true
check_key 66430bf628eae23918c3ed17b42138db1f98c24819e55fc4a07452d0c85603eb true
check_key 9f0b677830c3f089c27daf724bb10be848537f8285de83ab0292d35afb617f77 false
check_key cbf98287391fb00b1e68ad64e9fb10198025864c099b8b9334d840457e673874 true
check_key a42552e9446e49a83aed9e3370506671216b2d1471392293b8fc2b81c81a73ee false
check_key fb3de55ac81a923d506a514602d65d004ec9d13e8b47e82d73af06da73006673 false
check_key e17abb78e58a4b72ff4ad7387b290f2811be880b394b8bcaae7748ac09930169 false
check_key 9ffbda7ace69753761cdb5eb01f75433efa5cdb6a4f1b664874182c6a95adcba true
check_key 507123c979179ea0a3f7f67fb485f71c8636ec4ec70aa47b92f3c707e7541a54 false
check_key f1d0b156571994ef578c61cb6545d34f834eb30e4357539a5633c862d4dffa91 false
check_key 3de62311ec14f9ee95828c190b2dc3f03059d6119e8dfccb7323efc640e07c75 false
check_key 5e50bb48bc9f6dd11d52c1f0d10d8ae5674d7a4af89cbbce178dafc8a562e5fe false
check_key 20b2c16497be101995391ceefb979814b0ea76f1ed5b6987985bcdcd17b36a81 false
check_key d63bff73b914ce791c840e99bfae0d47afdb99c2375e33c8f149d0df03d97873 false
check_key 3f24b3d94b5ddd244e4c4e67a6d9f533f0396ca30454aa0ca799f21328b81d47 true
check_key 6a44c016f09225a6d2e830290719d33eb29b53b553eea7737ed3a6e297b2e7d2 true
check_key ff0f34df0c76c207b8340be2009db72f730c69c2bbfeea2013105eaccf1d1f8e true
check_key 4baf559869fe4e915e219c3c8d9a2330fc91e542a5a2a7311d4d59fee996f807 true
check_key 1632207dfef26e97d13b0d0035ea9468fc5a8a89b0990fce77bb143c9d7f3b67 true
check_key fcb3dee3993d1a47630f29410903dd03706bd5e81c5802e6f1b9095cbdb404d3 true
check_key fb527092b9809e3d27d7588c7ef89915a769b99c1e03e7f72bbead9ed837daae false
check_key 902b118d27d40ab9cbd55edd375801ce302cdb59e09c8659a3ea1401918d8bba false
check_key 4d6fbf25ca51e263a700f1abf84f758dde3d11b632e908b3093d64fe2e70ea0a true
check_key f4c3211ec70affc1c9a94a6589460ee8360dad5f8c679152f16994038532e3fc true
check_key c2b3d73ac14956d7fdf12fa92235af1bb09e1566a6a6ffd0025682c750abdd69 false
check_key b7e68c12207d2e2104fb2ca224829b6fccc1c0e2154e8a931e3c837a945f4430 false
check_key 56ca0ca227708f1099bda1463db9559541c8c11ffad7b3d95c717471f25a01bf true
check_key 3eef3a46833e4d851671182a682e344e36bea7211a001f3b8af1093a9c83f1b2 true
check_key bd1f4a4f26cab7c1cbc0e17049b90854d6d28d2d55181e1b5f7a8045fcdfa06e true
check_key 8537b01c87e7c184d9555e8d93363dcd9b60a8acc94cd3e41eb7525fd3e1d35a false
check_key 68ace49179d549bad391d98ab2cc8afee65f98ce14955c3c1b16e850fabec231 true
check_key f9922f8a660e7c3e4f3735a817d18b72f59166a0be2d99795f953cf233a27e24 true
check_key 036b6be3da26e80508d5a5a6a5999a1fe0db1ac4e9ade8f1ea2eaf2ea9b1a70e true
check_key 5e595e886ce16b5ea31f53bcb619f16c8437276618c595739fece6339731feb0 false
check_key 4ee2cebae3476ed2eeb7efef9d20958538b3642f938403302682a04115c0f8ed false
check_key 519eedbd0da8676063ce7d5a605b3fc27afeecded857afa24b894ad248c87b5d false
check_key ce2b627c0accf4a3105796680c37792b30c6337d2d4fea11678282455ff82ff7 false
check_key aa26ed99071a8416215e8e7ded784aa7c2b303aab67e66f7539905d7e922eb4d false
check_key 435ae49c9ca26758aa103bdcca8d51393b1906fe27a61c5245361e554f335ec2 true
check_key 42568af395bd30024f6ccc95205c0e11a6ad1a7ee100f0ec46fcdf0af88e91fb false
check_key 0b4a78d1fde56181445f04ca4780f0725daa9c375b496fab6c037d6b2c2275db true
check_key 2f82d2a3c8ce801e1ad334f9e074a4fbf76ffac4080a7331dc1359c2b4f674a4 false
check_key 24297d8832d733ed052dd102d4c40e813f702006f325644ccf0cb2c31f77953f false
check_key 5231a53f6bea7c75b273bde4a9f673044ed87796f20e0909978f29d98fc8d4f0 true
check_key 94b5affcf78be5cf62765c32a0794bc06b4900e8a47ddba0e166ec20cec05935 true
check_key c14b4d846ea52ffbbb36aa62f059453af3cfae306280dada185d2d385ef8f317 true
check_key cceb34fddf01a6182deb79c6000a998742d4800d23d1d8472e3f43cd61f94508 true
check_key 1faffa33407fba1634d4136cf9447896776c16293b033c6794f06774b514744c true
check_key faaac98f644a2b77fb09ba0ebf5fcddf3ff55f6604c0e9e77f0278063e25113a true
check_key 09e8525b00bea395978279ca979247a76f38f86dce4465eb76c140a7f904c109 true
check_key 2d797fc725e7fb6d3b412694e7386040effe4823cdf01f6ec7edea4bc0e77e20 false
check_key bbb74dabee651a65f46bca472df6a8a749cc4ba5ca35078df5f6d27a772f922a false
check_key 77513ca00f3866607c3eff5c2c011beffa775c0022c5a4e7de1120a27e6687fd true
check_key 10064c14ace2a998fc2843eeeb62884fe3f7ab331ca70613d6a978f44d9868eb false
check_key 026ae84beb5e54c62629a7b63702e85044e38cadfc9a1fcabee6099ba185005c false
check_key aef91536292b7ba34a3e787fb019523c2fa7a0d56fca069cc82ccb6b02a45b14 false
check_key 147bb1a82c623c722540feaad82b7adf4b85c6ec0cbcef3ca52906f3e85617ac true
check_key fc9fb281a0847d58dc9340ef35ef02f7d20671142f12bdd1bfb324ab61d03911 false
check_key b739801b9455ac617ca4a7190e2806669f638d4b2f9288171afb55e1542c8d71 false
check_key 494cc1e2ee997eb1eb051f83c4c89968116714ddf74e460d4fa1c6e7c72e3eb3 true
check_key ed2fbdf2b727ed9284db90ec900a942224787a880bc41d95c4bc4cf136260fd7 true
check_key 02843d3e6fc6835ad03983670a592361a26948eb3e31648d572416a944d4909e true
check_key c14fea556a7e1b6b6c3d4e2e38a4e7e95d834220ff0140d3f7f561a34e460801 true
check_key 5f8f82a35452d0b0d09ffb40a1154641916c31e161ad1a6ab8cfddc2004efdf6 false
check_key 7b93d72429fab07b49956007eba335bb8c5629fbf9e7a601eaa030f196934a56 true
check_key 6a63ed96d2e46c2874beaf82344065d94b1e5c04406997f94caf4ccd97cfbab9 false
check_key c915f409e1e0f776d1f440aa6969cfec97559ef864b07d8c0d7c1163871b4603 true
check_key d06bc33630fc94303c2c369481308f805f5ce53c40141160aa4a1f072967617e false
check_key 1aafb14ca15043c2589bcd32c7c5f29479216a1980e127e9536729faf1c40266 true
check_key 58c115624a20f4b0c152ccd048c54a28a938556863ab8521b154d3165d3649cd false
check_key 9001ba086e8aa8a67e128f36d700cc641071556306db7ec9b8ac12a6256b27b7 false
check_key 898c468541634fb0def11f82c781341fce0def7b15695af4e642e397218c730c true
check_key 47ea6539e65b7b611b0e1ae9ee170adf7c31581ca9f78796d8ebbcc5cd74b712 false
check_key 0c60952a64eeac446652f5d3c136fd36966cf66310c15ee6ab2ecbf981461257 false
check_key 682264c4686dc7736b6e46bdc8ab231239bc5dac3f5cb9681a1e97a527945e8e true
check_key 276006845ca0ea4238b231434e20ad8b8b2a36876effbe1d1e3ffb1f14973397 true
check_key eecd3a49e55e32446f86c045dce123ef6fe2e5c57db1d850644b3c56ec689fce true
check_key a4dced63589118db3d5aebf6b5670e71250f07485ca4bb6dddf9cce3e4c227a1 false
check_key b8ade608ba43d55db7ab481da88b74a9be513fca651c03e04d30cc79f50e0276 false
check_key 0d91de88d007a03fe782f904808b036ff63dec6b73ce080c55231afd4ed261c3 true
check_key 87c59becb52dd16501edadbb0e06b0406d69541c4d46115351e79951a8dd9c28 true
check_key 9aee723be2265171fe10a86d1d3e9cf5a4e46178e859db83f86d1c6db104a247 false
check_key 509d34ae5bf56db011845b8cdf0cc7729ed602fce765e9564cb433b4d4421a43 false
check_key 06e766d9a6640558767c2aab29f73199130bfdc07fd858a73e6ae8e7b7ba23ba false
check_key 801c4fe5ab3e7cf13f7aa2ca3bc57cc8eba587d21f8bc4cd40b1e98db7aec8d9 false
check_key d85ad63aeb7d2faa22e5c9b87cd27f45b01e6d0fdc4c3ddf105584ac0a021465 false
check_key a7ca13051eb2baeb5befa5e236e482e0bb71803ad06a6eae3ae48742393329d2 true
check_key 5a9ba3ec20f116173d933bf5cf35c320ed3751432f3ab453e4a6c51c1d243257 false
check_key a4091add8a6710c03285a422d6e67863a48b818f61c62e989b1e9b2ace240a87 false
check_key bdee0c6442e6808f25bb18e21b19032cf93a55a5f5c6426fba2227a41c748684 true
check_key d4aeb6cdad9667ec3b65c7fbc5bfd1b82bba1939c6bb448a86e40aec42be5f25 false
check_key 73525b30a77f1212f7e339ec11f48c453e476f3669e6e70bebabc2fe9e37c160 true
check_key 45501f2dc4d0a3131f9e0fe37a51c14869ab610abd8bf0158111617924953629 false
check_key 07d0e4c592aa3676adf81cca31a95d50c8c269d995a78cde27b2a9a7a93083a6 false
check_key a1797d6178c18add443d22fdbf45ca5e49ead2f78b70bdf1500f570ee90adca5 true
check_key 0961e82e6e7855d7b7bf96777e14ae729f91c5bbd20f805bd7daac5ccbec4bab false
check_key 57f5ba0ad36e997a4fb585cd2fc81b9cc5418db702c4d1e366639bb432d37c73 true
check_key 82b005be61580856841e042ee8be74ae4ca66bb6733478e81ca1e56213de5c05 false
check_key d7733dcae1874c93e9a2bd46385f720801f913744d60479930dad7d56c767cdc false
check_key b8b8b698609ac3f1bd8f4965151b43b362e6c5e3d1c1feae312c1d43976d59ab true
check_key 4bba7815a9a1b86a5b80b17ac0b514e2faa7a24024f269b330e5b7032ae8c04e true
check_key 0f70da8f8266b58acda259935ef1a947c923f8698622c5503520ff31162e877b false
check_key 233eaa3db80f314c6c895d1328a658a9175158fa2483ed216670c288a04b27bc false
check_key a889f124fabfd7a1e2d176f485be0cbd8b3eeaafeee4f40e99e2a56befb665be true
check_key 2b7b8abc198b11cf7efa21bc63ec436f790fe1f9b8c044440f183ab291af61d6 true
check_key 2491804714f7938cf501fb2adf07597b4899b919cabbaab49518b8f8767fdc6a true
check_key 52744a54fcb00dc930a5d7c2bc866cbfc1e75dd38b38021fd792bb0ca9f43164 true
check_key e42cbf70b81ba318419104dffbb0cdc3b7e7d4698e422206b753a4e2e6fc69bb false
check_key 2faff73e4fed62965f3dbf2e6446b5fea0364666cc8c9450b6ed63bbb6f5f0e7 true
check_key 8b963928d75be661c3c18ddd4f4d1f37ebc095ce1edc13fe8b23784c8f416dfd false
check_key b1162f952808434e4d2562ffda98bd311613d655d8cf85dc86e0a6c59f7158bc true
check_key 5a69adcd9e4f5b0020467e968d85877cb3aa04fa86088d4499b57ca65a665836 true
check_key 61ab47da432c829d0bc9d4fdb59520b135428eec665ad509678188b81c7adf49 false
check_key 154bb547f22f65a87c0c3f56294f5791d04a3c14c8125d256aeed8ec54c4a06e true
check_key 0a78197861c30fd3547b5f2eabd96d3ac22ac0632f03b7afd9d5d2bfc2db352f true
check_key 8bdeadcca1f1f8a4a67b01ed2f10ef31aba7b034e8d1df3a69fe9aebf32454e0 false
check_key f4b17dfca559be7d5cea500ac01e834624fed9befae3af746b39073d5f63190d true
check_key 622c52821e16ddc63b58f3ec2b959fe8c6ea6b1a596d9a58fd81178963f41c01 true
check_key 07bedd5d55c937ef5e23a56c6e58f31adb91224d985285d7fef39ede3a9efb17 false
check_key 5179bf3b7458648e57dc20f003c6bbfd55e8cd7c0a6e90df6ef8e8183b46f99d true
check_key 683c80c3f304f10fdd53a84813b5c25b1627ebd14eb29b258b41cd14396ef41f true
check_key c266244ed597c438170875fe7874f81258a830105ca1108131e6b8fea95eb8ba true
check_key 0c1cdc693df29c2d1e66b2ce3747e34a30287d5eb6c302495634ec856593fe8e true
check_key 28950f508f6a0d4c20ab5e4d55b80565a6a539092e72b7eb0ed9fa5017ecef88 false
check_key 8328a2a5fcfc4433b1c283539a8943e6eb8cc16c59f29dedc3af2c77cfd56f25 true
check_key 5d0f82319676d4d3636ff5dc2a38ea5ec8aeaac4835fdcab983ab35d76b7967b false
check_key cafcc75e94a014115f25c23aaae86e67352f928f468d4312b92240ff0f3a4481 false
check_key 3e5fdd8072574218f389d018e959669e8ca4ef20b114ea7dce7bfb32339f9f42 true
check_key 591763e3390a78ccb529ceea3d3a97165878b179ad2edaa166fd3c78ec69d391 true
check_key 7a0a196935bf79dc2b1c3050e8f2bf0665f7773fc07511b828ec1c4b1451d317 false
check_key 9cf0c034162131fbaa94a608f58546d0acbcc2e67b62a0b2be2ce75fc8c25b9a false
check_key e3840846e3d32644d45654b96def09a5d6968caca9048c13fcaab7ae8851c316 false
check_key a4e330253739af588d70fbda23543f6df7d76d894a486d169e5fedf7ed32d2e2 false
check_key cfb41db7091223865f7ecbdda92b9a6fb08887827831451de5bcb3165395d95d true
check_key 3d10bd023cef8ae30229fdbfa7446a3c218423d00f330857ff6adde080749015 false
check_key 4403b53b8d4112bb1727bb8b5fd63d1f79f107705ffe17867704e70a61875328 false
check_key 121ef0813a9f76b7a9c045058557c5072de6a102f06a9b103ead6af079420c29 true
check_key 386204cf473caf3854351dda55844a41162eb9ce4740e1e31cfef037b41bc56e false
check_key eb5872300dc658161df469364283e4658f37f6a1349976f8973bd6b5d1d57a39 true
check_key b8f32188f0fc62eeb38a561ff7b7f3c94440e6d366a05ef7636958bc97834d02 false
check_key a817f129a8292df79eef8531736fdebb2e985304653e7ef286574d0703b40fb4 false
check_key 2c06595bc103447b9c20a71cd358c704cb43b0b34c23fb768e6730ac9494f39e true
check_key dd84bc4c366ced4f65c50c26beb8a9bc26c88b7d4a77effbb0f7af1b28e25734 false
check_key 76b4d33810eed637f90d49a530ac5415df97cafdac6f17eda1ba7eb9a14e5886 true
check_key 926ce5161c4c92d90ec4efc58e5f449a2c385766c42d2e60af16b7362097aef5 false
check_key 20c661f1e95e94a745eb9ec7a4fa719eff2f64052968e448d4734f90952aefee false
check_key 671b50abbd119c756010416e15fcdcc9a8e92eed0f67cbca240c3a9154db55c0 false
check_key df7aeee8458433e5c68253b8ef006a1c74ce3aef8951056f1fa918a8eb855213 false
check_key 70c81a38b92849cf547e3d5a6570d78e5228d4eaf9c8fdd15959edc9eb750daf false
check_key 55a512100b72d4ae0cfc16c75566fcaa3a7bb9116840db1559c71fd0e961cc36 false
check_key dbfbec4d0d2433a794ad40dc0aea965b6582875805c9a7351b47377403296acd true
check_key 0a7fe09eb9342214f98b38964f72ae3c787c19e5d7e256af9216f108f88b00a3 true
check_key a82e54681475f53ced9730ee9e3a607e341014d9403f5a42f3dbdbe8fc52e842 true
check_key 4d1f90059f7895a3f89abf16162e8d69b399c417f515ccb43b83144bbe8105f6 true
check_key 94e5c5b8486b1f2ff4e98ddf3b9295787eb252ba9b408ca4d7724595861da834 false
check_key d16e3e8dfa6d33d1d2db21c651006ccddbf4ce2e556594de5a22ae433e774ae6 false
check_key a1b203ec5e36098a3af08d6077068fec57eab3a754cbb5f8192983f37191c2df false
check_key 5378bb3ec8b4e49849bd7477356ed86f40757dd1ea3cee1e5183c7e7be4c3406 false
check_key 541a4162edeb57130295441dc1cb604072d7323b6c7dffa02ea5e4fed1d2ee9e true
check_key d8e86e189edcc4b5c262c26004691edd7bd909090997f886b00ed4b6af64d547 false
check_key 18a8731d1983d1df2ce2703b4c85e7357b6356634ac1412e6c2ac33ad35f8364 false
check_key b21212eac1eb11e811022514c5041233c4a07083a5b20acd7d632a938dc627de true
check_key 50efcfac1a55e9829d89334513d6d921abeb237594174015d154512054e4f9d1 true
check_key 9c44e8bcba31ddb4e67808422e42062540742ebd73439da0ba7837bf26649ec4 true
check_key b068a4f90d5bd78fd350daa129de35e5297b0ad6be9c85c7a6f129e3760a1482 false
check_key e9df93932f0096fcf2055564457c6dc685051673a4a6cd87779924be5c4abead true
check_key eddab2fc52dac8ed12914d1eb5b0da9978662c4d35b388d64ddf8f065606acaf true
check_key 54d3e6b3f2143d9083b4c98e4c22d98f99d274228050b2dc11695bf86631e89f true
check_key 6da1d5ef1827de8bbf886623561b058032e196d17f983cbc52199b31b2acc75b true
check_key e2a2df18e2235ebd743c9714e334f415d4ca4baf7ad1b335fb45021353d5117f true
check_key f34cb7d6e861c8bfe6e15ac19de68e74ccc9b345a7b751a10a5c7f85a99dfeb6 false
check_key f36e2f5967eb56244f9e4981a831f4d19c805e31983662641fe384e68176604a true
check_key c7e2dc9e8aa6f9c23d379e0f5e3057a69b931b886bbb74ded9f660c06d457463 true
check_key b97324364941e06f2ab4f5153a368f9b07c524a89e246720099042ad9e8c1c5b false
check_key eff75c70d425f5bba0eef426e116a4697e54feefac870660d9cf24c685078d75 false
check_key 161f3cd1a5873788755437e399136bcbf51ff5534700b3a8064f822995a15d24 false
check_key 63d6d3d2c21e88b06c9ff856809572024d86c85d85d6d62a52105c0672d92e66 false
check_key 1dc19b610b293de602f43dca6c204ce304702e6dc15d2a9337da55961bd26834 false
check_key 28a16d02405f509e1cfef5236c0c5f73c3bcadcd23c8eff377253941f82769db true
check_key 682d9cc3b65d149b8c2e54d6e20101e12b7cf96be90c9458e7a69699ec0c8ed7 false
check_key 0000000000000000000000000000000000000000000000000000000000000000 true
check_key 0000000000000000000000000000000000000000000000000000000000000080 true
check_key 0100000000000000000000000000000000000000000000000000000000000000 true
check_key 0100000000000000000000000000000000000000000000000000000000000080 false
check_key 0200000000000000000000000000000000000000000000000000000000000000 false
check_key 0200000000000000000000000000000000000000000000000000000000000080 false
check_key 0300000000000000000000000000000000000000000000000000000000000000 true
check_key 0300000000000000000000000000000000000000000000000000000000000080 true
check_key 0400000000000000000000000000000000000000000000000000000000000000 true
check_key 0400000000000000000000000000000000000000000000000000000000000080 true
check_key 0500000000000000000000000000000000000000000000000000000000000000 true
check_key 0500000000000000000000000000000000000000000000000000000000000080 true
check_key 0600000000000000000000000000000000000000000000000000000000000000 true
check_key 0600000000000000000000000000000000000000000000000000000000000080 true
check_key 0700000000000000000000000000000000000000000000000000000000000000 false
check_key 0700000000000000000000000000000000000000000000000000000000000080 false
check_key 0800000000000000000000000000000000000000000000000000000000000000 false
check_key 0800000000000000000000000000000000000000000000000000000000000080 false
check_key 0900000000000000000000000000000000000000000000000000000000000000 true
check_key 0900000000000000000000000000000000000000000000000000000000000080 true
check_key 0a00000000000000000000000000000000000000000000000000000000000000 true
check_key 0a00000000000000000000000000000000000000000000000000000000000080 true
check_key 0b00000000000000000000000000000000000000000000000000000000000000 false
check_key 0b00000000000000000000000000000000000000000000000000000000000080 false
check_key 0c00000000000000000000000000000000000000000000000000000000000000 false
check_key 0c00000000000000000000000000000000000000000000000000000000000080 false
check_key 0d00000000000000000000000000000000000000000000000000000000000000 false
check_key 0d00000000000000000000000000000000000000000000000000000000000080 false
check_key 0e00000000000000000000000000000000000000000000000000000000000000 true
check_key 0e00000000000000000000000000000000000000000000000000000000000080 true
check_key 0f00000000000000000000000000000000000000000000000000000000000000 true
check_key 0f00000000000000000000000000000000000000000000000000000000000080 true
check_key 1000000000000000000000000000000000000000000000000000000000000000 true
check_key 1000000000000000000000000000000000000000000000000000000000000080 true
check_key 1100000000000000000000000000000000000000000000000000000000000000 false
check_key 1100000000000000000000000000000000000000000000000000000000000080 false
check_key 1200000000000000000000000000000000000000000000000000000000000000 true
check_key 1200000000000000000000000000000000000000000000000000000000000080 true
check_key 1300000000000000000000000000000000000000000000000000000000000000 true
check_key 1300000000000000000000000000000000000000000000000000000000000080 true
check_key daffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key daffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key dbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key dcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ddffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key ddffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key deffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key deffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key dfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key eaffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key eaffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key ebffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key ebffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ecffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key ecffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key edffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key edffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key eeffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key eeffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key efffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key efffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key faffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key faffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fdffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fdffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
hash_to_ec da66e9ba613919dec28ef367a125bb310d6d83fb9052e71034164b6dc4f392d0 52b3f38753b4e13b74624862e253072cf12f745d43fcfafbe8c217701a6e5875
hash_to_ec a7fbdeeccb597c2d5fdaf2ea2e10cbfcd26b5740903e7f6d46bcbf9a90384fc6 f055ba2d0d9828ce2e203d9896bfda494d7830e7e3a27fa27d5eaa825a79a19c
hash_to_ec ed6e6579368caba2cc4851672972e949c0ee586fee4d6d6a9476d4a908f64070 da3ceda9a2ef6316bf9272566e6dffd785ac71f57855c0202f422bbb86af4ec0
hash_to_ec 9ae78e5620f1c4e6b29d03da006869465b3b16dae87ab0a51f4e1b74bc8aa48b 72d8720da66f797f55fbb7fa538af0b4a4f5930c8289c991472c37dc5ec16853
hash_to_ec ab49eb4834d24db7f479753217b763f70604ecb79ed37e6c788528720f424e5b 45914ba926a1a22c8146459c7f050a51ef5f560f5b74bae436b93a379866e6b8
hash_to_ec 5b79158ef2341180b8327b976efddbf364620b7e88d2e0707fa56f3b902c34b3 eac991dcbba39cb3bd166906ab48e2c3c3f4cd289a05e1c188486d348ede7c2e
hash_to_ec f21daa7896c81d3a7a2e9df721035d3c3902fe546c9d739d0c334ed894fb1d21 a6bedc5ffcc867d0c13a88a03360c8c83a9e4ddf339851bd3768c53a124378ec
hash_to_ec 3dae79aaca1abe6aecea7b0d38646c6b013d40053c7cdde2bed094497d925d2b 1a442546a35860a4ab697a36b158ded8e001bbfe20aef1c63e2840e87485c613
hash_to_ec 3d219463a55c24ac6f55706a6e46ade3fcd1edc87bade7b967129372036aca63 b252922ab64e32968735b8ade861445aa8dc02b763bd249bff121d10829f7c52
hash_to_ec bc5db69aced2b3197398eaf7cf60fd782379874b5ca27cb21bd23692c3c885cc ae072a43f78a0f29dc9822ae5e70865bbd151236a6d7fe4ae3e8f8961e19b0e5
hash_to_ec 98a6ed760b225976f8ada0579540e35da643089656695b5d0b8c7265a37e2342 6a99dbfa8ead6228910498cc3ff3fb18cb8627c5735e4b8657da846c16d2dcad
hash_to_ec e9cdc9fd9425a4a2389a5d60f76a2d839f0afbf66330f079a88fe23d73eae930 8aa518d091928668f3ca40e71e14b2698f6cae097b8120d7f6ae9afba8fd3d60
hash_to_ec a50c026c0af2f9f9884c2e9b8464724ac83bef546fec2c86b7de0880980d24fb b07433f8df39da2453a1e13fd413123a158feae602d822b724d42ef6c8e443bf
hash_to_ec bf180e20d160fa23ccfa6993febe22b920160efc5a9614245f1a3a360076e87a 9d6454ff69779ce978ea5fb3be88576dc8feaedf151e93b70065f92505f2e800
hash_to_ec b2b64dfeb1d58c6afbf5a56d8c0c42012175ebb4b7df30f26a67b66be8c34614 0523b22e7f220c939b604a15780abc5816709b91b81d9ee1541d44bd2586bbd8
hash_to_ec 463fc877f4279740020d10652c950f088ebdebeae34aa7a366c92c9c8773f63a daa5fa72e70c4d3af407b8f2f3364708029b2d4863bbdde54bd67bd08db0fcad
hash_to_ec 721842f3809982e7b96a806ae1f162d98ae6911d476307ad1e4f24522fd26f55 4397c300a8cfcb42e7cc310bc975dc975ec2d191eaa7e0462998eb2830c34126
hash_to_ec 384da8d9b83972af8cbefc2da5efc744037c8ef40efa4b3bacc3238a6232963d 3c80f107e6868f73ef600ab9229a3f4bbe24f4adce52e6ab3a66d5d510e0670d
hash_to_ec e26f8adef5b6fe5bb01466bff0455ca23fda07e200133697b3b6430ca3332bde e262a58bcc1f8baf1980e00d5d40ba00803690174d14fb4c0f608429ce3df773
hash_to_ec 6e275b4ea4f085a5d3151aa08cf16a8c60b078e70be7ce5dac75b5d7b0eebe7c cb21b5a7744b4fcdc92ead4be0b04bcb9145e7bb4b06eff3bb2f0fe429b85108
hash_to_ec a0dde4561ad9daa796d9cd8a3c34fd41687cee76d128bf2e2252466e3ef3b068 79a2eb06bb7647f5d0aae5da7cf2e2b2d2ce890f25f2b1f81bfc5fef8c87a7d3
hash_to_ec dbaf63830e037b4c329969d1d85e58cb6c4f56014fd08eb38219bd20031ae27c 079c93ae27cd98075a487fd3f7457ad2fb57cdf12ec8651fedd944d765d07549
hash_to_ec 1e87ba8a9acf96948bc199ae55c83ab3277be152c6d0b1d68a07955768d81171 5c6339f834116791f9ea22fcc3970346aaeddacf13fbd0a7d4005fbd469492ca
hash_to_ec 5a544088e63ddf5b9f444ed75a75bc9315c4c50439522f06b4823ecaf5e8a08d e95ca0730d57c6469be3a0f3c94382f8490257e2e546de86c650bdbc6482eaee
hash_to_ec e4e06d92ebb036a5e4bb547dbaa43fd70db3929eef2702649455c86d7e59aa46 e26210ff8ee28e24ef2613df40aa8a874b5e3c1d07ae14acc59220615aa334dc
hash_to_ec 5793b8b32dcc0f204501647f2976493c4f8f1fa5132315226f99f29a5a6fdfce 656e390086906d99852c9696e831f62cb56fc8f85f9a5c936c327f23c7faf4fe
hash_to_ec 84f56fa4d7f12e0efd48b1f7c81c15d6e3843ebb419f4a27ec97028d4f9da19e 0cbd4f0cd288e1e071cce800877de6aef97b63fff867424a4f2b2bab25602608
hash_to_ec 242683ddf0a9fc55f6585de3aa64ea17c9c544896ff7677cd82c98f833bdf2ca 38c36d52314549213df7c7201ab7749a4724cbea92812f583bb48cabc20816ad
hash_to_ec a93ee320dc030aa382168c2eb6d75fce6e5a63a81f15632d514c6de8a7cfa5ee bd0a2facaa95bc95215a94be21996e46f789ee8beb38e75a1173b75fc686c505
hash_to_ec e36136601d84475d25c3f14efe030363d646658937a8a8a19a812d5e6deb5944 2fb93d78fae299c9f6b22346acfb829796ee7a47ec71db5456d8201bec6c35a3
hash_to_ec ba4b67d3d387c66baa4a32ec8b1db7681087e85076e71bab10036388c3aeb011 cc01329ce56f963bf444a124751c45b2c779ccb6dea16ca05251baca246b5401
hash_to_ec 3fbc91896a2585154d6f7094c5ab9c487e29a27951c226eec1235f618e44946b 7d983acbb901bf5497d0708392e5e742ec8c8036cbb0d03403e9929da8cc85a7
hash_to_ec a2da289fed650e9901f69a5f33535eb47c6bd07798633cbf6c00ce3172df76ac dca8a4d30ec2d657fefd0dba9c1c5fd45a79f665048b3cf72ac2c3b7363da1ac
hash_to_ec 99025d2d493f768e273ed66cacd3a5b392761e6bd158ca09c8fba84631ea1534 7ef5af79ab155ab7e1770a47fcd7f194aca43d79ec6e303c7ce18c6a20279b04
hash_to_ec 3cf1d01d0b70fb31f2a2f979c1bae812381430f474247d0b018167f2a2cd9a9f 7c53d799ec938a21bb305a6b5ca0a7a355fa9a68b01d289c4f22b36ce3738f95
hash_to_ec 639c421b49636b2a1f8416c5d6e64425fe51e3b52584c265502379189895668e 0b47216ae5e6e03667143a6cf8894d9d73e3152c64fb455631d81a424410e871
hash_to_ec 4ccf2c973348b7cc4b14f846f9bfcdcb959b7429accf6dede96248946841d990 7fd41f5b97ba42ed03947dd953f8e69770c92cc34b16236edad7ab3c78cbbb2e
hash_to_ec f76ae09fff537f8919fd1a43ff9b8922b6a77e9e30791c82cf2c4b8acb51363e 8e2c6bf86461ad2c230c496ee3896da33c11cc020fd4c70faa3645b329049234
hash_to_ec 98932da7450f15db6c1eef78359904915c31c2aa7572366ec8855180edb81e3a 86180adddfac0b4d1fb41d58e98445dde1da605b380d392e9386bd445f1d821c
hash_to_ec ab26a1660988ec7aba91fc01f7aa9a157bbc12927f5b197062b922a5c0c7f8dd 2c44a43eda0d0aad055f18333e761f2f2ec11c585ec7339081c19266af918e4f
hash_to_ec 4465d0c1b4930cc718252efd87d11d04162d2a321b9b850c4a19a6acdfca24f4 b03806287d804188a4d679a0ecee66f399d7bdc3bd1494f9b2b0772bbb5a034f
hash_to_ec 0f2a7867864ed00e5c40082df0a0b031c89fa5f978d9beb2fde75153f51cfb75 5c471e1b118ef9d76c93aec70e0578f46e8db1d55affd447c1f64c0ad9a5caa5
hash_to_ec 5c2808c07d8175f332cae050ce13bec4254870d76abff68faf34b0b8d3ad5000 eeff1d9a5aa428b7aecc575e63dde17294072eb246568493e1ed88ce5c95b779
hash_to_ec 36300a21601fad00d00da45e27b36c11923b857f97e50303bd01f21998eaef95 b33b077871e6f5dad8ff6bc621c1b6dedcf700777d996c8c02d73f7297108b7e
hash_to_ec 9e1afb76d6c480816d2cedd7f2ab08a36c309efaa3764dcdb51bad6049683805 4cd96ba7b543b1a224b8670bf20b3733e3910711d32456d3e58e920215788adf
hash_to_ec 685f152704664495459b76c81567a4b571e8b307dd0e3c9b08ee95651a006047 80dd6b637580cb3be76025867f1525852b65a7a66066993fda3af7eb187dc1a5
hash_to_ec 0b216444391a1163c14f7b27f9135e9747978c0e426dce1fa65c657f3e9146be 021259695a6854a4a03e8c74d09ab9630a401bfca06172a733fe122f01af90b4
hash_to_ec cfcb35e98f71226c3558eaa9cf620db5ae207ece081ab13ddea4b1f122850a5a 46763d2742e2cdffe80bb3d056f4d3a1565aa83f19aab0a1f89e54ad81ae0814
hash_to_ec 07e7292da8cdcdb58ee30c3fa16f1d609e9b3b1110dd6fa9b2cc18f4103a1c12 fe949ca251ac66f13a8925ae624a09cdbf6696d3c110442338d37700536e8ec7
hash_to_ec 813bc7e3749e658190cf2a4e358bc07a6671f262e2c4eef9f44c66066a72e6a7 6b92fbda984bd0e6f4af7a5e04c2b66b6f0f9d197a9694362a8556e5b7439f8a
hash_to_ec 89c50a1e5497156e0fae20d99f5e33e330362b962c9ca00eaf084fe91aaec71d ef36cb75eb95fb761a8fa8c376e9c4447bcd61421250f7a711bd289e6ed78a9b
hash_to_ec d9bd9ff2dd807eb25de7c5de865dbc43cce2466389cedbc92b90aab0eb014f81 30104771ff961cd1861cd053689feab888c57b8a4a2e3989646ea7dea40f3c04
hash_to_ec b8c837501b6ca3e118db9848717c847c062bf0ebeca5a7c211726c1426878af5 19a1e204b4a32ce9cccf5d96a541eb76a78789dceaf4fe69964e58ff96c29b63
hash_to_ec 84376c5350a42c07ac9f96e8d5c35a8c7f62c639a1834b09e4331b5962ecace8 ba1e4437d5048bd1294eadc502092eafc470b99fde82649e84a52225e68e88f2
hash_to_ec a3345e4a4cfc369bf0e7d11f49aed0d2a6ded00e3ff8c7605db9a919cf730640 0d318705c16e943c0fdcde134aaf6e4ccce9f3d9161d001861656fc7ea77a0b1
hash_to_ec 3c994dfb9c71e4f401e65fd552dc9f49885f88b8b3588e24e1d2e9b8870ffab1 984157de5d7c2c4b43b2bffea171809165d7bb442baea88e83b27f839ebdb939
hash_to_ec 153674c1c1b18a646f564af77c5bd7de452dc3f3e1e2326bfe9c57745b69ec5c e9a4a1e225ae472d1b3168c99f8ba1943ad2ed84ef29598f3f96314f22db9ef2
hash_to_ec 2d46a705d4fe5d8b5a1f4e9ef46d9e06467450eb357b6d39faa000995314e871 b9d1aec540bf6a9c0e1b325ab87d4fbe66b1df48986dde3cb62e66e136eba107
hash_to_ec 6764c3767f16ec8faecc62f9f76735f76b11d7556aeb61066aeaeaad4fc9042f 3a5c68fb94b023488fb5940e07d1005e7c18328e7a84f673ccd536c07560a57b
hash_to_ec c99c6ee5804d4b13a445bc03eaa07a6ef5bcb2fff0f71678dd3bd66b822f8be8 a9e1ce91deed4136e6e53e143d1c0af106abde9d77c066c78ebbf5d227f9dde0
hash_to_ec 3009182e1efac085c7eba24a7d9ef28ace98ebafa72211e73a41c935c37e6768 e55431a4c89d38bd95f8092cdf6e44d164ad5855677aba17ec262abc8c217c86
hash_to_ec e7153acd114a7636a207be0b67fa86fee56dd318f2808a81e35dd13d4251b2d0 ff2b98d257e4d4ff7379e8871441ca7d26e73f78f3f5afcf421d78c9799ba677
hash_to_ec 6378586744b721c5003976e3e18351c49cd28154c821bc45338892e5efedd197 3d765fb7bb4e165a3fa6ea00b5b5e22250f3861f0db0099626d9a9020443dda2
hash_to_ec 5be49aba389b7e3ad6def3ba3c7dbec0a11a3c36fc9d441130ef370b8a8d29c2 2d61faf38062dc98ae1aaafec05e90a925c9769df5b8b8f7090d9e91b2a11151
hash_to_ec f7bc382178d38e1b9a1a995bd8347c1283d8a2e8d150379faa53fd125e903d2b 544c815da65c3c5994b0ac7d6455578d03a2bc7cf558b788bcdb3430e231635a
hash_to_ec c28b5c4b6662eebb3ec358600644849ebeb59d827ed589c161d900ca18715fa8 a2d64db3c0e0353c257aadf9abc12ac779654d364f348b9f8e429aa7571203db
hash_to_ec 3a4792e5df9b2416a785739b9cf4e0d68aef600fa756a399cc949dd1fff5033a 4b54591bd79c30640b700dfb7f20158f692f467b6af70bd8a4e739c14a66c86a
hash_to_ec 002e70f25e1ceaf35cc14b2c6975a4c777b284a695550541e6f5424b962c19f5 73987e9342e338eb57a7a9e03bd33144db37c1091e952a10bd243c5bb295c18a
hash_to_ec 7eb671319f212c9cae0975571b6af109124724ba182937a9066546c92bdeff0c 49b46da3be0df1d141d2a323d5af82202afa2947a95b9f3df47722337f0d5798
hash_to_ec ca093712559c8edd5c51689e2ddcb8641c2960e5d9c8b03a44926bb798a0c8dc b9ef9cf0f8e4a3d123db565afafb1102338bfb75498444ac0a25c5ed70d615da
hash_to_ec cfea0a08a72777ff3aa7be0d8934587fa4127cd49a1a938232815dc3fd8b23ac b4de604b3d712f1ef578195fb0e53c865d41e2dfe425202c6cfe6f10e4404eb5
hash_to_ec aa0122ae258d6db21a26a31c0c92d8a0e3fdb46594aed41d561e069687dedcd6 5247eaec346de1c6cddf0ab04c12cd1d85cdb6d3a2fba2a5f9a5fe461abef5eb
hash_to_ec b3941734f4d3ba34ccaf03c4c737ac5a1e036eb74309300ce44d73aca24fef08 535938985c936e3780c61fe29a4121d6cb89a05080b6c2147031ea0c2b5b9829
hash_to_ec 8c2ee1041a2743b30dcbf413cc9232099b9268f82a5a21a09b63e7aff750882f 6ad0d4b3a65b522dfad0e9ac814b1fb939bc4910bd780943c72f57f362754cca
hash_to_ec 4b6829a2a2d46c8f0d0c23db0f735fcf976524bf39ccb623b919dd3b28ad5193 2e0097d7f92993bc45ba06baf4ca63d64899d86760adc4eb5eeefb4a78561050
hash_to_ec 9c1407cb6bba11e7b4c1d274d772f074f410d6fe9a1ee7a22cddf379257877d9 692261c7d6a9a7031c67d033f6d82a68ef3c27bd51a5666e55972238769821cd
hash_to_ec 638c42e4997abf8a4a9bffd040e31bd695d590cde8afbd7efd16ffdbae63bf66 793024c8ce196a2419f761dde8734734af6bd9eb772b30cc78f2cb89598dce97
hash_to_ec 1fb60d79600de151a1cf8a2334deb5828632cbd91cb5b3d45ae06e08187ae23d ff2542cde5bc2562e69471a31cfc3d0c26e2f6ccc1891a633b07a3968e42521c
hash_to_ec d2fdbbae4e38a1b734151c3df52540feb2d3ff74edfef2f740e49a5c363406ee 344c83ba6ff4e38b257077623d298d2f2b52002645021241bc9389f81b29ad12
hash_to_ec 836c27a6ddfe1a24aba3d6022dff6dfe970f142d8b4ac6afb8efcba5a051942f b8af481d33726b3f875268282d621e4c63f891a09f920b8f2f49080f3a507387
hash_to_ec 46281153ddcdf2e79d459693b6fe318c1969538dd59a750b790bfff6e9481abf 8eaf534919ab6573ba4e0fbde0e370ae01eae0763335177aa429f61c4295e9d4
hash_to_ec d57b789e050bf3db462b79a997dac76aa048d4be05f133c66edee56afd3dbe66 0c5a294cb2cbb6d9d1c0a1d57d938278f674867f612ed89dcbe4533449f1a131
hash_to_ec 548d524d03ac22da18ff4201ce8dbee83ad9af54ee4e26791d26ed2ab8f9bfc7 c6609d9e7d9fd982dec8a166ff4fb6f7d195b413aad2df85f73d555349134f3b
hash_to_ec cc920690422e307357f573b87a6e0e65f432c6ec12a604eb718b66ba18897a56 6f11c466d1c72fccd81e51d9bda03b6e8d6a395e1d931b2a84e392dc9a3efa18
hash_to_ec c7fb8a51f5fcd8824fc0875d4eb57ab4917cb97090a6e2288f852f2bb449edd9 45543fea6eed461016e48598b521f18ff70178afea18032b188deea3e56052fc
hash_to_ec c681bb1b829e24b1c52cb890036b89f0029d261c6a15e5b2c684ee7dfe91e746 263006fe2c6b08f1ab29cdf442472c298e2faf225bbf5c32399d3745cd3904bd
hash_to_ec e06411c542312fdd305e17e46be14c63bab5836dc8751da06164b1ae22d4e20f 901871be7a7ff5aecade2acff869846f3c50de69307ac155f2aa3a74d5472ef2
hash_to_ec 9c725a2acb80fa712f9781da510e5163b1b30f4e1c064c26b5185e537f0614ea 02420d49257846eb39fddd196d3171679f6be21d9adac667786b65a6e90f57b1
hash_to_ec 22792772820feafa85c5cb3fa8f876105251bef08617d389619697f47dff54f2 a3ad444e7811693687f3925e7c315ae55d08d9f4b0a29876bc2a891ab941c1c3
hash_to_ec 0587b790121395d0f4f39093d10b4817f58a1e80621a24eea22b3c127d6ac5a2 86c417c695c64c7becaad0d59ddbb2bca4cb2b409a21253d680aac1a08617095
hash_to_ec fa0b5f28399bef0cd87bfe6b8a2b69e9c5506fb4bacd22deba8049615a5db526 ede0ea240036ff75d075258a053f3ce5d6f77925d358dbe33c06509fc9b12111
hash_to_ec 62a3274fc0bed109d5057b865c2ba6b6a5a417cb90a3425674102fcd457ede2d ff7e46751bb4dcd1e800a8feab7cf6771f42dc0cfed7084c23b8a5d255a6f34e
hash_to_ec a6fcd4aecaaaf281563b9b7cd6fbc7b1829654f644f4165942669a2ef632b2bf 28f136be0eb957a5b36f8ec294399c9f73ad3a3c9bb953ad191758ced554a233
hash_to_ec 01baa4c06d6676c9b286cda76ed949fd80a408b3309500ba84a5bb7e3dce58e2 a943d1afa2efce284740e7db21ea02db70b124808be2ff80cbf9b9cb96c7b73e
hash_to_ec dd9aff9c006ba514cef8fae665657bc9813fe2715467cf479643ea4c4e365d6d 68de2f7d49de4004286ce0989a06a686b15d0f463a02ffd448a18914e1ddf713
hash_to_ec 3df3513d5e539161761ce7992ab9935f649bc934bed0da3c5e1095344b733bb9 e9c2dd747d7b2482474325943cd850102b8093164678362c7621993a790e2a8a
hash_to_ec 7680cfb244dc8ef37c671fff176be1a3dad00e5d283f93145d0cbee74cca2df4 a0fd8c3cca16a130eaa5864cbe8152b7adfbf09e8cf72244b2fc8364c3b20bf4
hash_to_ec 8a547c38bd6b219ea0d612d4a155eba9c56034a1405dcf4b608de787f37e0fd8 76bf0dc40fd0a5508c5e091d8bb7eccfa28b331e72c6a0d4ac0e05a3d651850b
hash_to_ec dd93901621f58465e9791012afa76908f1e80ad80e52b809dc7fc32bb004f0a8 09a0b7ecfe8058b1e9ee01c9b523826867ca97a32efad29ac8ceebca67a4ea00
hash_to_ec b643010220f1f4ee6c7565f6e1b3dc84c18274ede363ac36b6af3707e69a1542 233c9ff8de59e5f96c2f91892a71d9d93fa7316319f30d1615f10ac1e01f9285
hash_to_ec c2637b2299dfc1fd7e953e39a582bafd19e6e7fff3642978eb092b900dbfea80 339587ba1c05e2cba44196a4be1fd218b772199e2c61c3c0ff21dcd54b570c43
hash_to_ec 1f36d3a7e7c468eb000937de138809e381ad2e23414cbbaac49b7f33533ed486 7e5b0a96051c77237a027a79764c2763487af88121c7774645e97827fb744888
hash_to_ec 8c142a55f60b2edbe03335b7f90aa2bd63e567048a65d61c70cb28779c5200af d3d6d5563b3d81c8c91cf9806bb13b2850fb7c162c610fd2f5b83c464add8182
hash_to_ec 99e7b98293c9de1f81aff1376485a990014b8b176521b2a68cdbde6300190398 119cbc01a1d9b9fb4759031d3a70685aebea0f01bc5ee082ce824265fd21b3b4
hash_to_ec 9753bd38be072b51490290be6207ca4545e3541bdf194e0850ae0a9f9e64b8ba 1ad3aa759863153606fa6570f0e1290baded4c8c1f2ba0f67c1911bfc8ccd7a0
hash_to_ec 322703864ceee19b7f17cec2a822f310f0c4da3ff98b0be61a6fd30ac4db649c 89d9e7a5947e1cde874e4030de278070aae363063cd3592ce5411821474f0816
hash_to_ec c1acd01e1e535fad273a8b757d981470f43dd7d95af732901fbba16b6e245761 57e80445248111150da5e63c706b4abbf3eef2cc508bd0347ff6b81e8c59f5bc
hash_to_ec 492473559f181bbe78f60215bc6d3a5168435ea2fc0a508372d6f5ca126e9767 df3965f137cf6f60c56ebd7c8f246281fd6dc92ce23a37e9f846f8452c884e01
hash_to_ec afa9d6e0e2fb972ee806beb450c2c0165e58234b0676a4ec0ca19b6e710d7c35 669a57e69dd2845a5e50ed8e5d8423ac9ae792a43c7738554d6c5e765a7b088a
hash_to_ec 094de050bdadef3b7dbaeeca29381c667e63e71220970149d97b95db8f4db61b 0cf5d03530c5e97850d0964c6a394de9cde1e8e498f8c0e173c518242c07f99a
hash_to_ec 2ce583724bc699ad800b33176a1d983512fe3cb3afa65d99224b23dae223efb7 e1548fd563c75ae5b5366dbab4cb73c54e7d5e087c9e5453125ff8fbe6c83a5c
hash_to_ec 8064974b976ff5ef6adaade6196ab69cda6970cd74f7f5899181805f691ad970 98ae63c47331a4ac433cb2f17230c525982d89d21e2838515a36ec5744ec2d15
hash_to_ec 384911047de609c6ae8438c745897357989363885cef2381a8a00a090cf04a58 4692ec3a0a03263620841c108538d584322fdd24d221a74bf1e1f407f83828af
hash_to_ec 0e1b1ced5ae997ef9c10b72cfc6d8c36d7433c01fc04f4083447f87243282528 6ee443ab0637702b7340bd4a908b9e2e63df0cc423c409fb320eb3f383118b80
hash_to_ec 5a7aea70c85c040af6ff3384bcaa63ec45c015b55b44fffa37ab982a00dc57c5 2df2e20137cefd166c767646ecd2e386d28f405aebe43d739aa55beba04ed407
hash_to_ec 3e878a3567487f20f7c98ea0488a40b87f1ba99e50bbfe9f00a423f927cbd898 697c7e60e4bf8c429ba7ac22b11a4b248d7465fc6abe597ec6d1e1c973330688
hash_to_ec c0bb08350d8a4bb6bf8745f6440e9bd254653102a81c79d6528da2810da758e4 396a872ac9147a69b27223bf4ec4198345b26576b3690f233b832395f2598235
hash_to_ec 6c3026a9284053a4ddb754818f9ae306ffa96eb7003bd03826eeccc9a0cf656e bef73da51d3ba9972a33d1afb7d263094b66ab6dbe3988161b08c17f8c69c2d5
hash_to_ec f80b7d8f5a80d321af3a42130db199d9edcb8f5a82507d8bfca6d002d65458b6 aa59c167ea60ee024421bfbd00adbb3cbfc20e16bd3c9b172a6bef4d47ca7f57
hash_to_ec bc0ffc24615aa02fafef447f17e7b776489cd2cc909f71e8344e01cad9f1610d 5c4195cc8dc3518143f06a9c228ae59ec9a6425a8fab89bfc638ad997cf35220
hash_to_ec b15fad558737229f8816fcba8fbef805bd420c03e392d118c69bdf01890c4924 f5810477e37554728837f097e1b170d1d8c95351c7fff8abbbfc624e1a50c1b9
hash_to_ec ec8c1f10d8e9da9cf0d57c4a1f2c402771bed7970109f3cf21ad32111f1f198f a697e0a3f09827b0cf3a4ffb6386388feda80d30ffffcbd54443dafcba162b28
hash_to_ec a989647bf0d70fdb7533b8c303a2a07f5e42e26a45ffc4e48cff5ba88643a201 450fd73e636f94d0d232600dd39031386b0e2ecde4105124fc451341da9803db
hash_to_ec 7159971b03c365480d91d625a0fadc8e3a632c518acf0dbec87dd659da70e168 377bc43c038ac46cf6565aa0a6d6bf39968c0c1142755dba3141eeebf0acdf5d
hash_to_ec e39089a64fedac4b2c25e36312b33f79d02bf75a883f450f910915b8560a3b06 77efa7db1be020e77596f550de45626824a8268095d56a0991696b211cb329cc
hash_to_ec 2056b3c6347611bb0929dad00ec932a4d9bec0f06b2d57f17e01ffa1528a719e b6072c2be2ce928e8cbbb87e8eb7e06975c0f93b309dd3b6a29edaad2b56f99b
hash_to_ec 2c026793146e81b889fc741d62e06c341ce263560d57cd46d0376f5b29174489 8f1f64b67762aa784969e954c196a2c6610addc3604aa3291eb0b80304dfe9ef
hash_to_ec be6026d6704379c489fa7749832b58bdb1a9685a5ffb68c438537f2f76e0011f 0072569a4090a9ad383a205bb092196c9de871c22506e3bb63d6b9d1b2357c96
hash_to_ec f4db802d5c6b7d7b53663b03d988b4cd0c7cad6c26612c5307754a93ebdc9710 f21bc9be4cb28761f6fe1d0a555ad5e9748375a2e9faea25a1df75cc8d273e18
hash_to_ec c27d79a564c56b00956a55090481e85fbc837fd5fb5e8311ecb436e300c07e3a 1b1891e6abec74621501450cd68bb1eeaa5b2fffff4ec441a55d1235ff3a0842
hash_to_ec a1e2f93c717cad32af386efa624198973df5a710963dd19d4c3ac40032a3a286 69c60571e3f9f63d2bfb359386ae3b8cd9e49a2e9127753002866e85c0443573
hash_to_ec 76920d7b1763474bc94a16433c3c28241a9acdee3ff2b2cb0e6757ba415310aa c1b409169f102b696fc7fa1aa9c48631e58e08b5132b6aadf43407627bb1b499
hash_to_ec 57ac654b29fa227c181fff2121491fcb283af6cbe932c8199c946862c0e90cb2 a204e8d327ea93b0b1bd74a78ffc370b20cea6455e209f2bc258114baa16d728
hash_to_ec 88e66cfaef6432b759c50efce885097d1752252b479dac5ed822fa6c85d56427 6fb84790d3749a5c1088209ee3823848d9c19bf1524215c44031143dd8080d70
hash_to_ec c1e55da929c4f8f793696fc77ff4e1c317c34852d98403bfd15dd388ee7df0df 2f41e76f15c5b480665bd84067e3b543b85ce6de02be9da7a550b5e1ead94d34
hash_to_ec 29e9ace5aa3c5a572b13f4b62b738a764d90c8c293ccb062ad798acbab7c5ef4 bce791aba1edc2a66079628fd838799489ab16b0a475ce7fe62e24cc56fe131c
hash_to_ec f25b2340689dadacaa9a0ef08aee8447d80b982e8a1ea42cf0500a1b9d85b37d f7f53aa117e6772a9abc452b3931b0a99405ac45147e7c550ac9fcf7ffe377b5
hash_to_ec 0cb6c47fc8478063b33f5aed615a05bcc84d782c497b6cc8e76ec1fa11edbfdb 7a0b58b03147e7c9be1d98de49ead2ce738d0071b0af8ca03cc92ceb26fc2246
hash_to_ec 7bd7287d7c4b596fe46fe57a6982c959653487bea843a77dd47d40986200d576 343084618c58284c64a5ff076f891be64885dc2ac73fa1567f7b39fde6b91542
hash_to_ec e4984bf330708152254fb18ecef12d546afd24898a3cf00fba866957b6ee1b82 c70e88b061656181fbd6ff12aca578fb66de5553c756ea4698a248b177185bc6
hash_to_ec cefd6c3cb9754ea632d6aea140af017de5ea12e5184f868936b74d9aa349d603 4b476502a8a483aadd50667f262f95351901628dd3a2aac1a5a41c4ea03f1647
hash_to_ec da5d0f33344ee7f3345204badf183491b9452b84bccc907602c7bad43e5cf43e 9561b9e61241625e028361494d4fa5cd78df4c7219fa64c8fede6d8421b8904a
hash_to_ec d6f0a4f8c770a1274a76fd7ae4e5faf7779249263e1aaecc6f815cf376f5c302 cd5c55820be10f0d38feb81363ede3716a9168601a0dd1ce3109aab81367d698
hash_to_ec b6bf32491d12a41c275d8518fc534d9a0d17aade509e7e8b8409a95c86167307 4aae534abbd67a9a8f2974154606c0e9be8932e920c7a5e931b46a92859acf82
hash_to_ec 0f930beaad041f9cefd867bc194027dd651fb3c9bda5944ececdba8a7136b6d3 521708f8149891b418d0920369569a9d578029c78f8e41c68a0bb68d3ad5df60
hash_to_ec 49b1fe0f97be74b81e0b047027b3e9f726fa5e90a67dafa877309397291c06c5 0852e59dfae5ec32cce606c119376597bce5cd4d04879d329f74e3ec66414cd3
hash_to_ec 4d57647d03f2cfbd4782fcc933e0683b52d35fc8d37283e6c7de522ddfa7e698 cbeb9ebfbbc49ec81fac3b7b063fecac1bb40ea686d3ffb08f82b291715cd87f
hash_to_ec 4ea3238c06fc9346c7421ff85bc0244b893860b94bc437378472814d09b2e99f a1fbae941adc344031bbdf53385dfdc012311490a4eb5e9a2749a21b27ce917a
hash_to_ec 0cd3609f5c78b318cb853d189b73b1ee2d00edd4e5fce2812027daa3fcb1fed1 0c7a7241b16e3c47d41f5abbf205797bd4b63fc425a7120cb2a4bf324e08ae74
hash_to_ec d74ab71428e36943c9868f70d3243469babd27988a1666a06f499a5741a52e3e 65b7c259f3b4547c082b2a7669b2b363668c4d87ac14e80471317b03b34e5216
hash_to_ec f6b151998365e7d69bcbce383dd2e8b5bf93b8b72f029ff942588208c1619591 6ce840ce5dfbca238665c1e6eddb8b045aa85c69b5976fc55ab57e66d3d0a791
hash_to_ec 207751de234b2bd7ec20bdd8326210c23aa68f04875c94ad7e256a96520f25d6 fc8f79ab3af317c38bfb88f40fb84422995a0479cfa6b03fa6df7f4e5f2813fb
hash_to_ec 62291e2873f38c0a234b77d1964205f3f91905c261d3c06f81051a9b0cb787cb 076d1d767457518e6777cb3bd4df22c8a19eb617e4bbccd1b0bd37522d6597a5
hash_to_ec 4b060df2d2854036751d00190ee821cb0066d256d4172539fdfa6fbd1cdfe1f9 59866e927c69e7de5df00dc46c0d2a1ddf799d901128ff040cebb8fd61b95da4
hash_to_ec ac8daf73f9c609bb36bce4fdeec1e50be5f22de38c3904fabcf758f0fc180bc7 7d8dc4e956363b652468a5fecafd7c08d48a2297e93b8edcb38e595fdd5a1fde
hash_to_ec fef7b6563fd27f3aab1d659806b26b8f2ec38bc8feefad50288383c001d1c20f e6e42547f12df431439d45103d2c5a583248f44554a98a3a433cf8c38b11805d
hash_to_ec 40a3d6871c76ecc6bb7b28324478733e196cc11d062dd4c9265cf31be5cf5a97 8c55a3811c241a020b1be202a58d5defbc4c8945d73b132570b47dd7c019ccf0
hash_to_ec 0cd71e7e562b2b47f4bc8640caf20e69d3a62f10231b4c7a372c9691cff9ac3c fb8e4e3de479b3bf1f4f13b4ed5507df1e80bd9250567b9d021b03339d6e7197
hash_to_ec 40a4e62800a99b7a26e0b507ffb29592e5bdba25284dc473048f24b27d25b40a 90ae131d29ee4a71cd764ab26f1ca4e6d09a40db98f8692b345c3a0e130dc860
hash_to_ec 1ddf35193cf52860bfe3e41060a7f44281241c6ae49cd541d24c1aca679b7501 3b4f50013895c522776ced456329c4e727de03575f6b99ae7d238a9f70862121
hash_to_ec 014e0fa8ce9d5df262b9a1765725fde354a855de8aef3fc23684e05dd1ba8d34 3857f57776a3cb68721bcb7f1533a5f9fb416a1dc8824d719399b63a142d24de
hash_to_ec 09987979b0e98d1d5355df8a8698b8f54d3a037d12745c0a4317fe519c3df9cc 32a181e2b754aeced214c73ac459c97d99e63317be3eb923344c64a396173bca
hash_to_ec 51e9e8ec4413e92dbaaba067824c32b018487a8d16412ed310507b4741e18eed 0356b209156b4993fd5d5630308298429a1b0021c19bedecb7719ac607cfa644
hash_to_ec 14d91313dfe46e353310e6a4a23ee15d7a4e1f431700a444be8520e6043d08d9 6f345f4018b5d178d9f61894d9f46ac09ff639483727b0d113943507cee88cfd
hash_to_ec 0d5af9ace87382acfffb9ab1a34b6e921881aa015d4f6d9c73171b2b0a97600d a8dbf36c85bebe6a7b3733e70cd3cd9ed0eb282ca470f344e5fcf9fe959f2e6e
hash_to_ec 996690caac7328b19d20ed28eb0003d675b1a9ff79055ab530e3bf170eb22a94 14340d7d935cffce74b8b2f325c9d92ce0238b51807ef2c1512935bb843194ce
hash_to_ec ad839c4b4c278c8ebe16ff137a558255a1f74646aa87c6cd99e994c7bb97ce8a d4f2da327ffded913b50577be0e583db2b237b5ca74da648e9b985c247073b76
hash_to_ec 26fc2eeeee983e1300d72362fdff42edf08038e4eee277a6e2dbd1bd8c9d6560 3468b8269728c2c0bfc2e53b1575415124798bc0f59b60ea2f14967fc0ca19ce
hash_to_ec db33cecaf4ee6f0ceba338cc5fabfb7462cd952a9c9007357ff3f0ca8336f8bc 0bab38f58686d0ff770f770a297971510bc83e2ff2dfead34823d1c4d67f11af
hash_to_ec a0ee84b3c646526fb8787d26dcd9b7fe9dc713c8a6c1a4ea640465a9f36a64df 4d7a638f6759d3ec45339cd1300e1239cca5f0f658ca3cd29bc9bdb32f44faf0
hash_to_ec 6a702e7899fcf3988e2b6b55654c22e54f43d3fa29de19177bdff5b2295fe27f 145d5748d6054fb586568e276f6925aef593a5b9c8249ad3dbef510af99b4307
hash_to_ec 30ce0fd4f1fac8b62d613b8ee4a66deef6eb7094bd8466531050b837460f6971 f3aa850d593ba7cef01389f7e1916e57617f1d75cd42f64ce8f5f272384b148c
hash_to_ec 3aa31d4ad7046ad13d83eb11c9a6e90eb8483a374a77a9a7b2a7cc0978fefa76 2fe0827dc080d9c1e7ec475a78aa7ae3c86d1a35f4c3f25f4a1f7299cacf018a
hash_to_ec 8562a5a91e763b98014523ebb6e49120979098f89c31df1fde9eb3a49a15b20f ae223bf85e2009a9daf5fd8a14685e2e1e625fc88818b2fd437dd7e109a48f59
hash_to_ec ccf9c313a47b8dbf7ce42c94b785818bc24134d95b6d22acc53c1ec2be29cf27 3e79fce6fe5aa14251b6560df4b76e811d7739eec097f27052c4403a283be71d
hash_to_ec d1e33cd6f8918618d5fb6d67ad8de939db8beaec4f115551eac64479b739b773 613fffcbe1bf48bb2d7bfd64fd97790a06025f8f2429edddb9ac145707847ecf
hash_to_ec 81eaeced34dd44e448d5dafa5715225e4956c90911c964a96ff7aa5b86b969bc 8f81177495d120a1357380164d677509b167f2958eb8b962b616c3951d426d8c
hash_to_ec 2bc001a29f8eab1c7377de69957ba365fb5bdaf9c2c220889709af920dfe27d3 9bcb3010038f366fa4c280eed6e914a23bfc402594d0b83d0e66730a465a565b
hash_to_ec 6feeb703c05e86c58d9fc5623f1af8657ecd1e75a14d18c4eedb642a8a393d16 6544628ba67ed0e14854961739c4d467fcf49d6361e39d32ea73dabeae51e6c3
hash_to_ec e8ff145a7c26897f2c1639edd333a5412f87752f110079f581ccdc87fcce208c d4b5a6e06069c7e012e32119f8eda08ff04a8dfa784e1cf1bced455a4d41d905
hash_to_ec 80488131dcb2018527908dbf8cdf4b823ef0806dc1d360f4da671004ef7ff74d 9984a79d9fd4f317768b442161116eef84e2ca49e938642b268fd64312d59a27
hash_to_ec d8c4ca60446849a784d1462aa26a3b93073ff6841cb2da3ef52ab9785b00b1fd da5ec1562e7de2382d35728312f4eea3608d4dba775c1c108de510e1ce97d059
hash_to_ec 68645728dfc6b9358dfb426493238ba38f24a2f46a3e89edb47d212549939cb7 d3253aa7235113dcc1b577d3bb80be34f528398815a653dbdbacbcbdfd5887a1
hash_to_ec 4e8eb97ba2d1046e1b42e67530a61441e31c84e5e5e448d8e8dbe75d104eaccb de94f73e83222aa0e39b559d4fef70387b0815b9b2f6beff5da67262d8f0eb3e
hash_to_ec 104ff03122ffdf59b22b8c0fe3d8f2ef67d02328e4d5181916d3d2a92f9a0bb7 1517ccf69c0328327e1cf581f16944ff66bc91c37e1cd68a99525415e00b7c9f
hash_to_ec 80f23aae7356ae9a2f9f7504495a731214d26f870fb7df68fdc00b233494156f 7aef046b0a70f84e8d239aa95e192b5a3fffa0fae5090c91273e8996beca9e38
hash_to_ec 2424b33235955a737ebddbf1c6c59cd8778af74da3bd3e658447666a2ab2f557 d19e2be8d482950fbdae429618da7a9daedb8c5944dea19cd1b6b274e792231b
hash_to_ec 0adc839d2b8f099e4341a4763b074c06318d6bcbd1ec558d20a9820c4a426463 cea5da12a84e5c20011726d9224a9930bec30f9571762dd7ca857b86bd37d056
hash_to_ec 46c84d53951f1ba23c46a23d5d96bf019c559aa5d2d79e4535cfcdb36f38ce25 2a913a01a6f7dd78a43cdd5354d1160d9a5f0d824c489a892c80eba798a77567
hash_to_ec 99bdaaf68555ccdc93d97c3a0fb4c126a1aa8b1202194a1a753401a6cae21055 1f645efe173577a092f2d847cc966e28ba3b36397fe84c96dfa4724ed4fcfdf9
hash_to_ec c540ff78f1e063ad26ffa69febb8818c9f2a325072c566091ad816e40fe39af4 de7a762262c91ab4beccc0713233cb91163aec43e34de0dbcfad0c431e8a9722
hash_to_ec de8b1ff8978cd5e02681521542b7b6c3c2f8f4602065059f83594809d04e3dda 290601e75207085bff3e016746e55a80310a76dea9ef566c24181079c76da11c
hash_to_ec d555994c8a022e52602d2a8bdd01fc1bfa6b9ab6734ff72a1bd5f937de4627f8 5f6794e874f48c4b362d0a24207374c2d274e28de86351afc6ddb95d8cc2fd62
hash_to_ec 19db72f703fe6f1b73f21b6ba133ae6b111ae8cc496d3aa32e02411e34c0d8d7 42f159f43d2d62b8cf8a47d5f1340c5cf070e9860fc60de647c55d50fe9f5607
hash_to_ec 23a87a258c2a5d1353aa2d5946f9e5749b92f85e3c58e1d177c3b6c3dcac809c e5685016f79d5e87d1fecb3e2a0fe64e4875f7accd2f6649d7f6b16317549cb1
hash_to_ec 43e1738d7d1b5b565f5fc78e81480f7edf9a4dc18f104fc4be95135b98931b17 650f5b682e45f2d0c5d5e8bcfd9e0cda7d9071b55ecbfaf5e3b59941cd7479f2
hash_to_ec a9d644de0804edf62dee613efa2547e510990a9b7a987ebe55ec74c23873a878 52ad329f88499a4f110e6a6cba1f820012d8db6ccb8f6495ab1e3eb5a24786e1
hash_to_ec 11f2b5d89a0350d7c8727becf0f4dd19bd90f8c94ff207132ab13282dd9b94e6 b798a47bb98dc2a8f99deaf64d27638e33a0d504c5d2fbee477a2bc9b89e2838
hash_to_ec 5e206e3190b3b715d125f1a11fff424fb33e36e534c99ddde2a3517068b7dcc4 2738e9571c96b2ddf93cb5f4a72b1ea78d3731d9555b830494513c0683c950ca
hash_to_ec efc3d65a43d4f10795c7265a76671348f80173e0f507c812f7ae76793b99c529 cf4434d18ce8167b51f117fe930860143c46e1739a8db1fba73b6b0de830d707
hash_to_ec 81f00469788aad6631cf75b585ae06d43ec81c20479925a2009afac9687dff60 c335b5889b36ba4b4175bb0d986807e8eedb6f6b7329b70b922e2ab729c4202a
hash_to_ec 9ef5ff329b525ee8f5c3ac38e1dba7cb19985617341d356707c67ff273aed02d bef9f9e051ba0e24d1fdf72099cf43ecdd250d047fb329855b5372d5c422db9e
hash_to_ec 3fa1401bd63132cf8b385c0fa65f0715ba1fe6161e41d59f8033ae2b22f63fa1 8289a1cb3c2dae48879bb8913fafe2d196cc2fdab5f2a77607910efd33eae6df
hash_to_ec 6559836fd0081fa38a3f8d8408b564e5698b9797cf5e15f7f12a7d2c84511989 28d405a6687d2ecc90c1c66bf0454d58f3fa38835743075e1db58c658e15a104
hash_to_ec 8e0882d45f0e4c2fb2839d3be86ff699d4b2242f5b25ac5a3c2f65297c7d2032 2771fdcf9135a62007adb5f0004d8222f0e42f819c81710aa4dc3ab2042bebf3
hash_to_ec 1d91dc4dd9bd82646029d13aca1af96830c1d8a0400ddebeb14b00c93501c039 7792c62e897f32cbc9c4229f0d28f7882ceeae120329a1cd35f76a75ac704e93
hash_to_ec 09527f9052acbbdd7676cbbd9534780865f04a27aaadad2b7d4f1dac68883cf0 b934220cde1327f2dc6af67bcb4124bf424d5084ef4da945e4daad1717cd0bb8
hash_to_ec 2362e1abe73e64cdd2ca7f6c5ea9f467213747dd3f2b7c6e5df9cb21e03307d7 676b7122b96564358bbaaf77e3a5a4db1767e4f9a50f6ddd1c69df4566755af9
hash_to_ec 26c2dd2356e9b6c68a415b25f91d18614dc8500c66f346d28489da543ee75a94 0f4fd7086acd68eb7c9fa2410e2ecf18e34654eb44e979bc03ce436e992d5feb
hash_to_ec 422dc0a09d6a45a8e0b563eeb6a5ee84b08abd3a8cb34ff93f77ba3b163f4042 631f1b412ff5a0fccbe53a02b4a3deaa93a0418ed9874df401eb698ef75d7441
hash_to_ec ceecdf46f57ef3f36ff30a1a3579b609340282d1b26ab5ddef2f53514e91bab1 9bc6f981fe98d14a2fc5b01a8134b6d35e123ec9ab8a3f303e0a5abb28150e2e
hash_to_ec 024a9e6e0d73f28aa6207fb1e02ce86d444d2d46f8211e8aaab54f459db91a5a 5fb0c1d2c3b30f399102104ea1874099fa83110b3d9c1fcfffb2981c98bf8cdf
hash_to_ec 5b8e45e269c9ccac4c68e532a72b29346d218f4606f37a14064826a62050e3a8 c7be46a871b77fc05ce891d24bd6bd54d9775b7ef573c6bc2d92b67f3604c1d1
hash_to_ec 9a6593a385c266389eef14237874b97bdcd1823c3199311667d4853c2d12aa81 9f55ee9d94102d2b9c5670f30586cf9823bf205b4d4fe088c323e87c4e10f26f
hash_to_ec 27377e2811598c3569b92990865d39b72c7a5533e1be30f77330863187c11875 abd82bc726f2710a8b87e4c1cf5a069f0ae800de614468d3ff35639983020197
hash_to_ec 7cacfaa135fb7d568b8dce8ea9136498b1b28c6d1020af45d376288d78d411f0 229fccd49744c0692508af329224553d21561ee6062b2b8a21f080f73da5bd97
hash_to_ec 52abd90a5542d6496b8dec9567b020f30058e29458d64f2d4f3ad6f3bfc1a5a0 874e82ced7cf77577b3374087fb08a2300b7f403de628310c26bdb3be869d309
hash_to_ec 5c8eebe9d12309187afa8d0d5191de3fdb84e5a05485d7cd62e8804ce7fdc0bc 12b7537643488aa8b9dcc4bae040cd491f8b466163b7988157b0502fb6c9177f
hash_to_ec 6ca3dd5c7a21a6bf65d6eefbe20a66e9b1d6b64196344be0c075f47aea48e3aa 5e1d0705ee24675238293b73ab1d98359119d4b328275be2460cc6ee4d19cc88
hash_to_ec d7e6cd0d39b4308c2a5ee547c4569c8bb3887e49cedece62d218d7c3c5277797 793dc4397112dfd9a8f4e061f457eb6d6fbb1d7a58c40bad5f16002c64914186
hash_to_ec 9cb6de8ba967cca0f0f861c6e20546f8958446595c01c28dae7ba6cfa09d6b14 ba1a2f7502b58fee3499c20e35fa01bb932e7a7c4a925dc04fbf5d90f33cfb5e
hash_to_ec 8ef9c7366733a1edcd116238cdbd177d61222d5c3e05b30ef6b85014cbcb6b79 8fc89664722947164ac9b77086aed319897612068f56ecd57f47029f14671603
hash_to_ec 7f317a34e4fb7de9f69cb107ffc0e57fd9f5c85b85ccb5319d05cebfc169924a 4b71c42339c73db7d710cd63f374d478a6c13bdc352cff40e967282268965ba7
hash_to_ec 15beef8d9687b92918a903b01d594859db4e7128263c8db0cae9d423ff962c1e cd75e6323952f6ac88f138f391b69f38c46d70b7eda61f9e431725b6f1d514a5
hash_to_ec 7a1c04c9af8fc6649833fe81e96f0199fcfe94959256cbe1490075fc5be0904e 0368270cd979439ae0a9552a5d6c9f959e4247fcf920d9e071464582e79c04b1
hash_to_ec c854c583d338615f85f69061e0fa9c9d7c5bbbfe562e8774fef3be556fe8bb63 061620171d7320f64bee98414ff7200a1f481521d202fb281cab06be73b80402
hash_to_ec 0fb8af5aba05ad2503edf1cfad5a451da088e7e974772057cd991a4e0601a3eb d3cbc20384a4420143fcce2cb763b0c15bec4f3267d1bdad3c34c1ee6b790f5e
hash_to_ec 9a251cf59e84a9da5630642f9671c732440caa8fcf4c92446a7e5f5ef99da46c 9b9679086a433f2077f40bcd4c7545fb5cc87e7dbb8bba468d53cb04a74361a0
hash_to_ec 8c632e357cef00e0911eb566f8cc809136b3f5ac1e82d183e4d645cef89fa155 5e06b0f4f278fa1ccb5431866e0b35171cdb814e2e82b9189ce01d8d8a1b2408
hash_to_ec 4aa4c31463475086a5d96b3ff550340567ab3b4a86fa3f01cfe9be18bc4dcb54 76a2916cfc093f27992e1f07b50f431d61d58e255507e208cd29ea4d3bc56623
hash_to_ec 1d33d9aadb949346e3c78d065a0f5262374524f4cb97a7390c8cdaede7ca6578 9ad2f757f499359903031adea6126c577469c4e834a2959e3ac08ee74b13783c
hash_to_ec d9217b9a070df20c4d2f0db42ff0bb36bfba9f51b0b6df8fdfe150405dce4934 65a843c522b4b8ec081a696a0d2dd8dfdfea45db201de7a5889a1446c6dff8c7
hash_to_ec b665b2ca8a285e44ba84e785533b56496a5319730dbb95bc14d3bdfece7544dc 8a804cd13457497b0a29eeca2cecfaa858766ec1d270a0e0c6785b43fd49b824
hash_to_ec 43b5cbcc21b3404bca97fa9a661940fe64d40f3ca569310e50b1bb0173c4d5ee 6c12fffb540d536060bb8b96cf635c1b2cbaa4d875a8d2fb0bf79a690363df19
hash_to_ec 11c58f20562c00dec5bb4456be07cd98186837e9af38d50d45f5e7b6f0f9000d cee76b567586f66dadd38c01213bfc1a17d38e96a495efb4c26063dc498ba209
hash_to_ec b069a980b51d8e030262db0b30069e660f4a3f6f8075d1790c153ba12b879f8b 262391b00bdee71d1d827b2cfe50b46c29e265934dc91959bd369aca0cc6444e
hash_to_ec 75274bfd79bf33eb2f9ab046d34528af9a71811e7e3d55c20eb049c81ac692d8 cb93c850e36896fe6626e97c53652af6736ec3ba0641c7765d0cca2bad2352de
hash_to_ec 5cdb6a24d9736a00f197d9707949fedc5405f367744fe8c83b7cff650302b589 8b4ac03123fab9275dcf340345a1b11fba48ef106d410ba2e0e6f6457037a419
hash_to_ec 07fdc85f809f95a07b59b084402bf91c512ebbe05c7657d6ba27a9e7e121e3e2 61182b3def063630e11de648a278032bcb75949f3a24ef5a133da87830ae5c4e
hash_to_ec a4188ca634cbb796f9927822e343d7b267e0a609c1a0ffa4dcf3726b9ffcc8a2 a911e4899fda28fd6337d708d34553ac5e810ee4938f6f7d9d6e521cab069edb
hash_to_ec 3c128ec5c955ea189a5789df2c892e94193a534a9d5801b8f75df870bc492a69 59eef5ee9df0f681df5b5c67ead1f06b059a8a843837b67f20cce15779608170
hash_to_ec 51a4cc7ec4a14a98c0731e9de7f3ce0779123222d95455e940f2014a23729ec8 105863ccda076af7290d1bf9ec828651dc5811159839044d23f1c3e31a11c5e2
hash_to_ec 1b901a31acbb7807c3309facdc7d04bc3b5a4aa714e6e346bd1c6ad4634e6534 01b3c0000b6c6b471c67c6ab3f9c7a500beaea5edb5c8f2b34df91b69ff67f21
hash_to_ec d2f2c8d79cfa2e7cb2db80568ba62ca0576741acfbe5e2baa0d9b3c424a7c84d 7df9d9088022bd1ce6814d6f8051eef27a650ee38e789b184da2691efd27139d
hash_to_ec 04dcb7644fdfc12d8e34d6e57d7769db939b4a149ed2b81aa51a74ee90babe19 6cff0ab2dd3b32ba1bd1a78e3661722f3f10003a01ce83e430970557decedb2c
hash_to_ec 222798c6841eeaa07e7b7e29686942d7c7f9afc38d09360c8e1f52f2b7debd12 133e3a04ec82aa9b8dbbec18cadbafff446d1270bf7c6f3f97ddd3906dae2468
hash_to_ec 4f7277c3ef247a0689b486ad965f969c433fc63e95d7310e789c4708418ccabc 7e0f2c984dd3cffb35458938c95fe92acf2e697aed060b0e3377c7a07e53c494
hash_to_ec 359b4d6709413243ae2c5409ea02714a9f8961bbbb64a91e81daf01e18c981bf eab69af2cb7f113ad6a27035c0399853d10bd0b99291fad37794d100f7530431
hash_to_ec 6cea3c6a9eb38f60329537170aa4db8dbb869af2040061e53b10c267daf6568c da9a97f4fa96bd05dade5e2704a6a633ba4dbe5080a1e831cda888e9d4f86615
hash_to_ec 3dddecb954ef0209bcf61fd5b46b6c94f2384ef281c48a20ffee74f90788172d af9899c31f944617af54712f93d1a2b4944e48867f480d0d1aec61f3b713e32d
hash_to_ec 9605247462f50bdf7ff57fe966abbefe8b6efa0b65b5116252f0ec723717013f fc8f10904d42a74e09310ccf63db31a90f1dab88b278f15e3364a2356810f7e9
hash_to_ec a005143c4d299933f866db41d0a0b8c67264f5d4ea840dd243cb10c3526bc077 928df1fe9404ffa9c1f4a1c8b2d43ab9b81c5615c8330d2dc2074ac66d4d5200
hash_to_ec f45ce88065c34a163f8e77b6fb583502ed0eb1f490f63f76065a9d97e214e3a9 41bd6784270af4154f2f24f118617e2d7f5b7771a409f08b0f2b7bbcb5e3d666
hash_to_ec 7b40ac30ed02b12ff592a5479c80cf5a7673abfdd4dd38810e40e63275bc2eed 6c6bf5961d83851c9728801093d9af04e5a693bc6cbad237b9ac4b0ed580a771
hash_to_ec 9f985005794d3052a63361413a9820d2ce903198d6d5195b3f20a68f146c6d5c 88bcac53ba5b1c5b44730a24b4cc2cd782298fc70dc9d777b577a2b33b256449
hash_to_ec 31b8e37d01fd5669de4ebf78889d749bc44ffe997186ace56f1fb3e60b8742d2 776366b44170efb130a5045597db5675c6c0b56f3def84863c6b6358aa8dcf40


@@ -1,16 +0,0 @@
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
w.write_all(&[b])?;
varint != 0
} {}
Ok(())
}
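As a sketch of the encoding implemented above (a hypothetical in-crate test, not part of the original file): the value is emitted seven bits at a time, little-endian, with the top bit of each byte flagging continuation.
#[test]
fn varint_continuation() {
  // 300 = 0b1_0010_1100: the low seven bits (0x2c) are written with the continuation bit
  // set, yielding 0xac, then the remaining bits (0x02) follow.
  let mut buf = vec![];
  write_varint(&300, &mut buf).unwrap();
  assert_eq!(buf, [0xac, 0x02]);
}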


@@ -1,321 +0,0 @@
#[cfg(feature = "binaries")]
mod binaries {
pub(crate) use std::sync::Arc;
pub(crate) use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
pub(crate) use multiexp::BatchVerifier;
pub(crate) use serde::Deserialize;
pub(crate) use serde_json::json;
pub(crate) use monero_serai::{
Commitment,
ringct::RctPrunable,
transaction::{Input, Transaction},
block::Block,
rpc::{RpcError, Rpc, HttpRpc},
};
pub(crate) use monero_generators::decompress_point;
pub(crate) use tokio::task::JoinHandle;
pub(crate) async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = loop {
match rpc.get_block_hash(block_i).await {
Ok(hash) => break hash,
Err(RpcError::ConnectionError(e)) => {
println!("get_block_hash ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i}'s hash: {e:?}"),
}
};
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = loop {
match rpc.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await {
Ok(res) => break res,
Err(RpcError::ConnectionError(e)) => {
println!("get_block ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i} via block.hash(): {e:?}"),
}
};
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice())
.unwrap_or_else(|e| panic!("couldn't deserialize block {block_i}: {e}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = loop {
match rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
{
Ok(txs) => break txs,
Err(RpcError::ConnectionError(e)) => {
println!("get_transactions ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't call get_transactions: {e:?}"),
}
};
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
let mut batch = BatchVerifier::new(block.txs.len());
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
if matches!(tx.rct_signatures.prunable, RctPrunable::Null) {
assert_eq!(tx.prefix.version, 1);
assert!(!tx.signatures.is_empty());
continue;
}
let sig_hash = tx.signature_hash();
// Verify all proofs we support proving for
// This is because proving has debug_asserts calling verify, and because CLSAG multisig
// explicitly calls verify as part of its signing process
// Accordingly, this checks both that our signature_hash algorithm is correct and that the
// verification functions themselves are valid
match tx.rct_signatures.prunable {
RctPrunable::Null |
RctPrunable::AggregateMlsagBorromean { .. } |
RctPrunable::MlsagBorromean { .. } => {}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
for (i, clsag) in clsags.into_iter().enumerate() {
let (amount, key_offsets, image) = match &tx.prefix.inputs[i] {
Input::Gen(_) => panic!("Input::Gen"),
Input::ToKey { amount, key_offsets, key_image } => (amount, key_offsets, key_image),
};
let mut running_sum = 0;
let mut actual_indexes = vec![];
for offset in key_offsets {
running_sum += offset;
actual_indexes.push(running_sum);
}
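// A worked example (not in the original source): key_offsets of [5, 2, 3] decode to the
// absolute output indexes [5, 7, 10].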
async fn get_outs(
rpc: &Rpc<HttpRpc>,
amount: u64,
indexes: &[u64],
) -> Vec<[EdwardsPoint; 2]> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = loop {
match rpc
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": amount,
"index": o
})).collect::<Vec<_>>()
})),
)
.await
{
Ok(outs) => break outs,
Err(RpcError::ConnectionError(e)) => {
println!("get_outs ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't connect to RPC to get outs: {e:?}"),
}
};
let rpc_point = |point: &str| {
decompress_point(
hex::decode(point)
.expect("invalid hex for ring member")
.try_into()
.expect("invalid point len for ring member"),
)
.expect("invalid point for ring member")
};
outs
.outs
.iter()
.map(|out| {
let mask = rpc_point(&out.mask);
if amount != 0 {
assert_eq!(mask, Commitment::new(Scalar::from(1u8), amount).calculate());
}
[rpc_point(&out.key), mask]
})
.collect()
}
clsag
.verify(
&get_outs(&rpc, amount.unwrap_or(0), &actual_indexes).await,
image,
&pseudo_outs[i],
&sig_hash,
)
.unwrap();
}
}
}
}
assert!(batch.verify_vartime());
}
println!("Deserialized, hashed, and reserialized {block_i} with {txs_len} TXs");
}
}
#[cfg(feature = "binaries")]
#[tokio::main]
async fn main() {
use binaries::*;
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| async move {
HttpRpc::new(url.clone())
.await
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone()).await;
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone()).await));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}
#[cfg(not(feature = "binaries"))]
fn main() {
panic!("To run binaries, please build with `--feature binaries`.");
}


@@ -1,130 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
pub major_version: u8,
pub minor_version: u8,
pub timestamp: u64,
pub previous: [u8; 32],
pub nonce: u32,
}
impl BlockHeader {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.major_version, w)?;
write_varint(&self.minor_version, w)?;
write_varint(&self.timestamp, w)?;
w.write_all(&self.previous)?;
w.write_all(&self.nonce.to_le_bytes())
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<BlockHeader> {
Ok(BlockHeader {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
previous: read_bytes(r)?,
nonce: read_bytes(r).map(u32::from_le_bytes)?,
})
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Block {
pub header: BlockHeader,
pub miner_tx: Transaction,
pub txs: Vec<[u8; 32]>,
}
impl Block {
pub fn number(&self) -> Option<u64> {
match self.miner_tx.prefix.inputs.first() {
Some(Input::Gen(number)) => Some(*number),
_ => None,
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
write_varint(&self.txs.len(), w)?;
for tx in &self.txs {
w.write_all(tx)?;
}
Ok(())
}
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
/// Serialize the block as required for the proof of work hash.
///
/// This is distinct from the serialization required for the block hash. To get the block hash,
/// use the [`Block::hash`] function.
pub fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
blob
}
pub fn hash(&self) -> [u8; 32] {
let mut hashable = self.serialize_hashable();
// Monero prepends a VarInt of the block hashing blob's length before getting the block
// hash, but doesn't do this when getting the proof of work hash :)
let mut hashing_blob = Vec::with_capacity(8 + hashable.len());
write_varint(&u64::try_from(hashable.len()).unwrap(), &mut hashing_blob).unwrap();
hashing_blob.append(&mut hashable);
let hash = hash(&hashing_blob);
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Block> {
let header = BlockHeader::read(r)?;
let miner_tx = Transaction::read(r)?;
if !matches!(miner_tx.prefix.inputs.as_slice(), &[Input::Gen(_)]) {
Err(io::Error::other("Miner transaction has incorrect input type."))?;
}
Ok(Block {
header,
miner_tx,
txs: (0_usize .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
})
}
}
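As a sketch of the relationship between the two serializations above (a hypothetical free function, not part of the crate; it inlines `hash` and `write_varint` via the `sha3` crate directly, and omits the block 202612 special case):
use sha3::{Digest, Keccak256};
fn block_hash_from_pow_blob(pow_blob: &[u8]) -> [u8; 32] {
  // The block hash commits to the length-prefixed blob; the proof of work hash does not.
  let mut hashing_blob = vec![];
  let mut len = u64::try_from(pow_blob.len()).unwrap();
  loop {
    let mut b = u8::try_from(len & 0x7f).unwrap();
    len >>= 7;
    if len != 0 {
      b |= 0x80;
    }
    hashing_blob.push(b);
    if len == 0 {
      break;
    }
  }
  hashing_blob.extend_from_slice(pow_blob);
  Keccak256::digest(&hashing_blob).into()
}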


@@ -1,229 +0,0 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
use std_shims::{sync::OnceLock, io};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub use monero_generators::{H, decompress_point};
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// UnreducedScalar struct with functionality for recovering incorrectly reduced scalars.
mod unreduced_scalar;
/// Ring Signature structs and functionality.
pub mod ring_signatures;
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
/// Block structs.
pub mod block;
/// Monero daemon RPC interface.
pub mod rpc;
/// Wallet functionality, enabling scanning and sending transactions.
pub mod wallet;
#[cfg(test)]
mod tests;
pub const DEFAULT_LOCK_WINDOW: usize = 10;
pub const COINBASE_LOCK_WINDOW: usize = 60;
pub const BLOCK_TIME: usize = 120;
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
/// Monero protocol version.
///
/// v15 is omitted, as v15 was simply v14 and v16 being active at the same time with regards
/// to the transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
v14,
v16,
Custom {
ring_len: usize,
bp_plus: bool,
optimal_rct_type: RctType,
view_tags: bool,
v16_fee: bool,
},
}
impl Protocol {
/// Amount of ring members under this protocol version.
pub fn ring_len(&self) -> usize {
match self {
Protocol::v14 => 11,
Protocol::v16 => 16,
Protocol::Custom { ring_len, .. } => *ring_len,
}
}
/// Whether or not the specified version uses Bulletproofs+ (as opposed to the original
/// Bulletproofs).
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub fn bp_plus(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { bp_plus, .. } => *bp_plus,
}
}
// TODO: Make this an Option when we support pre-RCT protocols
pub fn optimal_rct_type(&self) -> RctType {
match self {
Protocol::v14 => RctType::Clsag,
Protocol::v16 => RctType::BulletproofsPlus,
Protocol::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
/// Whether or not the specified version uses view tags.
pub fn view_tags(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { view_tags, .. } => *view_tags,
}
}
/// Whether or not the specified version uses the fee algorithm from Monero
/// hard fork version 16 (released in v18 binaries).
pub fn v16_fee(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { v16_fee, .. } => *v16_fee,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Protocol::v14 => w.write_all(&[0, 14]),
Protocol::v16 => w.write_all(&[0, 16]),
Protocol::Custom { ring_len, bp_plus, optimal_rct_type, view_tags, v16_fee } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])?;
w.write_all(&[u8::from(*view_tags)])?;
w.write_all(&[u8::from(*v16_fee)])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Protocol> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Protocol::v14,
16 => Protocol::v16,
_ => Err(io::Error::other("unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Protocol::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::other("invalid RctType serialization"))?,
view_tags: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
v16_fee: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
},
_ => Err(io::Error::other("unrecognized custom protocol serialization"))?,
},
_ => Err(io::Error::other("unrecognized protocol serialization"))?,
})
}
}
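// A round-trip sketch of the above serialization (a hypothetical in-crate test; `write` and
// `read` are `pub(crate)`, so this would only compile within this crate).
#[test]
fn protocol_round_trip() {
  let mut buf = vec![];
  Protocol::v16.write(&mut buf).unwrap();
  assert_eq!(buf, [0, 16]);
  assert_eq!(Protocol::read(&mut buf.as_slice()).unwrap(), Protocol::v16);
}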
/// Transparent structure representing a Pedersen commitment's contents.
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct Commitment {
pub mask: Scalar,
pub amount: u64,
}
impl core::fmt::Debug for Commitment {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt.debug_struct("Commitment").field("amount", &self.amount).finish_non_exhaustive()
}
}
impl Commitment {
/// A commitment to zero, defined with a mask of 1 (so as not to be the identity).
pub fn zero() -> Commitment {
Commitment { mask: Scalar::ONE, amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Commitment {
Commitment { mask, amount }
}
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
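// A usage sketch (not in the original source) of the commitment's additive homomorphism,
// assuming rand_core's `OsRng` is available.
#[test]
fn commitment_homomorphism() {
  use rand_core::OsRng;
  let c1 = Commitment::new(random_scalar(&mut OsRng), 5);
  let c2 = Commitment::new(random_scalar(&mut OsRng), 7);
  // (mask1 G + 5 H) + (mask2 G + 7 H) == (mask1 + mask2) G + 12 H
  let sum = Commitment::new(c1.mask + c2.mask, c1.amount + c2.amount);
  assert_eq!(c1.calculate() + c2.calculate(), sum.calculate());
}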
/// Support generating a random scalar using a modern rand, as dalek's is notoriously dated.
pub fn random_scalar<R: RngCore + CryptoRng>(rng: &mut R) -> Scalar {
let mut r = [0; 64];
rng.fill_bytes(&mut r);
Scalar::from_bytes_mod_order_wide(&r)
}
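// The 64-byte sample makes the reduction mod l statistically uniform: the bias from
// reducing a uniform 512-bit integer is on the order of 2^-256, whereas reducing a 256-bit
// sample would be noticeably biased. (Comment not in the original source.)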
pub(crate) fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
/// Hash the provided data to a scalar via keccak256(data) % l.
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
let scalar = Scalar::from_bytes_mod_order(hash(data));
// Monero will explicitly error in this case
// This library acknowledges the practical impossibility of it occurring, and doesn't bother
// to code in logic to handle it. That said, if it ever occurs, something must happen in order to
// not generate/verify a proof we believe to be valid when it isn't
assert!(scalar != Scalar::ZERO, "ZERO HASH: {data:?}");
scalar
}


@@ -1,55 +0,0 @@
use std_shims::vec::Vec;
use crate::hash;
pub(crate) fn merkle_root(root: [u8; 32], leafs: &[[u8; 32]]) -> [u8; 32] {
match leafs.len() {
0 => root,
1 => hash(&[root, leafs[0]].concat()),
_ => {
let mut hashes = Vec::with_capacity(1 + leafs.len());
hashes.push(root);
hashes.extend(leafs);
// Monero preprocesses this so the length is a power of 2
let mut high_pow_2 = 4; // 4 is the lowest value this can be
while high_pow_2 < hashes.len() {
high_pow_2 *= 2;
}
let low_pow_2 = high_pow_2 / 2;
// Merge right-most hashes until we're at the low_pow_2
{
let overage = hashes.len() - low_pow_2;
let mut rightmost = hashes.drain((low_pow_2 - overage) ..);
// This is true since the drain starts overage elements beneath low_pow_2, taking twice as
// many elements as the overage
debug_assert_eq!(rightmost.len() % 2, 0);
let mut paired_hashes = Vec::with_capacity(overage);
while let Some(left) = rightmost.next() {
let right = rightmost.next().unwrap();
paired_hashes.push(hash(&[left.as_ref(), &right].concat()));
}
drop(rightmost);
hashes.extend(paired_hashes);
assert_eq!(hashes.len(), low_pow_2);
}
// Do a traditional pairing off
let mut new_hashes = Vec::with_capacity(hashes.len() / 2);
while hashes.len() > 1 {
let mut i = 0;
while i < hashes.len() {
new_hashes.push(hash(&[hashes[i], hashes[i + 1]].concat()));
i += 2;
}
hashes = new_hashes;
new_hashes = Vec::with_capacity(hashes.len() / 2);
}
hashes[0]
}
}
}
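As a worked example of the preprocessing above (a hypothetical in-crate test; `merkle_root` and `hash` are `pub(crate)`): with a root and five leaf hashes there are six hashes, so low_pow_2 = 4 and overage = 2; the rightmost four hashes are drained and paired, restoring a length of four, which then pairs off 4 -> 2 -> 1.
#[test]
fn merkle_root_of_six_hashes() {
  let h = |i: u8| [i; 32];
  let pair = |a: [u8; 32], b: [u8; 32]| hash(&[a, b].concat());
  let root = merkle_root(h(0), &[h(1), h(2), h(3), h(4), h(5)]);
  let l2 = [h(0), h(1), pair(h(2), h(3)), pair(h(4), h(5))];
  let l1 = [pair(l2[0], l2[1]), pair(l2[2], l2[3])];
  assert_eq!(root, pair(l1[0], l1[1]));
}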


@@ -1,72 +0,0 @@
use std_shims::{
io::{self, *},
vec::Vec,
};
use zeroize::Zeroize;
use curve25519_dalek::{EdwardsPoint, Scalar};
use monero_generators::hash_to_point;
use crate::{serialize::*, hash_to_scalar};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Signature {
c: Scalar,
r: Scalar,
}
impl Signature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_scalar(&self.c, w)?;
write_scalar(&self.r, w)?;
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Signature> {
Ok(Signature { c: read_scalar(r)?, r: read_scalar(r)? })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingSignature {
sigs: Vec<Signature>,
}
impl RingSignature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for sig in &self.sigs {
sig.write(w)?;
}
Ok(())
}
pub fn read<R: Read>(members: usize, r: &mut R) -> io::Result<RingSignature> {
Ok(RingSignature { sigs: read_raw_vec(Signature::read, members, r)? })
}
pub fn verify(&self, msg: &[u8; 32], ring: &[EdwardsPoint], key_image: &EdwardsPoint) -> bool {
if ring.len() != self.sigs.len() {
return false;
}
let mut buf = Vec::with_capacity(32 + (32 * 2 * ring.len()));
buf.extend_from_slice(msg);
let mut sum = Scalar::ZERO;
for (ring_member, sig) in ring.iter().zip(&self.sigs) {
#[allow(non_snake_case)]
let Li = EdwardsPoint::vartime_double_scalar_mul_basepoint(&sig.c, ring_member, &sig.r);
buf.extend_from_slice(Li.compress().as_bytes());
#[allow(non_snake_case)]
let Ri = (sig.r * hash_to_point(ring_member.compress().to_bytes())) + (sig.c * key_image);
buf.extend_from_slice(Ri.compress().as_bytes());
sum += sig.c;
}
sum == hash_to_scalar(&buf)
}
}
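For reference, `verify` implements the classic CryptoNote ring signature check (a restatement, not from the original source): with ring members P_i, key image I, and basepoint G, it recomputes L_i = r_i G + c_i P_i and R_i = r_i Hp(P_i) + c_i I, then accepts iff sum(c_i) == Hs(msg || L_1 || R_1 || ... || L_n || R_n). A signer who knows the private key for one ring member picks the other (c_i, r_i) pairs at random and solves for their own c_i and r_i so the ring closes.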


@@ -1,97 +0,0 @@
use core::fmt::Debug;
use std_shims::io::{self, Read, Write};
use curve25519_dalek::{traits::Identity, Scalar, EdwardsPoint};
use monero_generators::H_pow_2;
use crate::{hash_to_scalar, unreduced_scalar::UnreducedScalar, serialize::*};
/// 64 Borromean ring signatures.
///
/// s0 and s1 are stored as `UnreducedScalar`s due to Monero not requiring they be reduced.
/// `UnreducedScalar` preserves their original byte encoding and implements the custom
/// reduction algorithm which was historically in use.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanSignatures {
pub s0: [UnreducedScalar; 64],
pub s1: [UnreducedScalar; 64],
pub ee: Scalar,
}
impl BorromeanSignatures {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanSignatures> {
Ok(BorromeanSignatures {
s0: read_array(UnreducedScalar::read, r)?,
s1: read_array(UnreducedScalar::read, r)?,
ee: read_scalar(r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for s0 in &self.s0 {
s0.write(w)?;
}
for s1 in &self.s1 {
s1.write(w)?;
}
write_scalar(&self.ee, w)
}
fn verify(&self, keys_a: &[EdwardsPoint], keys_b: &[EdwardsPoint]) -> bool {
let mut transcript = [0; 2048];
for i in 0 .. 64 {
#[allow(non_snake_case)]
let LL = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&self.ee,
&keys_a[i],
&self.s0[i].recover_monero_slide_scalar(),
);
#[allow(non_snake_case)]
let LV = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&hash_to_scalar(LL.compress().as_bytes()),
&keys_b[i],
&self.s1[i].recover_monero_slide_scalar(),
);
transcript[(i * 32) .. ((i + 1) * 32)].copy_from_slice(LV.compress().as_bytes());
}
hash_to_scalar(&transcript) == self.ee
}
}
/// A range proof premised on Borromean ring signatures.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanRange {
pub sigs: BorromeanSignatures,
pub bit_commitments: [EdwardsPoint; 64],
}
impl BorromeanRange {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanRange> {
Ok(BorromeanRange {
sigs: BorromeanSignatures::read(r)?,
bit_commitments: read_array(read_point, r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.sigs.write(w)?;
write_raw_vec(write_point, &self.bit_commitments, w)
}
pub fn verify(&self, commitment: &EdwardsPoint) -> bool {
if &self.bit_commitments.iter().sum::<EdwardsPoint>() != commitment {
return false;
}
#[allow(non_snake_case)]
let H_pow_2 = H_pow_2();
let mut commitments_sub_one = [EdwardsPoint::identity(); 64];
for i in 0 .. 64 {
commitments_sub_one[i] = self.bit_commitments[i] - H_pow_2[i];
}
self.sigs.verify(&self.bit_commitments, &commitments_sub_one)
}
}
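
In equation form, the range argument above checks the bit commitments C_i satisfy
$\sum_{i=0}^{63} C_i = C$, while the Borromean ring signatures prove each ring
$\{C_i,\ C_i - 2^i H\}$ contains a commitment to zero. Together, these show C commits to an
amount within $[0, 2^{64})$.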


@@ -1,151 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
use curve25519_dalek::edwards::EdwardsPoint as DalekPoint;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
// Components common between variants
pub(crate) const MAX_M: usize = 16;
pub(crate) const LOG_N: usize = 6; // 1 << 6 == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
generators: &Generators,
a: &ScalarVector,
b: &ScalarVector,
) -> EdwardsPoint {
debug_assert_eq!(a.len(), b.len());
(a * &generators.G[.. a.len()]) + (b * &generators.H[.. b.len()])
}
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
}
pub(crate) fn MN(outputs: usize) -> (usize, usize, usize) {
let mut logM = 0;
let mut M;
while {
M = 1 << logM;
(M <= MAX_M) && (M < outputs)
} {
logM += 1;
}
(logM + LOG_N, M, M * N)
}
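
To make the padding concrete, a few evaluations traced from the loop above (a sketch, not part of
the original source):
// MN pads the output count to the next power of two (capped at MAX_M = 16) and
// returns (log2(M * N), M, M * N) with N = 64.
assert_eq!(MN(1), (6, 1, 64));
assert_eq!(MN(2), (7, 2, 128));
assert_eq!(MN(3), (8, 4, 256));
assert_eq!(MN(16), (10, 16, 1024));
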
pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, ScalarVector) {
let (_, M, MN) = MN(commitments.len());
let sv = commitments.iter().map(|c| Scalar::from(c.amount)).collect::<Vec<_>>();
let mut aL = ScalarVector::new(MN);
let mut aR = ScalarVector::new(MN);
for j in 0 .. M {
for i in (0 .. N).rev() {
let bit =
if j < sv.len() { Choice::from((sv[j][i / 8] >> (i % 8)) & 1) } else { Choice::from(0) };
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
(aL, aR)
}
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
rng: &mut R,
generators: &Generators,
aL: &ScalarVector,
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
a: &ScalarVector,
G_i: &[EdwardsPoint],
b: &ScalarVector,
H_i: &[EdwardsPoint],
cL: Scalar,
U: EdwardsPoint,
) -> Vec<(Scalar, EdwardsPoint)> {
let mut res = a
.0
.iter()
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * w[j];
products[slots - 1] = products[slots / 2] * winv[j];
slots = slots.saturating_sub(2);
}
}
// Sanity check, as it'd be critical if the above failed to populate these products
for w in &products {
debug_assert!(!bool::from(w.is_zero()));
}
products
}
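
Entry i of the returned vector is a product of one term per challenge, selected by the bits of i
from most significant to least. A hedged sketch for two challenges, where `w0`/`w1` are
hypothetical scalars and `winv0`/`winv1` their inverses:
// products[i] multiplies w[j] when bit (len - 1 - j) of i is set, winv[j] otherwise.
let products = challenge_products(&[w0, w1], &[winv0, winv1]);
assert_eq!(products[0b00], winv0 * winv1);
assert_eq!(products[0b01], winv0 * w1);
assert_eq!(products[0b10], w0 * winv1);
assert_eq!(products[0b11], w0 * w1);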


@@ -1,229 +0,0 @@
#![allow(non_snake_case)]
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::edwards::EdwardsPoint;
use multiexp::BatchVerifier;
use crate::{Commitment, wallet::TransactionError, serialize::*};
pub(crate) mod scalar_vector;
pub(crate) mod core;
use self::core::LOG_N;
pub(crate) mod original;
use self::original::OriginalStruct;
pub(crate) mod plus;
use self::plus::*;
pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
/// Bulletproofs enum, supporting the original and plus formulations.
#[allow(clippy::large_enum_variant)]
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Bulletproofs {
Original(OriginalStruct),
Plus(AggregateRangeProof),
}
impl Bulletproofs {
fn bp_fields(plus: bool) -> usize {
if plus {
6
} else {
9
}
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L106-L124
pub(crate) fn calculate_bp_clawback(plus: bool, n_outputs: usize) -> (usize, usize) {
#[allow(non_snake_case)]
let mut LR_len = 0;
let mut n_padded_outputs = 1;
while n_padded_outputs < n_outputs {
LR_len += 1;
n_padded_outputs = 1 << LR_len;
}
LR_len += LOG_N;
let mut bp_clawback = 0;
if n_padded_outputs > 2 {
let fields = Bulletproofs::bp_fields(plus);
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
bp_clawback = ((base * n_padded_outputs) - size) * 4 / 5;
}
(bp_clawback, LR_len)
}
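
A worked example of the clawback arithmetic, traced from the function above for a non-plus proof
with 3 outputs (a sketch, not part of the original source):
// 3 outputs pad to 4, so LR_len = 2 + LOG_N = 8.
// base = ((9 + (2 * (LOG_N + 1))) * 32) / 2 = 368, half the weight of a 2-output proof
// size = (9 + (2 * 8)) * 32 = 800, the actual proof size
// bp_clawback = ((368 * 4) - 800) * 4 / 5 = 537
assert_eq!(Bulletproofs::calculate_bp_clawback(false, 3), (537, 8));
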
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
#[allow(non_snake_case)]
let (bp_clawback, LR_len) = Bulletproofs::calculate_bp_clawback(plus, outputs);
32 * (Bulletproofs::bp_fields(plus) + (2 * LR_len)) + 2 + bp_clawback
}
/// Prove the list of commitments are within [0 .. 2^64).
pub fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
outputs: &[Commitment],
plus: bool,
) -> Result<Bulletproofs, TransactionError> {
if outputs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if outputs.len() > MAX_OUTPUTS {
Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Bulletproofs::Original(OriginalStruct::prove(rng, outputs))
} else {
use dalek_ff_group::EdwardsPoint as DfgPoint;
Bulletproofs::Plus(
AggregateRangeStatement::new(outputs.iter().map(|com| DfgPoint(com.calculate())).collect())
.unwrap()
.prove(rng, &Zeroizing::new(AggregateRangeWitness::new(outputs).unwrap()))
.unwrap(),
)
})
}
/// Verify the given Bulletproofs.
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Bulletproofs::Original(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => {
let mut verifier = BatchVerifier::new(1);
// If this commitment is torsioned (which is allowed), this won't be a well-formed
// dfg::EdwardsPoint (expected to be of prime-order)
// The actual BP+ impl will perform a torsion clear though, making this safe
// TODO: Have AggregateRangeStatement take in dalek EdwardsPoint for clarity on this
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
if !statement.verify(rng, &mut verifier, (), bp.clone()) {
return false;
}
verifier.verify_vartime()
}
}
}
/// Accumulate the verification for the given Bulletproofs into the specified BatchVerifier.
/// Returns false if the Bulletproofs aren't sane, without mutating the BatchVerifier.
/// Returns true if the Bulletproofs are sane, regardless of their validity.
#[must_use]
pub fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, dalek_ff_group::EdwardsPoint>,
id: ID,
commitments: &[EdwardsPoint],
) -> bool {
match self {
Bulletproofs::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => {
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
statement.verify(rng, verifier, id, bp.clone())
}
}
}
fn write_core<W: Write, F: Fn(&[EdwardsPoint], &mut W) -> io::Result<()>>(
&self,
w: &mut W,
specific_write_vec: F,
) -> io::Result<()> {
match self {
Bulletproofs::Original(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.S, w)?;
write_point(&bp.T1, w)?;
write_point(&bp.T2, w)?;
write_scalar(&bp.taux, w)?;
write_scalar(&bp.mu, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)?;
write_scalar(&bp.a, w)?;
write_scalar(&bp.b, w)?;
write_scalar(&bp.t, w)
}
Bulletproofs::Plus(bp) => {
write_point(&bp.A.0, w)?;
write_point(&bp.wip.A.0, w)?;
write_point(&bp.wip.B.0, w)?;
write_scalar(&bp.wip.r_answer.0, w)?;
write_scalar(&bp.wip.s_answer.0, w)?;
write_scalar(&bp.wip.delta_answer.0, w)?;
specific_write_vec(&bp.wip.L.iter().copied().map(|L| L.0).collect::<Vec<_>>(), w)?;
specific_write_vec(&bp.wip.R.iter().copied().map(|R| R.0).collect::<Vec<_>>(), w)
}
}
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_raw_vec(write_point, points, w))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_vec(write_point, points, w))
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
/// Read Bulletproofs.
pub fn read<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Original(OriginalStruct {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
T2: read_point(r)?,
taux: read_scalar(r)?,
mu: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
a: read_scalar(r)?,
b: read_scalar(r)?,
t: read_scalar(r)?,
}))
}
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
use dalek_ff_group::{Scalar as DfgScalar, EdwardsPoint as DfgPoint};
Ok(Bulletproofs::Plus(AggregateRangeProof {
A: DfgPoint(read_point(r)?),
wip: WipProof {
A: DfgPoint(read_point(r)?),
B: DfgPoint(read_point(r)?),
r_answer: DfgScalar(read_scalar(r)?),
s_answer: DfgScalar(read_scalar(r)?),
delta_answer: DfgScalar(read_scalar(r)?),
L: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
R: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
},
}))
}
}
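
A hedged end-to-end sketch of the API above, assuming this crate's `Commitment` and
`random_scalar` helpers and a hypothetical `rng`:
// Prove two amounts lie within [0 .. 2^64) using Bulletproofs+, then verify the proof.
let outputs = vec![
Commitment::new(random_scalar(&mut rng), 5),
Commitment::new(random_scalar(&mut rng), 10),
];
let proof = Bulletproofs::prove(&mut rng, &outputs, true).unwrap();
let commitments = outputs.iter().map(Commitment::calculate).collect::<Vec<_>>();
assert!(proof.verify(&mut rng, &commitments));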


@@ -1,322 +0,0 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::{ff::Field, Group};
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::{BatchVerifier, multiexp};
use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| ScalarVector(vec![Scalar::ONE; N]).inner_product(TWO_N()))
}
pub(crate) fn hadamard_fold(
l: &[EdwardsPoint],
r: &[EdwardsPoint],
a: Scalar,
b: Scalar,
) -> Vec<EdwardsPoint> {
let mut res = Vec::with_capacity(l.len() / 2);
for i in 0 .. l.len() {
res.push(multiexp(&[(a, l[i]), (b, r[i])]));
}
res
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OriginalStruct {
pub(crate) A: DalekPoint,
pub(crate) S: DalekPoint,
pub(crate) T1: DalekPoint,
pub(crate) T2: DalekPoint,
pub(crate) taux: DalekScalar,
pub(crate) mu: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
pub(crate) a: DalekScalar,
pub(crate) b: DalekScalar,
pub(crate) t: DalekScalar,
}
impl OriginalStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> OriginalStruct {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_commitments(commitments_points.clone());
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let l0 = aL - z;
let l1 = sL;
let mut zero_twos = Vec::with_capacity(MN);
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
let yMN = ScalarVector::powers(y, MN);
let r0 = ((aR + z) * &yMN) + &ScalarVector(zero_twos);
let r1 = yMN * &sR;
let (T1, T2, x, mut taux) = {
let t1 = l0.clone().inner_product(&r1) + r0.clone().inner_product(&l1);
let t2 = l1.clone().inner_product(&r1);
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
let taux = (tau2 * (x * x)) + (tau1 * x);
tau1.zeroize();
tau2.zeroize();
(T1, T2, x, taux)
};
let mu = (x * rho) + alpha;
alpha.zeroize();
rho.zeroize();
for (i, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
taux += zpow[i + 2] * gamma;
}
let l = l0 + &(l1 * x);
let r = r0 + &(r1 * x);
let t = l.clone().inner_product(&r);
let x_ip =
hash_cache(&mut cache, &[x.to_bytes(), taux.to_bytes(), mu.to_bytes(), t.to_bytes()]);
let mut a = l;
let mut b = r;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = aL.clone().inner_product(&bR);
let cR = aR.clone().inner_product(&bL);
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let L_i = prove_multiexp(&LR_statements(&aL, G_R, &bR, H_L, cL, U));
let R_i = prove_multiexp(&LR_statements(&aR, G_L, &bL, H_R, cR, U));
L.push(L_i);
R.push(R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
a = (aL * w) + &(aR * winv);
b = (bL * winv) + &(bR * w);
if a.len() != 1 {
G_proof = hadamard_fold(G_L, G_R, winv, w);
H_proof = hadamard_fold(H_L, H_R, w, winv);
}
}
let res = OriginalStruct {
A: *A,
S: *S,
T1: *T1,
T2: *T2,
taux: *taux,
mu: *mu,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
a: *a[0],
b: *b[0],
t: *t,
};
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let x = hash_cache(
&mut cache,
&[z.to_bytes(), self.T1.compress().to_bytes(), self.T2.compress().to_bytes()],
);
let x_ip = hash_cache(
&mut cache,
&[x.to_bytes(), self.taux.to_bytes(), self.mu.to_bytes(), self.t.to_bytes()],
);
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let T1 = normalize(&self.T1);
let T2 = normalize(&self.T2);
let A = normalize(&self.A);
let S = normalize(&self.S);
let commitments = commitments.iter().map(EdwardsPoint::mul_by_cofactor).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(4 + commitments.len());
let zpow = ScalarVector::powers(z, M + 3);
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
for (j, commitment) in commitments.iter().enumerate() {
proof.push((zpow[j + 2], *commitment));
}
proof.push((x, T1));
proof.push((x * x, T2));
verifier.queue(&mut *rng, id, proof);
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
let ypow = ScalarVector::powers(y, MN);
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}
for i in 0 .. logMN {
proof.push((w[i] * w[i], L[i]));
proof.push((winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}


@@ -1,257 +0,0 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use multiexp::{multiexp, multiexp_vartime, BatchVerifier};
use group::{
ff::{Field, PrimeField},
Group, GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::{
bulletproofs::core::{MAX_M, N},
bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators,
transcript::*,
weighted_inner_product::{WipStatement, WipWitness, WipProof},
padded_pow_of_2, u64_decompose,
},
},
};
// Figure 3
#[derive(Clone, Debug)]
pub(crate) struct AggregateRangeStatement {
generators: Generators,
V: Vec<EdwardsPoint>,
}
impl Zeroize for AggregateRangeStatement {
fn zeroize(&mut self) {
self.V.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct AggregateRangeWitness {
values: Vec<u64>,
gammas: Vec<Scalar>,
}
impl AggregateRangeWitness {
pub(crate) fn new(commitments: &[Commitment]) -> Option<Self> {
if commitments.is_empty() || (commitments.len() > MAX_M) {
return None;
}
let mut values = Vec::with_capacity(commitments.len());
let mut gammas = Vec::with_capacity(commitments.len());
for commitment in commitments {
values.push(commitment.amount);
gammas.push(Scalar(commitment.mask));
}
Some(AggregateRangeWitness { values, gammas })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct AggregateRangeProof {
pub(crate) A: EdwardsPoint,
pub(crate) wip: WipProof,
}
impl AggregateRangeStatement {
pub(crate) fn new(V: Vec<EdwardsPoint>) -> Option<Self> {
if V.is_empty() || (V.len() > MAX_M) {
return None;
}
Some(Self { generators: Generators::new(), V })
}
fn transcript_A(transcript: &mut Scalar, A: EdwardsPoint) -> (Scalar, Scalar) {
let y = hash_to_scalar(&[transcript.to_repr().as_ref(), A.to_bytes().as_ref()].concat());
let z = hash_to_scalar(y.to_bytes().as_ref());
*transcript = z;
(y, z)
}
fn d_j(j: usize, m: usize) -> ScalarVector {
let mut d_j = Vec::with_capacity(m * N);
for _ in 0 .. (j - 1) * N {
d_j.push(Scalar::ZERO);
}
d_j.append(&mut ScalarVector::powers(Scalar::from(2u8), N).0);
for _ in 0 .. (m - j) * N {
d_j.push(Scalar::ZERO);
}
ScalarVector(d_j)
}
fn compute_A_hat(
mut V: PointVector,
generators: &Generators,
transcript: &mut Scalar,
mut A: EdwardsPoint,
) -> (Scalar, ScalarVector, Scalar, Scalar, ScalarVector, EdwardsPoint) {
let (y, z) = Self::transcript_A(transcript, A);
A = A.mul_by_cofactor();
while V.len() < padded_pow_of_2(V.len()) {
V.0.push(EdwardsPoint::identity());
}
let mn = V.len() * N;
let mut z_pow = Vec::with_capacity(V.len());
let mut d = ScalarVector::new(mn);
for j in 1 ..= V.len() {
z_pow.push(z.pow(Scalar::from(2 * u64::try_from(j).unwrap()))); // TODO: Optimize this
d = d + &(Self::d_j(j, V.len()) * (z_pow[j - 1]));
}
let mut ascending_y = ScalarVector(vec![y]);
for i in 1 .. d.len() {
ascending_y.0.push(ascending_y[i - 1] * y);
}
let y_pows = ascending_y.clone().sum();
let mut descending_y = ascending_y.clone();
descending_y.0.reverse();
let d_descending_y = d.clone() * &descending_y;
let d_descending_y_plus_z = d_descending_y + z;
let y_mn_plus_one = descending_y[0] * y;
let mut commitment_accum = EdwardsPoint::identity();
for (j, commitment) in V.0.iter().enumerate() {
commitment_accum += *commitment * z_pow[j];
}
let neg_z = -z;
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 2);
for (i, d_y_z) in d_descending_y_plus_z.0.iter().enumerate() {
A_terms.push((neg_z, generators.generator(GeneratorsList::GBold1, i)));
A_terms.push((*d_y_z, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((y_mn_plus_one, commitment_accum));
A_terms.push((
((y_pows * z) - (d.sum() * y_mn_plus_one * z) - (y_pows * z.square())),
Generators::g(),
));
(
y,
d_descending_y_plus_z,
y_mn_plus_one,
z,
ScalarVector(z_pow),
A + multiexp_vartime(&A_terms),
)
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
witness: &AggregateRangeWitness,
) -> Option<AggregateRangeProof> {
// Check for consistency with the witness
if self.V.len() != witness.values.len() {
return None;
}
for (commitment, (value, gamma)) in
self.V.iter().zip(witness.values.iter().zip(witness.gammas.iter()))
{
if Commitment::new(**gamma, *value).calculate() != **commitment {
return None;
}
}
let Self { generators, V } = self;
// Monero expects all of these points to be torsion-free
// Generally, for Bulletproofs, it sends points * INV_EIGHT and then performs a torsion clear
// by multiplying by 8
// This also restores the original value due to the preprocessing
// Commitments aren't transmitted INV_EIGHT though, so this multiplies by INV_EIGHT to enable
// clearing its cofactor without mutating the value
// For some reason, these values are transcripted * INV_EIGHT, not as transmitted
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
// Pad V
while V.len() < padded_pow_of_2(V.len()) {
V.push(EdwardsPoint::identity());
}
let generators = generators.reduce(V.len() * N);
let mut d_js = Vec::with_capacity(V.len());
let mut a_l = ScalarVector(Vec::with_capacity(V.len() * N));
for j in 1 ..= V.len() {
d_js.push(Self::d_j(j, V.len()));
a_l.0.append(&mut u64_decompose(*witness.values.get(j - 1).unwrap_or(&0)).0);
}
let a_r = a_l.clone() - Scalar::ONE;
let alpha = Scalar::random(&mut *rng);
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 1);
for (i, a_l) in a_l.0.iter().enumerate() {
A_terms.push((*a_l, generators.generator(GeneratorsList::GBold1, i)));
}
for (i, a_r) in a_r.0.iter().enumerate() {
A_terms.push((*a_r, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((alpha, Generators::h()));
let mut A = multiexp(&A_terms);
A_terms.zeroize();
// Multiply by INV_EIGHT per earlier commentary
A.0 *= crate::INV_EIGHT();
let (y, d_descending_y_plus_z, y_mn_plus_one, z, z_pow, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, A);
let a_l = a_l - z;
let a_r = a_r + &d_descending_y_plus_z;
let mut alpha = alpha;
for j in 1 ..= witness.gammas.len() {
alpha += z_pow[j - 1] * witness.gammas[j - 1] * y_mn_plus_one;
}
Some(AggregateRangeProof {
A,
wip: WipStatement::new(generators, A_hat, y)
.prove(rng, transcript, &Zeroizing::new(WipWitness::new(a_l, a_r, alpha).unwrap()))
.unwrap(),
})
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
proof: AggregateRangeProof,
) -> bool {
let Self { generators, V } = self;
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
let generators = generators.reduce(V.len() * N);
let (y, _, _, _, _, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, proof.A);
WipStatement::new(generators, A_hat, y).verify(rng, verifier, id, transcript, proof.wip)
}
}


@@ -1,85 +0,0 @@
#![allow(non_snake_case)]
use group::Group;
use dalek_ff_group::{Scalar, EdwardsPoint};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::ScalarVector;
mod point_vector;
pub(crate) use point_vector::PointVector;
pub(crate) mod transcript;
pub(crate) mod weighted_inner_product;
pub(crate) use weighted_inner_product::*;
pub(crate) mod aggregate_range_proof;
pub(crate) use aggregate_range_proof::*;
pub(crate) fn padded_pow_of_2(i: usize) -> usize {
let mut next_pow_of_2 = 1;
while next_pow_of_2 < i {
next_pow_of_2 <<= 1;
}
next_pow_of_2
}
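
A few sample values for the helper above (note it rounds 0 and 1 both to 1):
assert_eq!(padded_pow_of_2(1), 1);
assert_eq!(padded_pow_of_2(3), 4);
assert_eq!(padded_pow_of_2(64), 64);
assert_eq!(padded_pow_of_2(65), 128);
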
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub(crate) enum GeneratorsList {
GBold1,
HBold1,
}
// TODO: Table these
#[derive(Clone, Debug)]
pub(crate) struct Generators {
g_bold1: &'static [EdwardsPoint],
h_bold1: &'static [EdwardsPoint],
}
mod generators {
use std_shims::sync::OnceLock;
use monero_generators::Generators;
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
}
impl Generators {
#[allow(clippy::new_without_default)]
pub(crate) fn new() -> Self {
let gens = generators::GENERATORS();
Generators { g_bold1: &gens.G, h_bold1: &gens.H }
}
pub(crate) fn len(&self) -> usize {
self.g_bold1.len()
}
pub(crate) fn g() -> EdwardsPoint {
dalek_ff_group::EdwardsPoint(crate::H())
}
pub(crate) fn h() -> EdwardsPoint {
EdwardsPoint::generator()
}
pub(crate) fn generator(&self, list: GeneratorsList, i: usize) -> EdwardsPoint {
match list {
GeneratorsList::GBold1 => self.g_bold1[i],
GeneratorsList::HBold1 => self.h_bold1[i],
}
}
pub(crate) fn reduce(&self, generators: usize) -> Self {
// Round to the nearest power of 2
let generators = padded_pow_of_2(generators);
assert!(generators <= self.g_bold1.len());
Generators { g_bold1: &self.g_bold1[.. generators], h_bold1: &self.h_bold1[.. generators] }
}
}
// Returns the little-endian bit decomposition of the value.
fn u64_decompose(value: u64) -> ScalarVector {
let mut bits = ScalarVector::new(64);
for bit in 0 .. 64 {
bits[bit] = Scalar::from((value >> bit) & 1);
}
bits
}
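
As a sketch, decomposing 6 (binary 110) with the helper above:
// Bit i of the value becomes entry i of the vector (little-endian).
let bits = u64_decompose(6);
assert_eq!(bits[0], Scalar::ZERO);
assert_eq!(bits[1], Scalar::ONE);
assert_eq!(bits[2], Scalar::ONE);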


@@ -1,50 +0,0 @@
use core::ops::{Index, IndexMut};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use dalek_ff_group::EdwardsPoint;
#[cfg(test)]
use multiexp::multiexp;
#[cfg(test)]
use crate::ringct::bulletproofs::plus::ScalarVector;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct PointVector(pub(crate) Vec<EdwardsPoint>);
impl Index<usize> for PointVector {
type Output = EdwardsPoint;
fn index(&self, index: usize) -> &EdwardsPoint {
&self.0[index]
}
}
impl IndexMut<usize> for PointVector {
fn index_mut(&mut self, index: usize) -> &mut EdwardsPoint {
&mut self.0[index]
}
}
impl PointVector {
#[cfg(test)]
pub(crate) fn multiexp(&self, vector: &ScalarVector) -> EdwardsPoint {
debug_assert_eq!(self.len(), vector.len());
let mut res = Vec::with_capacity(self.len());
for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
res.push((scalar, point));
}
multiexp(&res)
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, PointVector(r))
}
}


@@ -1,24 +0,0 @@
use std_shims::{sync::OnceLock, vec::Vec};
use dalek_ff_group::{Scalar, EdwardsPoint};
use monero_generators::{hash_to_point as raw_hash_to_point};
use crate::{hash, hash_to_scalar as dalek_hash};
// Monero starts BP+ transcripts with the following constant.
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
// Why this uses a hash_to_point is completely unknown.
*TRANSCRIPT_CELL
.get_or_init(|| raw_hash_to_point(hash(b"bulletproof_plus_transcript")).compress().to_bytes())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
pub(crate) fn initial_transcript(commitments: core::slice::Iter<'_, EdwardsPoint>) -> Scalar {
let commitments_hash =
hash_to_scalar(&commitments.flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>());
hash_to_scalar(&[TRANSCRIPT().as_ref(), &commitments_hash.to_bytes()].concat())
}


@@ -1,444 +0,0 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use multiexp::{BatchVerifier, multiexp, multiexp_vartime};
use group::{
ff::{Field, PrimeField},
GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::ringct::bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators, padded_pow_of_2, transcript::*,
};
// Figure 1
#[derive(Clone, Debug)]
pub(crate) struct WipStatement {
generators: Generators,
P: EdwardsPoint,
y: ScalarVector,
}
impl Zeroize for WipStatement {
fn zeroize(&mut self) {
self.P.zeroize();
self.y.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct WipWitness {
a: ScalarVector,
b: ScalarVector,
alpha: Scalar,
}
impl WipWitness {
pub(crate) fn new(mut a: ScalarVector, mut b: ScalarVector, alpha: Scalar) -> Option<Self> {
if a.0.is_empty() || (a.len() != b.len()) {
return None;
}
// Pad to the nearest power of 2
let missing = padded_pow_of_2(a.len()) - a.len();
a.0.reserve(missing);
b.0.reserve(missing);
for _ in 0 .. missing {
a.0.push(Scalar::ZERO);
b.0.push(Scalar::ZERO);
}
Some(Self { a, b, alpha })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct WipProof {
pub(crate) L: Vec<EdwardsPoint>,
pub(crate) R: Vec<EdwardsPoint>,
pub(crate) A: EdwardsPoint,
pub(crate) B: EdwardsPoint,
pub(crate) r_answer: Scalar,
pub(crate) s_answer: Scalar,
pub(crate) delta_answer: Scalar,
}
impl WipStatement {
pub(crate) fn new(generators: Generators, P: EdwardsPoint, y: Scalar) -> Self {
debug_assert_eq!(generators.len(), padded_pow_of_2(generators.len()));
// y ** n
let mut y_vec = ScalarVector::new(generators.len());
y_vec[0] = y;
for i in 1 .. y_vec.len() {
y_vec[i] = y_vec[i - 1] * y;
}
Self { generators, P, y: y_vec }
}
fn transcript_L_R(transcript: &mut Scalar, L: EdwardsPoint, R: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), L.to_bytes().as_ref(), R.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
fn transcript_A_B(transcript: &mut Scalar, A: EdwardsPoint, B: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), A.to_bytes().as_ref(), B.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
// Prover's variant of the shared code block to calculate G/H/P when n > 1
// Returns each permutation of G/H, as the prover needs to operate on each permutation
// P is dropped as it's unused in the prover's path
// TODO: It'd still probably be faster to keep in terms of the original generators, both due to
// the reduced amount of group operations and the potential tabling of the generators under
// multiexp
#[allow(clippy::too_many_arguments)]
fn next_G_H(
transcript: &mut Scalar,
mut g_bold1: PointVector,
mut g_bold2: PointVector,
mut h_bold1: PointVector,
mut h_bold2: PointVector,
L: EdwardsPoint,
R: EdwardsPoint,
y_inv_n_hat: Scalar,
) -> (Scalar, Scalar, Scalar, Scalar, PointVector, PointVector) {
debug_assert_eq!(g_bold1.len(), g_bold2.len());
debug_assert_eq!(h_bold1.len(), h_bold2.len());
debug_assert_eq!(g_bold1.len(), h_bold1.len());
let e = Self::transcript_L_R(transcript, L, R);
let inv_e = e.invert().unwrap();
// This vartime is safe as all of these arguments are public
let mut new_g_bold = Vec::with_capacity(g_bold1.len());
let e_y_inv = e * y_inv_n_hat;
for g_bold in g_bold1.0.drain(..).zip(g_bold2.0.drain(..)) {
new_g_bold.push(multiexp_vartime(&[(inv_e, g_bold.0), (e_y_inv, g_bold.1)]));
}
let mut new_h_bold = Vec::with_capacity(h_bold1.len());
for h_bold in h_bold1.0.drain(..).zip(h_bold2.0.drain(..)) {
new_h_bold.push(multiexp_vartime(&[(e, h_bold.0), (inv_e, h_bold.1)]));
}
let e_square = e.square();
let inv_e_square = inv_e.square();
(e, inv_e, e_square, inv_e_square, PointVector(new_g_bold), PointVector(new_h_bold))
}
/*
This has room for optimization worth investigating further. It currently takes
an iterative approach. It can be optimized further via divide and conquer.
Assume there are 4 challenges.
Iterative approach (current):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across that result and column 2.
3. Do the optimal multiplications across that result and column 3.
Divide and conquer (worth investigating further):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across challenge column 2 and 3.
3. Multiply both results together.
When there are 4 challenges (n=16), the iterative approach does 28 multiplications
versus divide and conquer's 24.
*/
fn challenge_products(challenges: &[(Scalar, Scalar)]) -> Vec<Scalar> {
let mut products = vec![Scalar::ONE; 1 << challenges.len()];
if !challenges.is_empty() {
products[0] = challenges[0].1;
products[1] = challenges[0].0;
for (j, challenge) in challenges.iter().enumerate().skip(1) {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * challenge.0;
products[slots - 1] = products[slots / 2] * challenge.1;
slots = slots.saturating_sub(2);
}
}
// Sanity check, as it'd be critical if the above failed to populate these products
for product in &products {
debug_assert!(!bool::from(product.is_zero()));
}
}
products
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
mut transcript: Scalar,
witness: &WipWitness,
) -> Option<WipProof> {
let WipStatement { generators, P, mut y } = self;
#[cfg(not(debug_assertions))]
let _ = P;
if generators.len() != witness.a.len() {
return None;
}
let (g, h) = (Generators::g(), Generators::h());
let mut g_bold = vec![];
let mut h_bold = vec![];
for i in 0 .. generators.len() {
g_bold.push(generators.generator(GeneratorsList::GBold1, i));
h_bold.push(generators.generator(GeneratorsList::HBold1, i));
}
let mut g_bold = PointVector(g_bold);
let mut h_bold = PointVector(h_bold);
// Check P has the expected relationship
#[cfg(debug_assertions)]
{
let mut P_terms = witness
.a
.0
.iter()
.copied()
.zip(g_bold.0.iter().copied())
.chain(witness.b.0.iter().copied().zip(h_bold.0.iter().copied()))
.collect::<Vec<_>>();
P_terms.push((witness.a.clone().weighted_inner_product(&witness.b, &y), g));
P_terms.push((witness.alpha, h));
debug_assert_eq!(multiexp(&P_terms), P);
P_terms.zeroize();
}
let mut a = witness.a.clone();
let mut b = witness.b.clone();
let mut alpha = witness.alpha;
// From here on, g_bold.len() is used as n
debug_assert_eq!(g_bold.len(), a.len());
let mut L_vec = vec![];
let mut R_vec = vec![];
// else n > 1 case from figure 1
while g_bold.len() > 1 {
let (a1, a2) = a.clone().split();
let (b1, b2) = b.clone().split();
let (g_bold1, g_bold2) = g_bold.split();
let (h_bold1, h_bold2) = h_bold.split();
let n_hat = g_bold1.len();
debug_assert_eq!(a1.len(), n_hat);
debug_assert_eq!(a2.len(), n_hat);
debug_assert_eq!(b1.len(), n_hat);
debug_assert_eq!(b2.len(), n_hat);
debug_assert_eq!(g_bold1.len(), n_hat);
debug_assert_eq!(g_bold2.len(), n_hat);
debug_assert_eq!(h_bold1.len(), n_hat);
debug_assert_eq!(h_bold2.len(), n_hat);
let y_n_hat = y[n_hat - 1];
y.0.truncate(n_hat);
let d_l = Scalar::random(&mut *rng);
let d_r = Scalar::random(&mut *rng);
let c_l = a1.clone().weighted_inner_product(&b2, &y);
let c_r = (a2.clone() * y_n_hat).weighted_inner_product(&b1, &y);
// TODO: Calculate these with a batch inversion
let y_inv_n_hat = y_n_hat.invert().unwrap();
let mut L_terms = (a1.clone() * y_inv_n_hat)
.0
.drain(..)
.zip(g_bold2.0.iter().copied())
.chain(b2.0.iter().copied().zip(h_bold1.0.iter().copied()))
.collect::<Vec<_>>();
L_terms.push((c_l, g));
L_terms.push((d_l, h));
let L = multiexp(&L_terms) * Scalar(crate::INV_EIGHT());
L_vec.push(L);
L_terms.zeroize();
let mut R_terms = (a2.clone() * y_n_hat)
.0
.drain(..)
.zip(g_bold1.0.iter().copied())
.chain(b1.0.iter().copied().zip(h_bold2.0.iter().copied()))
.collect::<Vec<_>>();
R_terms.push((c_r, g));
R_terms.push((d_r, h));
let R = multiexp(&R_terms) * Scalar(crate::INV_EIGHT());
R_vec.push(R);
R_terms.zeroize();
let (e, inv_e, e_square, inv_e_square);
(e, inv_e, e_square, inv_e_square, g_bold, h_bold) =
Self::next_G_H(&mut transcript, g_bold1, g_bold2, h_bold1, h_bold2, L, R, y_inv_n_hat);
a = (a1 * e) + &(a2 * (y_n_hat * inv_e));
b = (b1 * inv_e) + &(b2 * e);
alpha += (d_l * e_square) + (d_r * inv_e_square);
debug_assert_eq!(g_bold.len(), a.len());
debug_assert_eq!(g_bold.len(), h_bold.len());
debug_assert_eq!(g_bold.len(), b.len());
}
// n == 1 case from figure 1
debug_assert_eq!(g_bold.len(), 1);
debug_assert_eq!(h_bold.len(), 1);
debug_assert_eq!(a.len(), 1);
debug_assert_eq!(b.len(), 1);
let r = Scalar::random(&mut *rng);
let s = Scalar::random(&mut *rng);
let delta = Scalar::random(&mut *rng);
let eta = Scalar::random(&mut *rng);
let ry = r * y[0];
let mut A_terms =
vec![(r, g_bold[0]), (s, h_bold[0]), ((ry * b[0]) + (s * y[0] * a[0]), g), (delta, h)];
let A = multiexp(&A_terms) * Scalar(crate::INV_EIGHT());
A_terms.zeroize();
let mut B_terms = vec![(ry * s, g), (eta, h)];
let B = multiexp(&B_terms) * Scalar(crate::INV_EIGHT());
B_terms.zeroize();
let e = Self::transcript_A_B(&mut transcript, A, B);
let r_answer = r + (a[0] * e);
let s_answer = s + (b[0] * e);
let delta_answer = eta + (delta * e) + (alpha * e.square());
Some(WipProof { L: L_vec, R: R_vec, A, B, r_answer, s_answer, delta_answer })
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
mut transcript: Scalar,
mut proof: WipProof,
) -> bool {
let WipStatement { generators, P, y } = self;
let (g, h) = (Generators::g(), Generators::h());
// Verify the L/R lengths
{
let mut lr_len = 0;
while (1 << lr_len) < generators.len() {
lr_len += 1;
}
if (proof.L.len() != lr_len) ||
(proof.R.len() != lr_len) ||
(generators.len() != (1 << lr_len))
{
return false;
}
}
let inv_y = {
let inv_y = y[0].invert().unwrap();
let mut res = Vec::with_capacity(y.len());
res.push(inv_y);
while res.len() < y.len() {
res.push(inv_y * res.last().unwrap());
}
res
};
let mut P_terms = vec![(Scalar::ONE, P)];
P_terms.reserve(6 + (2 * generators.len()) + proof.L.len());
let mut challenges = Vec::with_capacity(proof.L.len());
let product_cache = {
let mut es = Vec::with_capacity(proof.L.len());
for (L, R) in proof.L.iter_mut().zip(proof.R.iter_mut()) {
es.push(Self::transcript_L_R(&mut transcript, *L, *R));
*L = L.mul_by_cofactor();
*R = R.mul_by_cofactor();
}
let mut inv_es = es.clone();
let mut scratch = vec![Scalar::ZERO; es.len()];
group::ff::BatchInverter::invert_with_external_scratch(&mut inv_es, &mut scratch);
drop(scratch);
debug_assert_eq!(es.len(), inv_es.len());
debug_assert_eq!(es.len(), proof.L.len());
debug_assert_eq!(es.len(), proof.R.len());
for ((e, inv_e), (L, R)) in
es.drain(..).zip(inv_es.drain(..)).zip(proof.L.iter().zip(proof.R.iter()))
{
debug_assert_eq!(e.invert().unwrap(), inv_e);
challenges.push((e, inv_e));
let e_square = e.square();
let inv_e_square = inv_e.square();
P_terms.push((e_square, *L));
P_terms.push((inv_e_square, *R));
}
Self::challenge_products(&challenges)
};
let e = Self::transcript_A_B(&mut transcript, proof.A, proof.B);
proof.A = proof.A.mul_by_cofactor();
proof.B = proof.B.mul_by_cofactor();
let neg_e_square = -e.square();
let mut multiexp = P_terms;
multiexp.reserve(4 + (2 * generators.len()));
for (scalar, _) in &mut multiexp {
*scalar *= neg_e_square;
}
let re = proof.r_answer * e;
for i in 0 .. generators.len() {
let mut scalar = product_cache[i] * re;
if i > 0 {
scalar *= inv_y[i - 1];
}
multiexp.push((scalar, generators.generator(GeneratorsList::GBold1, i)));
}
let se = proof.s_answer * e;
for i in 0 .. generators.len() {
multiexp.push((
se * product_cache[product_cache.len() - 1 - i],
generators.generator(GeneratorsList::HBold1, i),
));
}
multiexp.push((-e, proof.A));
multiexp.push((proof.r_answer * y[0] * proof.s_answer, g));
multiexp.push((proof.delta_answer, h));
multiexp.push((-Scalar::ONE, proof.B));
verifier.queue(rng, id, multiexp);
true
}
}


@@ -1,138 +0,0 @@
use core::{
borrow::Borrow,
ops::{Index, IndexMut, Add, Sub, Mul},
};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct ScalarVector(pub(crate) Vec<Scalar>);
impl Index<usize> for ScalarVector {
type Output = Scalar;
fn index(&self, index: usize) -> &Scalar {
&self.0[index]
}
}
impl IndexMut<usize> for ScalarVector {
fn index_mut(&mut self, index: usize) -> &mut Scalar {
&mut self.0[index]
}
}
impl<S: Borrow<Scalar>> Add<S> for ScalarVector {
type Output = ScalarVector;
fn add(mut self, scalar: S) -> ScalarVector {
for s in &mut self.0 {
*s += scalar.borrow();
}
self
}
}
impl<S: Borrow<Scalar>> Sub<S> for ScalarVector {
type Output = ScalarVector;
fn sub(mut self, scalar: S) -> ScalarVector {
for s in &mut self.0 {
*s -= scalar.borrow();
}
self
}
}
impl<S: Borrow<Scalar>> Mul<S> for ScalarVector {
type Output = ScalarVector;
fn mul(mut self, scalar: S) -> ScalarVector {
for s in &mut self.0 {
*s *= scalar.borrow();
}
self
}
}
impl Add<&ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn add(mut self, other: &ScalarVector) -> ScalarVector {
debug_assert_eq!(self.len(), other.len());
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
*s += o;
}
self
}
}
impl Sub<&ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn sub(mut self, other: &ScalarVector) -> ScalarVector {
debug_assert_eq!(self.len(), other.len());
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
*s -= o;
}
self
}
}
impl Mul<&ScalarVector> for ScalarVector {
type Output = ScalarVector;
fn mul(mut self, other: &ScalarVector) -> ScalarVector {
debug_assert_eq!(self.len(), other.len());
for (s, o) in self.0.iter_mut().zip(other.0.iter()) {
*s *= o;
}
self
}
}
impl Mul<&[EdwardsPoint]> for &ScalarVector {
type Output = EdwardsPoint;
fn mul(self, b: &[EdwardsPoint]) -> EdwardsPoint {
debug_assert_eq!(self.len(), b.len());
let mut multiexp_args = self.0.iter().copied().zip(b.iter().copied()).collect::<Vec<_>>();
let res = multiexp(&multiexp_args);
multiexp_args.zeroize();
res
}
}
impl ScalarVector {
pub(crate) fn new(len: usize) -> Self {
ScalarVector(vec![Scalar::ZERO; len])
}
pub(crate) fn powers(x: Scalar, len: usize) -> Self {
debug_assert!(len != 0);
let mut res = Vec::with_capacity(len);
res.push(Scalar::ONE);
res.push(x);
for i in 2 .. len {
res.push(res[i - 1] * x);
}
res.truncate(len);
ScalarVector(res)
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn sum(mut self) -> Scalar {
self.0.drain(..).sum()
}
pub(crate) fn inner_product(self, vector: &Self) -> Scalar {
(self * vector).sum()
}
pub(crate) fn weighted_inner_product(self, vector: &Self, y: &Self) -> Scalar {
(self * vector * y).sum()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, ScalarVector(r))
}
}
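
A short sketch of the vector helpers above:
// powers(x, len) is the geometric series [1, x, x^2, ...] truncated to len entries.
let twos = ScalarVector::powers(Scalar::from(2u8), 3);
assert_eq!(twos.0, vec![Scalar::ONE, Scalar::from(2u8), Scalar::from(4u8)]);
// inner_product is the dot product: (1 * 1) + (2 * 2) + (4 * 4) = 21.
assert_eq!(twos.clone().inner_product(&twos), Scalar::from(21u8));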


@@ -1,324 +0,0 @@
#![allow(non_snake_case)]
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use subtle::{ConstantTimeEq, Choice, CtOption};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
traits::{IsIdentity, VartimePrecomputedMultiscalarMul},
edwards::{EdwardsPoint, VartimeEdwardsPrecomputation},
};
use crate::{
INV_EIGHT, Commitment, random_scalar, hash_to_scalar, wallet::decoys::Decoys,
ringct::hash_to_point, serialize::*,
};
#[cfg(feature = "multisig")]
mod multisig;
#[cfg(feature = "multisig")]
pub use multisig::{ClsagDetails, ClsagAddendum, ClsagMultisig};
#[cfg(feature = "multisig")]
pub(crate) use multisig::add_key_image_share;
/// Errors returned when CLSAG signing fails.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum ClsagError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[cfg_attr(feature = "std", error("invalid ring member (member {0}, ring size {1})"))]
InvalidRingMember(u8, u8),
#[cfg_attr(feature = "std", error("invalid commitment"))]
InvalidCommitment,
#[cfg_attr(feature = "std", error("invalid key image"))]
InvalidImage,
#[cfg_attr(feature = "std", error("invalid D"))]
InvalidD,
#[cfg_attr(feature = "std", error("invalid s"))]
InvalidS,
#[cfg_attr(feature = "std", error("invalid c1"))]
InvalidC1,
}
/// Input being signed for.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagInput {
// The actual commitment for the true spend
pub(crate) commitment: Commitment,
// True spend index, offsets, and ring
pub(crate) decoys: Decoys,
}
impl ClsagInput {
pub fn new(commitment: Commitment, decoys: Decoys) -> Result<ClsagInput, ClsagError> {
let n = decoys.len();
if n > u8::MAX.into() {
Err(ClsagError::InternalError("max ring size in this library is u8 max"))?;
}
let n = u8::try_from(n).unwrap();
if decoys.i >= n {
Err(ClsagError::InvalidRingMember(decoys.i, n))?;
}
// Validate the commitment matches
if decoys.ring[usize::from(decoys.i)][1] != commitment.calculate() {
Err(ClsagError::InvalidCommitment)?;
}
Ok(ClsagInput { commitment, decoys })
}
}
#[allow(clippy::large_enum_variant)]
enum Mode {
Sign(usize, EdwardsPoint, EdwardsPoint),
Verify(Scalar),
}
// Core of the CLSAG algorithm, applicable to both sign and verify with minimal differences
// Said differences are covered via the above Mode
fn core(
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
D: &EdwardsPoint,
s: &[Scalar],
A_c1: &Mode,
) -> ((EdwardsPoint, Scalar, Scalar), Scalar) {
let n = ring.len();
let images_precomp = VartimeEdwardsPrecomputation::new([I, D]);
let D = D * INV_EIGHT();
// Generate the transcript
// Instead of generating multiple, a single transcript is created and then edited as needed
const PREFIX: &[u8] = b"CLSAG_";
#[rustfmt::skip]
const AGG_0: &[u8] = b"agg_0";
#[rustfmt::skip]
const ROUND: &[u8] = b"round";
const PREFIX_AGG_0_LEN: usize = PREFIX.len() + AGG_0.len();
let mut to_hash = Vec::with_capacity(((2 * n) + 5) * 32);
to_hash.extend(PREFIX);
to_hash.extend(AGG_0);
to_hash.extend([0; 32 - PREFIX_AGG_0_LEN]);
let mut P = Vec::with_capacity(n);
for member in ring {
P.push(member[0]);
to_hash.extend(member[0].compress().to_bytes());
}
let mut C = Vec::with_capacity(n);
for member in ring {
C.push(member[1] - pseudo_out);
to_hash.extend(member[1].compress().to_bytes());
}
to_hash.extend(I.compress().to_bytes());
to_hash.extend(D.compress().to_bytes());
to_hash.extend(pseudo_out.compress().to_bytes());
// mu_P with agg_0
let mu_P = hash_to_scalar(&to_hash);
// mu_C with agg_1
to_hash[PREFIX_AGG_0_LEN - 1] = b'1';
let mu_C = hash_to_scalar(&to_hash);
// Truncate it for the round transcript, altering the DST as needed
to_hash.truncate(((2 * n) + 1) * 32);
for i in 0 .. ROUND.len() {
to_hash[PREFIX.len() + i] = ROUND[i];
}
// Unfortunately, the order is I, D, pseudo_out, not pseudo_out, I, D, so the truncation removes
// pseudo_out just for it to be immediately added back
to_hash.extend(pseudo_out.compress().to_bytes());
to_hash.extend(msg);
// Configure the loop based on if we're signing or verifying
let start;
let end;
let mut c;
match A_c1 {
Mode::Sign(r, A, AH) => {
start = r + 1;
end = r + n;
to_hash.extend(A.compress().to_bytes());
to_hash.extend(AH.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
Mode::Verify(c1) => {
start = 0;
end = n;
c = *c1;
}
}
// Perform the core loop
let mut c1 = CtOption::new(Scalar::ZERO, Choice::from(0));
for i in (start .. end).map(|i| i % n) {
// This will only execute once and shouldn't need to be constant time. Making it constant time
// does, however, remove the risk of branch prediction creating timing differences dependent on
// the ring index
c1 = c1.or_else(|| CtOption::new(c, i.ct_eq(&0)));
let c_p = mu_P * c;
let c_c = mu_C * c;
let L = (&s[i] * ED25519_BASEPOINT_TABLE) + (c_p * P[i]) + (c_c * C[i]);
let PH = hash_to_point(&P[i]);
// Shouldn't be an issue as all of the variables in this vartime statement are public
let R = (s[i] * PH) + images_precomp.vartime_multiscalar_mul([c_p, c_c]);
to_hash.truncate(((2 * n) + 3) * 32);
to_hash.extend(L.compress().to_bytes());
to_hash.extend(R.compress().to_bytes());
c = hash_to_scalar(&to_hash);
}
// This first tuple is needed to continue signing, the latter is the c to be tested/worked with
((D, c * mu_P, c * mu_C), c1.unwrap_or(c))
}
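
For reference, the loop above implements the CLSAG round function. With aggregation coefficients
$\mu_P, \mu_C$, ring keys $P_i$, commitment offsets $C_i$, key image $I$, and auxiliary image $D$:
$$L_i = s_i G + c_i \mu_P P_i + c_i \mu_C C_i, \qquad R_i = s_i \mathcal{H}_p(P_i) + c_i \mu_P I + c_i \mu_C D$$
with $c_{i+1} = \mathcal{H}_s(\text{round transcript} \parallel L_i \parallel R_i)$. Verification
accepts when the chain of challenges closes back to the claimed $c_1$.
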
/// CLSAG signature, as used in Monero.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Clsag {
pub D: EdwardsPoint,
pub s: Vec<Scalar>,
pub c1: Scalar,
}
impl Clsag {
// sign_core is the extension of core needed for signing, yet is shared between the single-signer
// and multisig paths, hence why it's still "core"
pub(crate) fn sign_core<R: RngCore + CryptoRng>(
rng: &mut R,
I: &EdwardsPoint,
input: &ClsagInput,
mask: Scalar,
msg: &[u8; 32],
A: EdwardsPoint,
AH: EdwardsPoint,
) -> (Clsag, EdwardsPoint, Scalar, Scalar) {
let r: usize = input.decoys.i.into();
let pseudo_out = Commitment::new(mask, input.commitment.amount).calculate();
let z = input.commitment.mask - mask;
let H = hash_to_point(&input.decoys.ring[r][0]);
let D = H * z;
let mut s = Vec::with_capacity(input.decoys.ring.len());
for _ in 0 .. input.decoys.ring.len() {
s.push(random_scalar(rng));
}
let ((D, p, c), c1) =
core(&input.decoys.ring, I, &pseudo_out, msg, &D, &s, &Mode::Sign(r, A, AH));
(Clsag { D, s, c1 }, pseudo_out, p, c * z)
}
/// Generate CLSAG signatures for the given inputs.
/// inputs is of the form (private key, key image, input).
/// sum_outputs is the sum of the outputs' commitment masks.
pub fn sign<R: RngCore + CryptoRng>(
rng: &mut R,
mut inputs: Vec<(Zeroizing<Scalar>, EdwardsPoint, ClsagInput)>,
sum_outputs: Scalar,
msg: [u8; 32],
) -> Vec<(Clsag, EdwardsPoint)> {
let mut res = Vec::with_capacity(inputs.len());
let mut sum_pseudo_outs = Scalar::ZERO;
for i in 0 .. inputs.len() {
let mut mask = random_scalar(rng);
if i == (inputs.len() - 1) {
mask = sum_outputs - sum_pseudo_outs;
} else {
sum_pseudo_outs += mask;
}
let mut nonce = Zeroizing::new(random_scalar(rng));
let (mut clsag, pseudo_out, p, c) = Clsag::sign_core(
rng,
&inputs[i].1,
&inputs[i].2,
mask,
&msg,
nonce.deref() * ED25519_BASEPOINT_TABLE,
nonce.deref() *
hash_to_point(&inputs[i].2.decoys.ring[usize::from(inputs[i].2.decoys.i)][0]),
);
clsag.s[usize::from(inputs[i].2.decoys.i)] =
(-((p * inputs[i].0.deref()) + c)) + nonce.deref();
inputs[i].0.zeroize();
nonce.zeroize();
debug_assert!(clsag
.verify(&inputs[i].2.decoys.ring, &inputs[i].1, &pseudo_out, &msg)
.is_ok());
res.push((clsag, pseudo_out));
}
res
}
/// Verify the CLSAG signature against the given Transaction data.
pub fn verify(
&self,
ring: &[[EdwardsPoint; 2]],
I: &EdwardsPoint,
pseudo_out: &EdwardsPoint,
msg: &[u8; 32],
) -> Result<(), ClsagError> {
// Preliminary checks. s, c1, and points must also be encoded canonically, which isn't checked
// here
if ring.is_empty() {
Err(ClsagError::InvalidRing)?;
}
if ring.len() != self.s.len() {
Err(ClsagError::InvalidS)?;
}
if I.is_identity() {
Err(ClsagError::InvalidImage)?;
}
let D = self.D.mul_by_cofactor();
if D.is_identity() {
Err(ClsagError::InvalidD)?;
}
let (_, c1) = core(ring, I, pseudo_out, msg, &D, &self.s, &Mode::Verify(self.c1));
if c1 != self.c1 {
Err(ClsagError::InvalidC1)?;
}
Ok(())
}
pub(crate) fn fee_weight(ring_len: usize) -> usize {
(ring_len * 32) + 32 + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_raw_vec(write_scalar, &self.s, w)?;
w.write_all(&self.c1.to_bytes())?;
write_point(&self.D, w)
}
pub fn read<R: Read>(decoys: usize, r: &mut R) -> io::Result<Clsag> {
Ok(Clsag { s: read_raw_vec(read_scalar, decoys, r)?, c1: read_scalar(r)?, D: read_point(r)? })
}
}


@@ -1,305 +0,0 @@
use core::{ops::Deref, fmt::Debug};
use std_shims::io::{self, Read, Write};
use std::sync::{Arc, RwLock};
use rand_core::{RngCore, CryptoRng, SeedableRng};
use rand_chacha::ChaCha20Rng;
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use group::{ff::Field, Group, GroupEncoding};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group as dfg;
use dleq::DLEqProof;
use frost::{
dkg::lagrange,
curve::Ed25519,
Participant, FrostError, ThresholdKeys, ThresholdView,
algorithm::{WriteAddendum, Algorithm},
};
use crate::ringct::{
hash_to_point,
clsag::{ClsagInput, Clsag},
};
fn dleq_transcript() -> RecommendedTranscript {
RecommendedTranscript::new(b"monero_key_image_dleq")
}
impl ClsagInput {
fn transcript<T: Transcript>(&self, transcript: &mut T) {
// Doesn't domain separate as this is considered part of the larger CLSAG proof
// Ring index
transcript.append_message(b"real_spend", [self.decoys.i]);
// Ring
for (i, pair) in self.decoys.ring.iter().enumerate() {
// Doesn't include the global output indexes, as CLSAG neither cares about nor is affected by
// them. They're just an unreliable reference to this data, which will be included in the
// message if in use
transcript.append_message(b"member", [u8::try_from(i).expect("ring size exceeded 255")]);
transcript.append_message(b"key", pair[0].compress().to_bytes());
transcript.append_message(b"commitment", pair[1].compress().to_bytes())
}
// Doesn't include the commitment's parts as the above ring + index includes the commitment
// The only potential malleability would be if the G/H relationship is known breaking the
// discrete log problem, which breaks everything already
}
}
/// CLSAG input and the mask to use for it.
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub struct ClsagDetails {
input: ClsagInput,
mask: Scalar,
}
impl ClsagDetails {
pub fn new(input: ClsagInput, mask: Scalar) -> ClsagDetails {
ClsagDetails { input, mask }
}
}
/// Addendum produced during the FROST signing process with relevant data.
#[derive(Clone, PartialEq, Eq, Zeroize, Debug)]
pub struct ClsagAddendum {
pub(crate) key_image: dfg::EdwardsPoint,
dleq: DLEqProof<dfg::EdwardsPoint>,
}
impl WriteAddendum for ClsagAddendum {
fn write<W: Write>(&self, writer: &mut W) -> io::Result<()> {
writer.write_all(self.key_image.compress().to_bytes().as_ref())?;
self.dleq.write(writer)
}
}
#[allow(non_snake_case)]
#[derive(Clone, PartialEq, Eq, Debug)]
struct Interim {
p: Scalar,
c: Scalar,
clsag: Clsag,
pseudo_out: EdwardsPoint,
}
/// FROST algorithm for producing a CLSAG signature.
#[allow(non_snake_case)]
#[derive(Clone, Debug)]
pub struct ClsagMultisig {
transcript: RecommendedTranscript,
pub(crate) H: EdwardsPoint,
// Merged here as CLSAG needs it; passing it separately would be a mess, yet having it
// beforehand would require an extra round
image: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
msg: Option<[u8; 32]>,
interim: Option<Interim>,
}
impl ClsagMultisig {
pub fn new(
transcript: RecommendedTranscript,
output_key: EdwardsPoint,
details: Arc<RwLock<Option<ClsagDetails>>>,
) -> ClsagMultisig {
ClsagMultisig {
transcript,
H: hash_to_point(&output_key),
image: EdwardsPoint::identity(),
details,
msg: None,
interim: None,
}
}
fn input(&self) -> ClsagInput {
(*self.details.read().unwrap()).as_ref().unwrap().input.clone()
}
fn mask(&self) -> Scalar {
(*self.details.read().unwrap()).as_ref().unwrap().mask
}
}
pub(crate) fn add_key_image_share(
image: &mut EdwardsPoint,
generator: EdwardsPoint,
offset: Scalar,
included: &[Participant],
participant: Participant,
share: EdwardsPoint,
) {
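// On the first share, seed the image with the offset's contribution (offset * generator);
// each share is then weighted by its Lagrange coefficient, so the total reconstructs
// secret * generator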
if image.is_identity().into() {
*image = generator * offset;
}
*image += share * lagrange::<dfg::Scalar>(participant, included).0;
}
impl Algorithm<Ed25519> for ClsagMultisig {
type Transcript = RecommendedTranscript;
type Addendum = ClsagAddendum;
type Signature = (Clsag, EdwardsPoint);
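// Two generators per nonce: G for the Schnorr-style signature component and
// H = Hp(output key) for the key image component, so a single nonce commitment binds both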
fn nonces(&self) -> Vec<Vec<dfg::EdwardsPoint>> {
vec![vec![dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)]]
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Ed25519>,
) -> ClsagAddendum {
ClsagAddendum {
key_image: dfg::EdwardsPoint(self.H) * keys.secret_share().deref(),
dleq: DLEqProof::prove(
rng,
// Doesn't take in a larger transcript object due to how this is used:
// every prover would immediately write their own DLEq proof, yet they could only do so in
// the proper order if they wanted to reach consensus
// It'd be a poor API to have CLSAG define a new transcript solely to pass here, just to
// try to merge it later in some form, when it should instead just merge xH (as it does)
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
keys.secret_share(),
),
}
}
fn read_addendum<R: Read>(&self, reader: &mut R) -> io::Result<ClsagAddendum> {
let mut bytes = [0; 32];
reader.read_exact(&mut bytes)?;
// dfg ensures the point is torsion free
let xH = Option::<dfg::EdwardsPoint>::from(dfg::EdwardsPoint::from_bytes(&bytes))
.ok_or_else(|| io::Error::other("invalid key image"))?;
// Ensure this is a canonical point
if xH.to_bytes() != bytes {
Err(io::Error::other("non-canonical key image"))?;
}
Ok(ClsagAddendum { key_image: xH, dleq: DLEqProof::<dfg::EdwardsPoint>::read(reader)? })
}
fn process_addendum(
&mut self,
view: &ThresholdView<Ed25519>,
l: Participant,
addendum: ClsagAddendum,
) -> Result<(), FrostError> {
// TODO: This check is faulty if two shares are additive inverses of each other
if self.image.is_identity().into() {
self.transcript.domain_separate(b"CLSAG");
self.input().transcript(&mut self.transcript);
self.transcript.append_message(b"mask", self.mask().to_bytes());
}
self.transcript.append_message(b"participant", l.to_bytes());
addendum
.dleq
.verify(
&mut dleq_transcript(),
&[dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(self.H)],
&[view.original_verification_share(l), addendum.key_image],
)
.map_err(|_| FrostError::InvalidPreprocess(l))?;
self.transcript.append_message(b"key_image_share", addendum.key_image.compress().to_bytes());
add_key_image_share(
&mut self.image,
self.H,
view.offset().0,
view.included(),
l,
addendum.key_image.0,
);
Ok(())
}
fn transcript(&mut self) -> &mut Self::Transcript {
&mut self.transcript
}
fn sign_share(
&mut self,
view: &ThresholdView<Ed25519>,
nonce_sums: &[Vec<dfg::EdwardsPoint>],
nonces: Vec<Zeroizing<dfg::Scalar>>,
msg: &[u8],
) -> dfg::Scalar {
// Use the transcript to get a seeded random number generator
// The transcript contains private data, preventing passive adversaries from recreating this
// process even if they have access to commitments (specifically, the ring index being signed
// for, along with the mask which should not only require knowing the shared keys yet also the
// input commitment masks)
let mut rng = ChaCha20Rng::from_seed(self.transcript.rng_seed(b"decoy_responses"));
self.msg = Some(msg.try_into().expect("CLSAG message should be 32 bytes"));
#[allow(non_snake_case)]
let (clsag, pseudo_out, p, c) = Clsag::sign_core(
&mut rng,
&self.image,
&self.input(),
self.mask(),
self.msg.as_ref().unwrap(),
nonce_sums[0][0].0,
nonce_sums[0][1].0,
);
self.interim = Some(Interim { p, c, clsag, pseudo_out });
(-(dfg::Scalar(p) * view.secret_share().deref())) + nonces[0].deref()
}
#[must_use]
fn verify(
&self,
_: dfg::EdwardsPoint,
_: &[Vec<dfg::EdwardsPoint>],
sum: dfg::Scalar,
) -> Option<Self::Signature> {
let interim = self.interim.as_ref().unwrap();
let mut clsag = interim.clsag.clone();
clsag.s[usize::from(self.input().decoys.i)] = sum.0 - interim.c;
if clsag
.verify(
&self.input().decoys.ring,
&self.image,
&interim.pseudo_out,
self.msg.as_ref().unwrap(),
)
.is_ok()
{
return Some((clsag, interim.pseudo_out));
}
None
}
fn verify_share(
&self,
verification_share: dfg::EdwardsPoint,
nonces: &[Vec<dfg::EdwardsPoint>],
share: dfg::Scalar,
) -> Result<Vec<(dfg::Scalar, dfg::EdwardsPoint)>, ()> {
let interim = self.interim.as_ref().unwrap();
Ok(vec![
(share, dfg::EdwardsPoint::generator()),
(dfg::Scalar(interim.p), verification_share),
(-dfg::Scalar::ONE, nonces[0][0]),
])
}
}

View File

@@ -1,8 +0,0 @@
use curve25519_dalek::edwards::EdwardsPoint;
pub use monero_generators::{hash_to_point as raw_hash_to_point};
/// Monero's hash-to-point function, named `ge_fromfe_frombytes_vartime` in the reference
/// implementation.
pub fn hash_to_point(key: &EdwardsPoint) -> EdwardsPoint {
raw_hash_to_point(key.compress().to_bytes())
}


@@ -1,216 +0,0 @@
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::{traits::IsIdentity, Scalar, EdwardsPoint};
use monero_generators::H;
use crate::{hash_to_scalar, ringct::hash_to_point, serialize::*};
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum MlsagError {
#[cfg_attr(feature = "std", error("invalid ring"))]
InvalidRing,
#[cfg_attr(feature = "std", error("invalid amount of key images"))]
InvalidAmountOfKeyImages,
#[cfg_attr(feature = "std", error("invalid ss"))]
InvalidSs,
#[cfg_attr(feature = "std", error("key image was identity"))]
IdentityKeyImage,
#[cfg_attr(feature = "std", error("invalid ci"))]
InvalidCi,
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingMatrix {
matrix: Vec<Vec<EdwardsPoint>>,
}
impl RingMatrix {
pub fn new(matrix: Vec<Vec<EdwardsPoint>>) -> Result<Self, MlsagError> {
// Monero requires that there is more than one ring member for MLSAG signatures:
// https://github.com/monero-project/monero/blob/ac02af92867590ca80b2779a7bbeafa99ff94dcb/
// src/ringct/rctSigs.cpp#L462
if matrix.len() < 2 {
Err(MlsagError::InvalidRing)?;
}
for member in &matrix {
if member.is_empty() || (member.len() != matrix[0].len()) {
Err(MlsagError::InvalidRing)?;
}
}
Ok(RingMatrix { matrix })
}
/// Construct a ring matrix for an individual output.
pub fn individual(
ring: &[[EdwardsPoint; 2]],
pseudo_out: EdwardsPoint,
) -> Result<Self, MlsagError> {
let mut matrix = Vec::with_capacity(ring.len());
for ring_member in ring {
matrix.push(vec![ring_member[0], ring_member[1] - pseudo_out]);
}
RingMatrix::new(matrix)
}
pub fn iter(&self) -> impl Iterator<Item = &[EdwardsPoint]> {
self.matrix.iter().map(AsRef::as_ref)
}
/// Return the number of members in the ring.
pub fn members(&self) -> usize {
self.matrix.len()
}
/// Returns the length of a ring member.
///
/// A ring member is a vector of points for which the signer knows all of the discrete
/// logarithms.
pub fn member_len(&self) -> usize {
// this is safe to do as the constructors don't allow empty rings
self.matrix[0].len()
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Mlsag {
pub ss: Vec<Vec<Scalar>>,
pub cc: Scalar,
}
impl Mlsag {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for ss in &self.ss {
write_raw_vec(write_scalar, ss, w)?;
}
write_scalar(&self.cc, w)
}
pub fn read<R: Read>(mixins: usize, ss_2_elements: usize, r: &mut R) -> io::Result<Mlsag> {
Ok(Mlsag {
ss: (0 .. mixins)
.map(|_| read_raw_vec(read_scalar, ss_2_elements, r))
.collect::<Result<_, _>>()?,
cc: read_scalar(r)?,
})
}
pub fn verify(
&self,
msg: &[u8; 32],
ring: &RingMatrix,
key_images: &[EdwardsPoint],
) -> Result<(), MlsagError> {
// MLSAG allows layers to not require linkability; such layers don't need key images
// Monero requires that there is always only 1 non-linkable layer - the amount commitments.
if ring.member_len() != (key_images.len() + 1) {
Err(MlsagError::InvalidAmountOfKeyImages)?;
}
let mut buf = Vec::with_capacity(6 * 32);
buf.extend_from_slice(msg);
let mut ci = self.cc;
// This is an iterator over the key images as options with an added entry of `None` at the
// end for the non-linkable layer
let key_images_iter = key_images.iter().map(|ki| Some(*ki)).chain(core::iter::once(None));
if ring.matrix.len() != self.ss.len() {
Err(MlsagError::InvalidSs)?;
}
for (ring_member, ss) in ring.iter().zip(&self.ss) {
if ring_member.len() != ss.len() {
Err(MlsagError::InvalidSs)?;
}
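// Each layer checks the standard (M)LSAG ring equations:
//   L = (c_i * P) + (s * G)
//   R = (s * Hp(P)) + (c_i * KI), for linkable layers only
// with L (and R, when present) absorbed into the next challenge c_{i+1}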
for ((ring_member_entry, s), ki) in ring_member.iter().zip(ss).zip(key_images_iter.clone()) {
#[allow(non_snake_case)]
let L = EdwardsPoint::vartime_double_scalar_mul_basepoint(&ci, ring_member_entry, s);
buf.extend_from_slice(ring_member_entry.compress().as_bytes());
buf.extend_from_slice(L.compress().as_bytes());
// Not all dimensions need to be linkable, e.g. commitments, and only linkable layers need
// to have key images.
if let Some(ki) = ki {
if ki.is_identity() {
Err(MlsagError::IdentityKeyImage)?;
}
#[allow(non_snake_case)]
let R = (s * hash_to_point(ring_member_entry)) + (ci * ki);
buf.extend_from_slice(R.compress().as_bytes());
}
}
ci = hash_to_scalar(&buf);
// keep the msg in the buffer.
buf.drain(msg.len() ..);
}
if ci != self.cc {
Err(MlsagError::InvalidCi)?
}
Ok(())
}
}
/// An aggregate ring matrix builder, usable to set up the ring matrix to prove/verify an aggregate
/// MLSAG signature.
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct AggregateRingMatrixBuilder {
key_ring: Vec<Vec<EdwardsPoint>>,
amounts_ring: Vec<EdwardsPoint>,
sum_out: EdwardsPoint,
}
impl AggregateRingMatrixBuilder {
/// Create a new AggregateRingMatrixBuilder.
///
/// Takes in the transaction's output commitments and fee.
pub fn new(commitments: &[EdwardsPoint], fee: u64) -> Self {
AggregateRingMatrixBuilder {
key_ring: vec![],
amounts_ring: vec![],
sum_out: commitments.iter().sum::<EdwardsPoint>() + (H() * Scalar::from(fee)),
}
}
/// Push a ring of [output key, commitment] to the matrix.
pub fn push_ring(&mut self, ring: &[[EdwardsPoint; 2]]) -> Result<(), MlsagError> {
if self.key_ring.is_empty() {
self.key_ring = vec![vec![]; ring.len()];
// Now that we know the length of the ring, fill the `amounts_ring`.
self.amounts_ring = vec![-self.sum_out; ring.len()];
}
if (self.amounts_ring.len() != ring.len()) || ring.is_empty() {
// All the rings in an aggregate matrix must be the same length.
return Err(MlsagError::InvalidRing);
}
for (i, ring_member) in ring.iter().enumerate() {
self.key_ring[i].push(ring_member[0]);
self.amounts_ring[i] += ring_member[1]
}
Ok(())
}
/// Build and return the [`RingMatrix`]
pub fn build(mut self) -> Result<RingMatrix, MlsagError> {
for (i, amount_commitment) in self.amounts_ring.drain(..).enumerate() {
self.key_ring[i].push(amount_commitment);
}
RingMatrix::new(self.key_ring)
}
}


@@ -1,400 +0,0 @@
use core::ops::Deref;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
pub(crate) mod hash_to_point;
pub use hash_to_point::{raw_hash_to_point, hash_to_point};
/// MLSAG struct, along with verifying functionality.
pub mod mlsag;
/// CLSAG struct, along with signing and verifying functionality.
pub mod clsag;
/// BorromeanRange struct, along with verifying functionality.
pub mod borromean;
/// Bulletproofs(+) structs, along with proving and verifying functionality.
pub mod bulletproofs;
use crate::{
Protocol,
serialize::*,
ringct::{mlsag::Mlsag, clsag::Clsag, borromean::BorromeanRange, bulletproofs::Bulletproofs},
};
/// Generate a key image for a given key. Defined as `x * hash_to_point(xG)`.
pub fn generate_key_image(secret: &Zeroizing<Scalar>) -> EdwardsPoint {
hash_to_point(&(ED25519_BASEPOINT_TABLE * secret.deref())) * secret.deref()
}
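// A minimal usage sketch (illustrative; assumes `rand_core::OsRng` is available):
//
//   let secret = Zeroizing::new(Scalar::random(&mut OsRng));
//   let image = generate_key_image(&secret);
//
// The same secret always yields the same image, which is what makes double-spends linkable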
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum EncryptedAmount {
Original { mask: [u8; 32], amount: [u8; 32] },
Compact { amount: [u8; 8] },
}
impl EncryptedAmount {
pub fn read<R: Read>(compact: bool, r: &mut R) -> io::Result<EncryptedAmount> {
Ok(if !compact {
EncryptedAmount::Original { mask: read_bytes(r)?, amount: read_bytes(r)? }
} else {
EncryptedAmount::Compact { amount: read_bytes(r)? }
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
EncryptedAmount::Original { mask, amount } => {
w.write_all(mask)?;
w.write_all(amount)
}
EncryptedAmount::Compact { amount } => w.write_all(amount),
}
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum RctType {
/// No RCT proofs.
Null,
/// One MLSAG for multiple inputs and Borromean range proofs (RCTTypeFull).
MlsagAggregate,
/// One MLSAG for each input and a Borromean range proof (RCTTypeSimple).
MlsagIndividual,
/// One MLSAG for each input and a Bulletproof (RCTTypeBulletproof).
Bulletproofs,
/// One MLSAG for each input and a Bulletproof, now using EncryptedAmount::Compact
/// (RCTTypeBulletproof2).
BulletproofsCompactAmount,
/// One CLSAG for each input and a Bulletproof (RCTTypeCLSAG).
Clsag,
/// One CLSAG for each input and a Bulletproof+ (RCTTypeBulletproofPlus).
BulletproofsPlus,
}
impl RctType {
pub fn to_byte(self) -> u8 {
match self {
RctType::Null => 0,
RctType::MlsagAggregate => 1,
RctType::MlsagIndividual => 2,
RctType::Bulletproofs => 3,
RctType::BulletproofsCompactAmount => 4,
RctType::Clsag => 5,
RctType::BulletproofsPlus => 6,
}
}
pub fn from_byte(byte: u8) -> Option<Self> {
Some(match byte {
0 => RctType::Null,
1 => RctType::MlsagAggregate,
2 => RctType::MlsagIndividual,
3 => RctType::Bulletproofs,
4 => RctType::BulletproofsCompactAmount,
5 => RctType::Clsag,
6 => RctType::BulletproofsPlus,
_ => None?,
})
}
pub fn compact_encrypted_amounts(&self) -> bool {
match self {
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs => false,
RctType::BulletproofsCompactAmount | RctType::Clsag | RctType::BulletproofsPlus => true,
}
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctBase {
pub fee: u64,
pub pseudo_outs: Vec<EdwardsPoint>,
pub encrypted_amounts: Vec<EncryptedAmount>,
pub commitments: Vec<EdwardsPoint>,
}
impl RctBase {
pub(crate) fn fee_weight(outputs: usize, fee: u64) -> usize {
// 1 byte for the RCT signature type
1 + (outputs * (8 + 32)) + varint_len(fee)
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
w.write_all(&[rct_type.to_byte()])?;
match rct_type {
RctType::Null => Ok(()),
_ => {
write_varint(&self.fee, w)?;
if rct_type == RctType::MlsagIndividual {
write_raw_vec(write_point, &self.pseudo_outs, w)?;
}
for encrypted_amount in &self.encrypted_amounts {
encrypted_amount.write(w)?;
}
write_raw_vec(write_point, &self.commitments, w)
}
}
}
pub fn read<R: Read>(inputs: usize, outputs: usize, r: &mut R) -> io::Result<(RctBase, RctType)> {
let rct_type =
RctType::from_byte(read_byte(r)?).ok_or_else(|| io::Error::other("invalid RCT type"))?;
match rct_type {
RctType::Null | RctType::MlsagAggregate | RctType::MlsagIndividual => {}
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag |
RctType::BulletproofsPlus => {
if outputs == 0 {
// Because the Bulletproofs(+) layout must be canonical, there must be 1 Bulletproof if
// Bulletproofs are in use
// If there are Bulletproofs, there must be a matching number of outputs, implicitly
// banning 0 outputs
// Since HF 12 (CLSAG being 13), a 2-output minimum has also been enforced
Err(io::Error::other("RCT with Bulletproofs(+) had 0 outputs"))?;
}
}
}
Ok((
if rct_type == RctType::Null {
RctBase { fee: 0, pseudo_outs: vec![], encrypted_amounts: vec![], commitments: vec![] }
} else {
RctBase {
fee: read_varint(r)?,
pseudo_outs: if rct_type == RctType::MlsagIndividual {
read_raw_vec(read_point, inputs, r)?
} else {
vec![]
},
encrypted_amounts: (0 .. outputs)
.map(|_| EncryptedAmount::read(rct_type.compact_encrypted_amounts(), r))
.collect::<Result<_, _>>()?,
commitments: read_raw_vec(read_point, outputs, r)?,
}
},
rct_type,
))
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum RctPrunable {
Null,
AggregateMlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsag: Mlsag,
},
MlsagBorromean {
borromean: Vec<BorromeanRange>,
mlsags: Vec<Mlsag>,
},
MlsagBulletproofs {
bulletproofs: Bulletproofs,
mlsags: Vec<Mlsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
Clsag {
bulletproofs: Bulletproofs,
clsags: Vec<Clsag>,
pseudo_outs: Vec<EdwardsPoint>,
},
}
impl RctPrunable {
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize) -> usize {
// 1 byte for number of BPs (technically a VarInt, yet there's always just zero or one)
1 + Bulletproofs::fee_weight(protocol.bp_plus(), outputs) +
(inputs * (Clsag::fee_weight(protocol.ring_len()) + 32))
}
pub fn write<W: Write>(&self, w: &mut W, rct_type: RctType) -> io::Result<()> {
match self {
RctPrunable::Null => Ok(()),
RctPrunable::AggregateMlsagBorromean { borromean, mlsag } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
mlsag.write(w)
}
RctPrunable::MlsagBorromean { borromean, mlsags } => {
write_raw_vec(BorromeanRange::write, borromean, w)?;
write_raw_vec(Mlsag::write, mlsags, w)
}
RctPrunable::MlsagBulletproofs { bulletproofs, mlsags, pseudo_outs } => {
if rct_type == RctType::Bulletproofs {
w.write_all(&1u32.to_le_bytes())?;
} else {
w.write_all(&[1])?;
}
bulletproofs.write(w)?;
write_raw_vec(Mlsag::write, mlsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
w.write_all(&[1])?;
bulletproofs.write(w)?;
write_raw_vec(Clsag::write, clsags, w)?;
write_raw_vec(write_point, pseudo_outs, w)
}
}
}
pub fn serialize(&self, rct_type: RctType) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized, rct_type).unwrap();
serialized
}
pub fn read<R: Read>(
rct_type: RctType,
ring_length: usize,
inputs: usize,
outputs: usize,
r: &mut R,
) -> io::Result<RctPrunable> {
// While we generally don't bother with misc consensus checks, this affects the safety of
// the below defined rct_type function
// The exact line preventing zero-input transactions is:
// https://github.com/monero-project/monero/blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/ringct/rctSigs.cpp#L609
// And RctType::Null is only allowed for miner TXs, which require one input of
// Input::Gen
if inputs == 0 {
Err(io::Error::other("transaction had no inputs"))?;
}
Ok(match rct_type {
RctType::Null => RctPrunable::Null,
RctType::MlsagAggregate => RctPrunable::AggregateMlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsag: Mlsag::read(ring_length, inputs + 1, r)?,
},
RctType::MlsagIndividual => RctPrunable::MlsagBorromean {
borromean: read_raw_vec(BorromeanRange::read, outputs, r)?,
mlsags: (0 .. inputs).map(|_| Mlsag::read(ring_length, 2, r)).collect::<Result<_, _>>()?,
},
RctType::Bulletproofs | RctType::BulletproofsCompactAmount => {
RctPrunable::MlsagBulletproofs {
bulletproofs: {
if (if rct_type == RctType::Bulletproofs {
u64::from(read_u32(r)?)
} else {
read_varint(r)?
}) != 1
{
Err(io::Error::other("n bulletproofs instead of one"))?;
}
Bulletproofs::read(r)?
},
mlsags: (0 .. inputs)
.map(|_| Mlsag::read(ring_length, 2, r))
.collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, inputs, r)?,
}
}
RctType::Clsag | RctType::BulletproofsPlus => RctPrunable::Clsag {
bulletproofs: {
if read_varint::<_, u64>(r)? != 1 {
Err(io::Error::other("n bulletproofs instead of one"))?;
}
(if rct_type == RctType::Clsag { Bulletproofs::read } else { Bulletproofs::read_plus })(
r,
)?
},
clsags: (0 .. inputs).map(|_| Clsag::read(ring_length, r)).collect::<Result<_, _>>()?,
pseudo_outs: read_raw_vec(read_point, inputs, r)?,
},
})
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
RctPrunable::Null => panic!("Serializing RctPrunable::Null for a signature"),
RctPrunable::AggregateMlsagBorromean { borromean, .. } |
RctPrunable::MlsagBorromean { borromean, .. } => {
borromean.iter().try_for_each(|rs| rs.write(w))
}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } |
RctPrunable::Clsag { bulletproofs, .. } => bulletproofs.signature_write(w),
}
}
}
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct RctSignatures {
pub base: RctBase,
pub prunable: RctPrunable,
}
impl RctSignatures {
/// RctType for a given RctSignatures struct.
pub fn rct_type(&self) -> RctType {
match &self.prunable {
RctPrunable::Null => RctType::Null,
RctPrunable::AggregateMlsagBorromean { .. } => RctType::MlsagAggregate,
RctPrunable::MlsagBorromean { .. } => RctType::MlsagIndividual,
// RctBase ensures there's at least one output, making the following inference
// guaranteed and the `expect` unreachable on any valid RctSignatures
RctPrunable::MlsagBulletproofs { .. } => {
if matches!(
self
.base
.encrypted_amounts
.first()
.expect("MLSAG with Bulletproofs didn't have any outputs"),
EncryptedAmount::Original { .. }
) {
RctType::Bulletproofs
} else {
RctType::BulletproofsCompactAmount
}
}
RctPrunable::Clsag { bulletproofs, .. } => {
if matches!(bulletproofs, Bulletproofs::Original { .. }) {
RctType::Clsag
} else {
RctType::BulletproofsPlus
}
}
}
}
pub(crate) fn fee_weight(protocol: Protocol, inputs: usize, outputs: usize, fee: u64) -> usize {
RctBase::fee_weight(outputs, fee) + RctPrunable::fee_weight(protocol, inputs, outputs)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
let rct_type = self.rct_type();
self.base.write(w, rct_type)?;
self.prunable.write(w, rct_type)
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(
ring_length: usize,
inputs: usize,
outputs: usize,
r: &mut R,
) -> io::Result<RctSignatures> {
let base = RctBase::read(inputs, outputs, r)?;
Ok(RctSignatures {
base: base.0,
prunable: RctPrunable::read(base.1, ring_length, inputs, outputs, r)?,
})
}
}


@@ -1,286 +0,0 @@
use std::{sync::Arc, io::Read, time::Duration};
use async_trait::async_trait;
use tokio::sync::Mutex;
use digest_auth::{WwwAuthenticateHeader, AuthContext};
use simple_request::{
hyper::{StatusCode, header::HeaderValue, Request},
Response, Client,
};
use crate::rpc::{RpcError, RpcConnection, Rpc};
const DEFAULT_TIMEOUT: Duration = Duration::from_secs(30);
#[derive(Clone, Debug)]
enum Authentication {
// If unauthenticated, use a single client
Unauthenticated(Client),
// If authenticated, use a single client which supports being locked and tracks its nonce
// This ensures that if a nonce is requested, another caller doesn't make a request invalidating
// it
Authenticated {
username: String,
password: String,
#[allow(clippy::type_complexity)]
connection: Arc<Mutex<(Option<(WwwAuthenticateHeader, u64)>, Client)>>,
},
}
/// An HTTP(S) transport for the RPC.
///
/// Requires tokio.
#[derive(Clone, Debug)]
pub struct HttpRpc {
authentication: Authentication,
url: String,
request_timeout: Duration,
}
impl HttpRpc {
fn digest_auth_challenge(
response: &Response,
) -> Result<Option<(WwwAuthenticateHeader, u64)>, RpcError> {
Ok(if let Some(header) = response.headers().get("www-authenticate") {
Some((
digest_auth::parse(header.to_str().map_err(|_| {
RpcError::InvalidNode("www-authenticate header wasn't a string".to_string())
})?)
.map_err(|_| RpcError::InvalidNode("invalid digest-auth response".to_string()))?,
0,
))
} else {
None
})
}
/// Create a new HTTP(S) RPC connection.
///
/// A daemon requiring authentication can be used by including the username and password in
/// the URL.
pub async fn new(url: String) -> Result<Rpc<HttpRpc>, RpcError> {
Self::with_custom_timeout(url, DEFAULT_TIMEOUT).await
}
/// Create a new HTTP(S) RPC connection with a custom timeout.
///
/// A daemon requiring authentication can be used by including the username and password in
/// the URL.
pub async fn with_custom_timeout(
mut url: String,
request_timeout: Duration,
) -> Result<Rpc<HttpRpc>, RpcError> {
let authentication = if url.contains('@') {
// Parse out the username and password
let url_clone = url;
let split_url = url_clone.split('@').collect::<Vec<_>>();
if split_url.len() != 2 {
Err(RpcError::ConnectionError("invalid amount of login specifications".to_string()))?;
}
let mut userpass = split_url[0];
url = split_url[1].to_string();
// If there was additionally a protocol string, restore that to the daemon URL
if userpass.contains("://") {
let split_userpass = userpass.split("://").collect::<Vec<_>>();
if split_userpass.len() != 2 {
Err(RpcError::ConnectionError("invalid amount of protocol specifications".to_string()))?;
}
url = split_userpass[0].to_string() + "://" + &url;
userpass = split_userpass[1];
}
let split_userpass = userpass.split(':').collect::<Vec<_>>();
if split_userpass.len() > 2 {
Err(RpcError::ConnectionError("invalid amount of passwords".to_string()))?;
}
let client = Client::without_connection_pool(&url)
.map_err(|_| RpcError::ConnectionError("invalid URL".to_string()))?;
// Obtain the initial challenge, which also somewhat validates this connection
let challenge = Self::digest_auth_challenge(
&client
.request(
Request::post(url.clone())
.body(vec![].into())
.map_err(|e| RpcError::ConnectionError(format!("couldn't make request: {e:?}")))?,
)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
)?;
Authentication::Authenticated {
username: split_userpass[0].to_string(),
password: (*split_userpass.get(1).unwrap_or(&"")).to_string(),
connection: Arc::new(Mutex::new((challenge, client))),
}
} else {
Authentication::Unauthenticated(Client::with_connection_pool())
};
Ok(Rpc(HttpRpc { authentication, url, request_timeout }))
}
}
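// Usage sketch (hedged; the URL and credentials are placeholders):
//
//   let rpc = HttpRpc::new("http://user:pass@node.example:18081".to_string()).await?;
//   let height = rpc.get_height().await?;
//
// The `user:pass@` segment is parsed out by `with_custom_timeout` and used for digest auth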
impl HttpRpc {
async fn inner_post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
let request_fn = |uri| {
Request::post(uri)
.body(body.clone().into())
.map_err(|e| RpcError::ConnectionError(format!("couldn't make request: {e:?}")))
};
async fn body_from_response(response: Response<'_>) -> Result<Vec<u8>, RpcError> {
/*
let length = usize::try_from(
response
.headers()
.get("content-length")
.ok_or(RpcError::InvalidNode("no content-length header"))?
.to_str()
.map_err(|_| RpcError::InvalidNode("non-ascii content-length value"))?
.parse::<u32>()
.map_err(|_| RpcError::InvalidNode("non-u32 content-length value"))?,
)
.unwrap();
// Only pre-allocate 1 MB so a malicious node which claims a content-length of 1 GB actually
// has to send 1 GB of data to cause a 1 GB allocation
let mut res = Vec::with_capacity(length.max(1024 * 1024));
let mut body = response.into_body();
while res.len() < length {
let Some(data) = body.data().await else { break };
res.extend(data.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?.as_ref());
}
*/
let mut res = Vec::with_capacity(128);
response
.body()
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?
.read_to_end(&mut res)
.unwrap();
Ok(res)
}
for attempt in 0 .. 2 {
return Ok(match &self.authentication {
Authentication::Unauthenticated(client) => {
body_from_response(
client
.request(request_fn(self.url.clone() + "/" + route)?)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
)
.await?
}
Authentication::Authenticated { username, password, connection } => {
let mut connection_lock = connection.lock().await;
let mut request = request_fn("/".to_string() + route)?;
// If we don't have an auth challenge, obtain one
if connection_lock.0.is_none() {
connection_lock.0 = Self::digest_auth_challenge(
&connection_lock
.1
.request(request)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?,
)?;
request = request_fn("/".to_string() + route)?;
}
// Insert the challenge response, if we have a challenge
if let Some((challenge, cnonce)) = connection_lock.0.as_mut() {
// Update the cnonce
// Overflow isn't a concern as this is a u64
*cnonce += 1;
let mut context = AuthContext::new_post::<_, _, _, &[u8]>(
username,
password,
"/".to_string() + route,
None,
);
context.set_custom_cnonce(hex::encode(cnonce.to_le_bytes()));
request.headers_mut().insert(
"Authorization",
HeaderValue::from_str(
&challenge
.respond(&context)
.map_err(|_| {
RpcError::InvalidNode("couldn't respond to digest-auth challenge".to_string())
})?
.to_header_string(),
)
.unwrap(),
);
}
let response = connection_lock
.1
.request(request)
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")));
let (error, is_stale) = match &response {
Err(e) => (Some(e.clone()), false),
Ok(response) => (
None,
if response.status() == StatusCode::UNAUTHORIZED {
if let Some(header) = response.headers().get("www-authenticate") {
header
.to_str()
.map_err(|_| {
RpcError::InvalidNode("www-authenticate header wasn't a string".to_string())
})?
.contains("stale")
} else {
false
}
} else {
false
},
),
};
// If the connection entered an error state, drop the cached challenge as challenges are
// per-connection
// We don't need to create a new connection as simple-request will do so for us
if error.is_some() || is_stale {
connection_lock.0 = None;
// If we're not already on our second attempt, move to the next loop iteration
// (retrying all of this once)
if attempt == 0 {
continue;
}
if let Some(e) = error {
Err(e)?
} else {
debug_assert!(is_stale);
Err(RpcError::InvalidNode(
"node claimed fresh connection had stale authentication".to_string(),
))?
}
} else {
body_from_response(response.unwrap()).await?
}
}
});
}
unreachable!()
}
}
#[async_trait]
impl RpcConnection for HttpRpc {
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError> {
tokio::time::timeout(self.request_timeout, self.inner_post(route, body))
.await
.map_err(|e| RpcError::ConnectionError(format!("{e:?}")))?
}
}


@@ -1,761 +0,0 @@
use core::fmt::Debug;
#[cfg(not(feature = "std"))]
use alloc::boxed::Box;
use std_shims::{
vec::Vec,
io,
string::{String, ToString},
};
use async_trait::async_trait;
use curve25519_dalek::edwards::EdwardsPoint;
use monero_generators::decompress_point;
use serde::{Serialize, Deserialize, de::DeserializeOwned};
use serde_json::{Value, json};
use crate::{
Protocol,
serialize::*,
transaction::{Input, Timelock, Transaction},
block::Block,
wallet::{FeePriority, Fee},
};
#[cfg(feature = "http-rpc")]
mod http;
#[cfg(feature = "http-rpc")]
pub use http::*;
// Number of blocks the fee estimate will be valid for
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L121
const GRACE_BLOCKS_FOR_FEE_ESTIMATE: u64 = 10;
#[derive(Deserialize, Debug)]
pub struct EmptyResponse {}
#[derive(Deserialize, Debug)]
pub struct JsonRpcResponse<T> {
result: T,
}
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
pruned_as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
#[derive(Deserialize, Debug)]
pub struct OutputResponse {
pub height: usize,
pub unlocked: bool,
key: String,
mask: String,
txid: String,
}
#[derive(Clone, PartialEq, Eq, Debug)]
#[cfg_attr(feature = "std", derive(thiserror::Error))]
pub enum RpcError {
#[cfg_attr(feature = "std", error("internal error ({0})"))]
InternalError(&'static str),
#[cfg_attr(feature = "std", error("connection error ({0})"))]
ConnectionError(String),
#[cfg_attr(feature = "std", error("invalid node ({0})"))]
InvalidNode(String),
#[cfg_attr(feature = "std", error("unsupported protocol version ({0})"))]
UnsupportedProtocol(usize),
#[cfg_attr(feature = "std", error("transactions not found"))]
TransactionsNotFound(Vec<[u8; 32]>),
#[cfg_attr(feature = "std", error("invalid point ({0})"))]
InvalidPoint(String),
#[cfg_attr(feature = "std", error("pruned transaction"))]
PrunedTransaction,
#[cfg_attr(feature = "std", error("invalid transaction ({0:?})"))]
InvalidTransaction([u8; 32]),
#[cfg_attr(feature = "std", error("unexpected fee response"))]
InvalidFee,
#[cfg_attr(feature = "std", error("invalid priority"))]
InvalidPriority,
}
fn rpc_hex(value: &str) -> Result<Vec<u8>, RpcError> {
hex::decode(value).map_err(|_| RpcError::InvalidNode("expected hex wasn't hex".to_string()))
}
fn hash_hex(hash: &str) -> Result<[u8; 32], RpcError> {
rpc_hex(hash)?.try_into().map_err(|_| RpcError::InvalidNode("hash wasn't 32-bytes".to_string()))
}
fn rpc_point(point: &str) -> Result<EdwardsPoint, RpcError> {
decompress_point(
rpc_hex(point)?.try_into().map_err(|_| RpcError::InvalidPoint(point.to_string()))?,
)
.ok_or_else(|| RpcError::InvalidPoint(point.to_string()))
}
// Read an EPEE VarInt, distinct from the VarInts used throughout the rest of the protocol
fn read_epee_vi<R: io::Read>(reader: &mut R) -> io::Result<u64> {
let vi_start = read_byte(reader)?;
let len = match vi_start & 0b11 {
0 => 1,
1 => 2,
2 => 4,
3 => 8,
_ => unreachable!(),
};
let mut vi = u64::from(vi_start >> 2);
for i in 1 .. len {
vi |= u64::from(read_byte(reader)?) << (((i - 1) * 8) + 6);
}
Ok(vi)
}
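// Worked example: the 2 LSBs of the first byte select the total width. 0x04 has LSBs 0b00
// (1 byte total), so the value is 0x04 >> 2 = 1. [0x01, 0x01] has LSBs 0b01 (2 bytes total),
// so the value is (0x01 >> 2) | (0x01 << 6) = 64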
#[async_trait]
pub trait RpcConnection: Clone + Debug {
/// Perform a POST request to the specified route with the specified body.
///
/// The implementor is left to handle anything such as authentication.
async fn post(&self, route: &str, body: Vec<u8>) -> Result<Vec<u8>, RpcError>;
}
// TODO: Make these provided methods on RpcConnection?
#[derive(Clone, Debug)]
pub struct Rpc<R: RpcConnection>(R);
impl<R: RpcConnection> Rpc<R> {
/// Perform an RPC call to the specified route with the provided parameters.
///
/// This is NOT a JSON-RPC call. They use a route of "json_rpc" and are available via
/// `json_rpc_call`.
pub async fn rpc_call<Params: Serialize + Debug, Response: DeserializeOwned + Debug>(
&self,
route: &str,
params: Option<Params>,
) -> Result<Response, RpcError> {
let res = self
.0
.post(
route,
if let Some(params) = params {
serde_json::to_string(&params).unwrap().into_bytes()
} else {
vec![]
},
)
.await?;
let res_str = std_shims::str::from_utf8(&res)
.map_err(|_| RpcError::InvalidNode("response wasn't utf-8".to_string()))?;
serde_json::from_str(res_str)
.map_err(|_| RpcError::InvalidNode(format!("response wasn't json: {res_str}")))
}
/// Perform a JSON-RPC call with the specified method and the provided parameters.
pub async fn json_rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: Option<Value>,
) -> Result<Response, RpcError> {
let mut req = json!({ "method": method });
if let Some(params) = params {
req.as_object_mut().unwrap().insert("params".into(), params);
}
Ok(self.rpc_call::<_, JsonRpcResponse<Response>>("json_rpc", Some(req)).await?.result)
}
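// e.g. (hedged sketch; `get_info` is a standard monerod method):
//
//   let info: serde_json::Value = rpc.json_rpc_call("get_info", None).await?;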
/// Perform a binary call to the specified route with the provided parameters.
pub async fn bin_call(&self, route: &str, params: Vec<u8>) -> Result<Vec<u8>, RpcError> {
self.0.post(route, params).await
}
/// Get the active blockchain protocol version.
pub async fn get_protocol(&self) -> Result<Protocol, RpcError> {
#[derive(Deserialize, Debug)]
struct ProtocolResponse {
major_version: usize,
}
#[derive(Deserialize, Debug)]
struct LastHeaderResponse {
block_header: ProtocolResponse,
}
Ok(
match self
.json_rpc_call::<LastHeaderResponse>("get_last_block_header", None)
.await?
.block_header
.major_version
{
13 | 14 => Protocol::v14,
15 | 16 => Protocol::v16,
protocol => Err(RpcError::UnsupportedProtocol(protocol))?,
},
)
}
pub async fn get_height(&self) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct HeightResponse {
height: usize,
}
Ok(self.rpc_call::<Option<()>, HeightResponse>("get_height", None).await?.height)
}
pub async fn get_transactions(&self, hashes: &[[u8; 32]]) -> Result<Vec<Transaction>, RpcError> {
if hashes.is_empty() {
return Ok(vec![]);
}
let mut hashes_hex = hashes.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = Vec::with_capacity(hashes.len());
while !hashes_hex.is_empty() {
// Monero errors if more than 100 transactions are requested, unless using a non-restricted RPC
const TXS_PER_REQUEST: usize = 100;
let this_count = TXS_PER_REQUEST.min(hashes_hex.len());
let txs: TransactionsResponse = self
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. this_count).collect::<Vec<_>>(),
})),
)
.await?;
if !txs.missed_tx.is_empty() {
Err(RpcError::TransactionsNotFound(
txs.missed_tx.iter().map(|hash| hash_hex(hash)).collect::<Result<_, _>>()?,
))?;
}
all_txs.extend(txs.txs);
}
all_txs
.iter()
.enumerate()
.map(|(i, res)| {
let tx = Transaction::read::<&[u8]>(
&mut rpc_hex(if !res.as_hex.is_empty() { &res.as_hex } else { &res.pruned_as_hex })?
.as_ref(),
)
.map_err(|_| match hash_hex(&res.tx_hash) {
Ok(hash) => RpcError::InvalidTransaction(hash),
Err(err) => err,
})?;
// https://github.com/monero-project/monero/issues/8311
if res.as_hex.is_empty() {
match tx.prefix.inputs.first() {
Some(Input::Gen { .. }) => (),
_ => Err(RpcError::PrunedTransaction)?,
}
}
// This does run a few keccak256 hashes, which is pointless if the node is trusted
// In exchange, this provides resilience against invalid/malicious nodes
if tx.hash() != hashes[i] {
Err(RpcError::InvalidNode(
"replied with transaction wasn't the requested transaction".to_string(),
))?;
}
Ok(tx)
})
.collect()
}
pub async fn get_transaction(&self, tx: [u8; 32]) -> Result<Transaction, RpcError> {
self.get_transactions(&[tx]).await.map(|mut txs| txs.swap_remove(0))
}
/// Get the hash of a block from the node by the block's number.
/// This function does not verify the returned block hash is actually for the number in question.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
#[derive(Deserialize, Debug)]
struct BlockHeaderResponse {
hash: String,
}
#[derive(Deserialize, Debug)]
struct BlockHeaderByHeightResponse {
block_header: BlockHeaderResponse,
}
let header: BlockHeaderByHeightResponse =
self.json_rpc_call("get_block_header_by_height", Some(json!({ "height": number }))).await?;
hash_hex(&header.block_header.hash)
}
/// Get a block from the node by its hash.
/// This function does not verify the returned block actually has the hash in question.
pub async fn get_block(&self, hash: [u8; 32]) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block".to_string()))?;
if block.hash() != hash {
Err(RpcError::InvalidNode("different block than requested (hash)".to_string()))?;
}
Ok(block)
}
pub async fn get_block_by_number(&self, number: usize) -> Result<Block, RpcError> {
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse =
self.json_rpc_call("get_block", Some(json!({ "height": number }))).await?;
let block = Block::read::<&[u8]>(&mut rpc_hex(&res.blob)?.as_ref())
.map_err(|_| RpcError::InvalidNode("invalid block".to_string()))?;
// Make sure this is actually the block for this number
match block.miner_tx.prefix.inputs.first() {
Some(Input::Gen(actual)) => {
if usize::try_from(*actual).unwrap() == number {
Ok(block)
} else {
Err(RpcError::InvalidNode("different block than requested (number)".to_string()))
}
}
_ => Err(RpcError::InvalidNode(
"block's miner_tx didn't have an input of kind Input::Gen".to_string(),
)),
}
}
pub async fn get_block_transactions(&self, hash: [u8; 32]) -> Result<Vec<Transaction>, RpcError> {
let block = self.get_block(hash).await?;
let mut res = vec![block.miner_tx];
res.extend(self.get_transactions(&block.txs).await?);
Ok(res)
}
pub async fn get_block_transactions_by_number(
&self,
number: usize,
) -> Result<Vec<Transaction>, RpcError> {
self.get_block_transactions(self.get_block_hash(number).await?).await
}
/// Get the output indexes of the specified transaction.
pub async fn get_o_indexes(&self, hash: [u8; 32]) -> Result<Vec<u64>, RpcError> {
/*
TODO: Use these when a suitable epee serde lib exists
#[derive(Serialize, Debug)]
struct Request {
txid: [u8; 32],
}
#[derive(Deserialize, Debug)]
struct OIndexes {
o_indexes: Vec<u64>,
}
*/
// Given the immaturity of Rust epee libraries, this is a homegrown one which is only validated
// to work against this specific function
// Header for EPEE, an 8-byte magic and a version
const EPEE_HEADER: &[u8] = b"\x01\x11\x01\x01\x01\x01\x02\x01\x01";
let mut request = EPEE_HEADER.to_vec();
// Number of fields (shifted over 2 bits as the 2 LSBs are reserved for metadata)
request.push(1 << 2);
// Length of field name
request.push(4);
// Field name
request.extend(b"txid");
// Type of field
request.push(10);
// Length of string, since this byte array is technically a string
request.push(32 << 2);
// The "string"
request.extend(hash);
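// For a 32-byte hash, the assembled request is thus:
//   EPEE_HEADER (9 bytes) || 0x04 || 0x04 || "txid" || 0x0a || 0x80 || hash
// where 0x80 is the string length (32) shifted into EPEE's VarInt encoding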
let indexes_buf = self.bin_call("get_o_indexes.bin", request).await?;
let mut indexes: &[u8] = indexes_buf.as_ref();
(|| {
let mut res = None;
let mut is_okay = false;
if read_bytes::<_, { EPEE_HEADER.len() }>(&mut indexes)? != EPEE_HEADER {
Err(io::Error::other("invalid header"))?;
}
let read_object = |reader: &mut &[u8]| -> io::Result<Vec<u64>> {
let fields = read_byte(reader)? >> 2;
for _ in 0 .. fields {
let name_len = read_byte(reader)?;
let name = read_raw_vec(read_byte, name_len.into(), reader)?;
let type_with_array_flag = read_byte(reader)?;
let kind = type_with_array_flag & (!0x80);
let iters = if type_with_array_flag != kind { read_epee_vi(reader)? } else { 1 };
if (&name == b"o_indexes") && (kind != 5) {
Err(io::Error::other("o_indexes weren't u64s"))?;
}
let f = match kind {
// i64
1 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// i32
2 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// i16
3 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// i8
4 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// u64
5 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// u32
6 => |reader: &mut &[u8]| read_raw_vec(read_byte, 4, reader),
// u16
7 => |reader: &mut &[u8]| read_raw_vec(read_byte, 2, reader),
// u8
8 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// double
9 => |reader: &mut &[u8]| read_raw_vec(read_byte, 8, reader),
// string, or any collection of bytes
10 => |reader: &mut &[u8]| {
let len = read_epee_vi(reader)?;
read_raw_vec(
read_byte,
len.try_into().map_err(|_| io::Error::other("u64 length exceeded usize"))?,
reader,
)
},
// bool
11 => |reader: &mut &[u8]| read_raw_vec(read_byte, 1, reader),
// object, errors here as it shouldn't be used on this call
12 => {
|_: &mut &[u8]| Err(io::Error::other("node used object in reply to get_o_indexes"))
}
// array, so far unused
13 => |_: &mut &[u8]| Err(io::Error::other("node used the unused array type")),
_ => |_: &mut &[u8]| Err(io::Error::other("node used an invalid type")),
};
let mut bytes_res = vec![];
for _ in 0 .. iters {
bytes_res.push(f(reader)?);
}
let mut actual_res = Vec::with_capacity(bytes_res.len());
match name.as_slice() {
b"o_indexes" => {
for o_index in bytes_res {
actual_res.push(u64::from_le_bytes(
o_index
.try_into()
.map_err(|_| io::Error::other("node didn't provide 8 bytes for a u64"))?,
));
}
res = Some(actual_res);
}
b"status" => {
if bytes_res
.first()
.ok_or_else(|| io::Error::other("status wasn't a string"))?
.as_slice() !=
b"OK"
{
// TODO: Better handle non-OK responses
Err(io::Error::other("response wasn't OK"))?;
}
is_okay = true;
}
_ => continue,
}
if is_okay && res.is_some() {
break;
}
}
// Didn't return a response with a status
// (if the status wasn't okay, we would've already errored)
if !is_okay {
Err(io::Error::other("response didn't contain a status"))?;
}
// If the Vec was empty, it would've been omitted, hence the unwrap_or
// TODO: Test against a 0-output TX, such as the ones found in block 202612
Ok(res.unwrap_or(vec![]))
};
read_object(&mut indexes)
})()
.map_err(|_| RpcError::InvalidNode("invalid binary response".to_string()))
}
/// Get the output distribution from the specified start height to the specified end height
/// (both inclusive).
pub async fn get_output_distribution(
&self,
from: usize,
to: usize,
) -> Result<Vec<u64>, RpcError> {
#[derive(Deserialize, Debug)]
struct Distribution {
distribution: Vec<u64>,
}
#[derive(Deserialize, Debug)]
struct Distributions {
distributions: Vec<Distribution>,
}
let mut distributions: Distributions = self
.json_rpc_call(
"get_output_distribution",
Some(json!({
"binary": false,
"amounts": [0],
"cumulative": true,
"from_height": from,
"to_height": to,
})),
)
.await?;
Ok(distributions.distributions.swap_remove(0).distribution)
}
/// Get the specified outputs from the RingCT (zero-amount) pool
pub async fn get_outs(&self, indexes: &[u64]) -> Result<Vec<OutputResponse>, RpcError> {
#[derive(Deserialize, Debug)]
struct OutsResponse {
status: String,
outs: Vec<OutputResponse>,
}
let res: OutsResponse = self
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": 0,
"index": o
})).collect::<Vec<_>>()
})),
)
.await?;
if res.status != "OK" {
Err(RpcError::InvalidNode("bad response to get_outs".to_string()))?;
}
Ok(res.outs)
}
/// Get the specified outputs from the RingCT (zero-amount) pool, but only return them if their
/// timelock has been satisfied.
///
/// The timelock being satisfied is distinct from being free of the 10-block lock applied to all
/// Monero transactions.
pub async fn get_unlocked_outputs(
&self,
indexes: &[u64],
height: usize,
fingerprintable_canonical: bool,
) -> Result<Vec<Option<[EdwardsPoint; 2]>>, RpcError> {
let outs: Vec<OutputResponse> = self.get_outs(indexes).await?;
// Only need to fetch txs to do canonical check on timelock
let txs = if fingerprintable_canonical {
self
.get_transactions(
&outs.iter().map(|out| hash_hex(&out.txid)).collect::<Result<Vec<_>, _>>()?,
)
.await?
} else {
Vec::new()
};
// TODO: https://github.com/serai-dex/serai/issues/104
outs
.iter()
.enumerate()
.map(|(i, out)| {
// Allow keys to be invalid, though if they are, return None to trigger selection of a new
// decoy
// Only valid keys can be used in CLSAG proofs, hence the need for re-selection, yet
// invalid keys may honestly exist on the blockchain
// Only a recent hard fork started checking that output keys are valid points
let Some(key) = decompress_point(
rpc_hex(&out.key)?
.try_into()
.map_err(|_| RpcError::InvalidNode("non-32-byte point".to_string()))?,
) else {
return Ok(None);
};
Ok(Some([key, rpc_point(&out.mask)?]).filter(|_| {
if fingerprintable_canonical {
Timelock::Block(height) >= txs[i].prefix.timelock
} else {
out.unlocked
}
}))
})
.collect()
}
async fn get_fee_v14(&self, priority: FeePriority) -> Result<Fee, RpcError> {
#[derive(Deserialize, Debug)]
struct FeeResponseV14 {
status: String,
fee: u64,
quantization_mask: u64,
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7569-L7584
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7660-L7661
let priority_idx =
usize::try_from(if priority.fee_priority() == 0 { 1 } else { priority.fee_priority() - 1 })
.map_err(|_| RpcError::InvalidPriority)?;
let multipliers = [1, 5, 25, 1000];
if priority_idx >= multipliers.len() {
// Though not an RPC error, it seems sensible to treat it as such
Err(RpcError::InvalidPriority)?;
}
let fee_multiplier = multipliers[priority_idx];
let res: FeeResponseV14 = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
if res.status != "OK" {
Err(RpcError::InvalidFee)?;
}
Ok(Fee { per_weight: res.fee * fee_multiplier, mask: res.quantization_mask })
}
/// Get the currently estimated fee from the node.
///
/// This may be manipulated to unsafe levels and MUST be sanity checked.
// TODO: Take a sanity check argument
pub async fn get_fee(&self, protocol: Protocol, priority: FeePriority) -> Result<Fee, RpcError> {
// TODO: Implement wallet2's adjust_priority which by default automatically uses a lower
// priority than provided depending on the backlog in the pool
if protocol.v16_fee() {
#[derive(Deserialize, Debug)]
struct FeeResponse {
status: String,
fees: Vec<u64>,
quantization_mask: u64,
}
let res: FeeResponse = self
.json_rpc_call(
"get_fee_estimate",
Some(json!({ "grace_blocks": GRACE_BLOCKS_FOR_FEE_ESTIMATE })),
)
.await?;
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/wallet/wallet2.cpp#L7615-L7620
let priority_idx = usize::try_from(if priority.fee_priority() >= 4 {
3
} else {
priority.fee_priority().saturating_sub(1)
})
.map_err(|_| RpcError::InvalidPriority)?;
if res.status != "OK" {
Err(RpcError::InvalidFee)
} else if priority_idx >= res.fees.len() {
Err(RpcError::InvalidPriority)
} else {
Ok(Fee { per_weight: res.fees[priority_idx], mask: res.quantization_mask })
}
} else {
self.get_fee_v14(priority).await
}
}
pub async fn publish_transaction(&self, tx: &Transaction) -> Result<(), RpcError> {
#[allow(dead_code)]
#[derive(Deserialize, Debug)]
struct SendRawResponse {
status: String,
double_spend: bool,
fee_too_low: bool,
invalid_input: bool,
invalid_output: bool,
low_mixin: bool,
not_relayed: bool,
overspend: bool,
too_big: bool,
too_few_outputs: bool,
reason: String,
}
let res: SendRawResponse = self
.rpc_call("send_raw_transaction", Some(json!({ "tx_as_hex": hex::encode(tx.serialize()) })))
.await?;
if res.status != "OK" {
Err(RpcError::InvalidTransaction(tx.hash()))?;
}
Ok(())
}
// TODO: Take &Address, not &str?
pub async fn generate_blocks(
&self,
address: &str,
block_count: usize,
) -> Result<(Vec<[u8; 32]>, usize), RpcError> {
#[derive(Debug, Deserialize)]
struct BlocksResponse {
blocks: Vec<String>,
height: usize,
}
let res = self
.json_rpc_call::<BlocksResponse>(
"generateblocks",
Some(json!({
"wallet_address": address,
"amount_of_blocks": block_count
})),
)
.await?;
let mut blocks = Vec::with_capacity(res.blocks.len());
for block in res.blocks {
blocks.push(hash_hex(&block)?);
}
Ok((blocks, res.height))
}
}

View File

@@ -1,172 +0,0 @@
use core::fmt::Debug;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use monero_generators::decompress_point;
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
mod sealed {
pub trait VarInt: TryInto<u64> + TryFrom<u64> + Copy {
const BITS: usize;
}
impl VarInt for u8 {
const BITS: usize = 8;
}
impl VarInt for u32 {
const BITS: usize = 32;
}
impl VarInt for u64 {
const BITS: usize = 64;
}
impl VarInt for usize {
const BITS: usize = core::mem::size_of::<usize>() * 8;
}
}
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn varint_len<U: sealed::VarInt>(varint: U) -> usize {
let varint_u64: u64 = varint.try_into().map_err(|_| "varint exceeded u64").unwrap();
((usize::try_from(u64::BITS - varint_u64.leading_zeros()).unwrap().saturating_sub(1)) / 7) + 1
}
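// e.g. varint_len(0) == 1, varint_len(127) == 1, varint_len(128) == 2: one byte per 7 bits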
pub(crate) fn write_byte<W: Write>(byte: &u8, w: &mut W) -> io::Result<()> {
w.write_all(&[*byte])
}
// This will panic if the VarInt exceeds u64::MAX
pub(crate) fn write_varint<W: Write, U: sealed::VarInt>(varint: &U, w: &mut W) -> io::Result<()> {
let mut varint: u64 = (*varint).try_into().map_err(|_| "varint exceeded u64").unwrap();
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
write_byte(&b, w)?;
varint != 0
} {}
Ok(())
}
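// e.g. 300 (0b1_0010_1100) encodes to [0xAC, 0x02]: the low 7 bits with the continuation bit
// set, then the remaining bits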
pub(crate) fn write_scalar<W: Write>(scalar: &Scalar, w: &mut W) -> io::Result<()> {
w.write_all(&scalar.to_bytes())
}
pub(crate) fn write_point<W: Write>(point: &EdwardsPoint, w: &mut W) -> io::Result<()> {
w.write_all(&point.compress().to_bytes())
}
pub(crate) fn write_raw_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
for value in values {
f(value, w)?;
}
Ok(())
}
pub(crate) fn write_vec<T, W: Write, F: Fn(&T, &mut W) -> io::Result<()>>(
f: F,
values: &[T],
w: &mut W,
) -> io::Result<()> {
write_varint(&values.len(), w)?;
write_raw_vec(f, values, w)
}
pub(crate) fn read_bytes<R: Read, const N: usize>(r: &mut R) -> io::Result<[u8; N]> {
let mut res = [0; N];
r.read_exact(&mut res)?;
Ok(res)
}
pub(crate) fn read_byte<R: Read>(r: &mut R) -> io::Result<u8> {
Ok(read_bytes::<_, 1>(r)?[0])
}
pub(crate) fn read_u16<R: Read>(r: &mut R) -> io::Result<u16> {
read_bytes(r).map(u16::from_le_bytes)
}
pub(crate) fn read_u32<R: Read>(r: &mut R) -> io::Result<u32> {
read_bytes(r).map(u32::from_le_bytes)
}
pub(crate) fn read_u64<R: Read>(r: &mut R) -> io::Result<u64> {
read_bytes(r).map(u64::from_le_bytes)
}
pub(crate) fn read_varint<R: Read, U: sealed::VarInt>(r: &mut R) -> io::Result<U> {
let mut bits = 0;
let mut res = 0;
while {
let b = read_byte(r)?;
if (bits != 0) && (b == 0) {
Err(io::Error::other("non-canonical varint"))?;
}
if ((bits + 7) >= U::BITS) && (b >= (1 << (U::BITS - bits))) {
Err(io::Error::other("varint overflow"))?;
}
res += u64::from(b & (!VARINT_CONTINUATION_MASK)) << bits;
bits += 7;
b & VARINT_CONTINUATION_MASK == VARINT_CONTINUATION_MASK
} {}
res.try_into().map_err(|_| io::Error::other("VarInt does not fit into integer type"))
}
// All scalar fields supported by monero-serai are checked to be canonical for valid transactions
// While from_bytes_mod_order would be more flexible, it's not currently needed, and accepting
// unreduced scalars would be inaccurate for the transactions handled here. There's also further
// edge cases as noted by
// https://github.com/monero-project/monero/issues/8438, where some scalars had an archaic
// reduction applied
pub(crate) fn read_scalar<R: Read>(r: &mut R) -> io::Result<Scalar> {
Option::from(Scalar::from_canonical_bytes(read_bytes(r)?))
.ok_or_else(|| io::Error::other("unreduced scalar"))
}
pub(crate) fn read_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
let bytes = read_bytes(r)?;
decompress_point(bytes).ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_torsion_free_point<R: Read>(r: &mut R) -> io::Result<EdwardsPoint> {
read_point(r)
.ok()
.filter(EdwardsPoint::is_torsion_free)
.ok_or_else(|| io::Error::other("invalid point"))
}
pub(crate) fn read_raw_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
len: usize,
r: &mut R,
) -> io::Result<Vec<T>> {
let mut res = vec![];
for _ in 0 .. len {
res.push(f(r)?);
}
Ok(res)
}
pub(crate) fn read_array<R: Read, T: Debug, F: Fn(&mut R) -> io::Result<T>, const N: usize>(
f: F,
r: &mut R,
) -> io::Result<[T; N]> {
read_raw_vec(f, N, r).map(|vec| vec.try_into().unwrap())
}
pub(crate) fn read_vec<R: Read, T, F: Fn(&mut R) -> io::Result<T>>(
f: F,
r: &mut R,
) -> io::Result<Vec<T>> {
read_raw_vec(f, read_varint(r)?, r)
}


@@ -1,176 +0,0 @@
use hex_literal::hex;
use rand_core::{RngCore, OsRng};
use curve25519_dalek::constants::ED25519_BASEPOINT_TABLE;
use monero_generators::decompress_point;
use crate::{
random_scalar,
wallet::address::{Network, AddressType, AddressMeta, MoneroAddress},
};
const SPEND: [u8; 32] = hex!("f8631661f6ab4e6fda310c797330d86e23a682f20d5bc8cc27b18051191f16d7");
const VIEW: [u8; 32] = hex!("4a1535063ad1fee2dabbf909d4fd9a873e29541b401f0944754e17c9a41820ce");
const STANDARD: &str =
"4B33mFPMq6mKi7Eiyd5XuyKRVMGVZz1Rqb9ZTyGApXW5d1aT7UBDZ89ewmnWFkzJ5wPd2SFbn313vCT8a4E2Qf4KQH4pNey";
const PAYMENT_ID: [u8; 8] = hex!("b8963a57855cf73f");
const INTEGRATED: &str =
"4Ljin4CrSNHKi7Eiyd5XuyKRVMGVZz1Rqb9ZTyGApXW5d1aT7UBDZ89ewmnWFkzJ5wPd2SFbn313vCT8a4E2Qf4KbaTH6Mn\
pXSn88oBX35";
const SUB_SPEND: [u8; 32] =
hex!("fe358188b528335ad1cfdc24a22a23988d742c882b6f19a602892eaab3c1b62b");
const SUB_VIEW: [u8; 32] = hex!("9bc2b464de90d058468522098d5610c5019c45fd1711a9517db1eea7794f5470");
const SUBADDRESS: &str =
"8C5zHM5ud8nGC4hC2ULiBLSWx9infi8JUUmWEat4fcTf8J4H38iWYVdFmPCA9UmfLTZxD43RsyKnGEdZkoGij6csDeUnbEB";
const FEATURED_JSON: &str = include_str!("vectors/featured_addresses.json");
#[test]
fn standard_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, STANDARD).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Standard);
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), STANDARD);
}
#[test]
fn integrated_address() {
let addr = MoneroAddress::from_str(Network::Mainnet, INTEGRATED).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Integrated(PAYMENT_ID));
assert!(!addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), Some(PAYMENT_ID));
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SPEND);
assert_eq!(addr.view.compress().to_bytes(), VIEW);
assert_eq!(addr.to_string(), INTEGRATED);
}
#[test]
fn subaddress() {
let addr = MoneroAddress::from_str(Network::Mainnet, SUBADDRESS).unwrap();
assert_eq!(addr.meta.network, Network::Mainnet);
assert_eq!(addr.meta.kind, AddressType::Subaddress);
assert!(addr.meta.kind.is_subaddress());
assert_eq!(addr.meta.kind.payment_id(), None);
assert!(!addr.meta.kind.is_guaranteed());
assert_eq!(addr.spend.compress().to_bytes(), SUB_SPEND);
assert_eq!(addr.view.compress().to_bytes(), SUB_VIEW);
assert_eq!(addr.to_string(), SUBADDRESS);
}
#[test]
fn featured() {
for (network, first) in
[(Network::Mainnet, 'C'), (Network::Testnet, 'K'), (Network::Stagenet, 'F')]
{
for _ in 0 .. 100 {
let spend = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
let view = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
for features in 0 .. (1 << 3) {
const SUBADDRESS_FEATURE_BIT: u8 = 1;
const INTEGRATED_FEATURE_BIT: u8 = 1 << 1;
const GUARANTEED_FEATURE_BIT: u8 = 1 << 2;
let subaddress = (features & SUBADDRESS_FEATURE_BIT) == SUBADDRESS_FEATURE_BIT;
let mut payment_id = [0; 8];
OsRng.fill_bytes(&mut payment_id);
let payment_id = Some(payment_id)
.filter(|_| (features & INTEGRATED_FEATURE_BIT) == INTEGRATED_FEATURE_BIT);
let guaranteed = (features & GUARANTEED_FEATURE_BIT) == GUARANTEED_FEATURE_BIT;
let kind = AddressType::Featured { subaddress, payment_id, guaranteed };
let meta = AddressMeta::new(network, kind);
let addr = MoneroAddress::new(meta, spend, view);
assert_eq!(addr.to_string().chars().next().unwrap(), first);
assert_eq!(MoneroAddress::from_str(network, &addr.to_string()).unwrap(), addr);
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.is_subaddress(), subaddress);
assert_eq!(addr.payment_id(), payment_id);
assert_eq!(addr.is_guaranteed(), guaranteed);
}
}
}
}
#[test]
fn featured_vectors() {
#[derive(serde::Deserialize)]
struct Vector {
address: String,
network: String,
spend: String,
view: String,
subaddress: bool,
integrated: bool,
payment_id: Option<[u8; 8]>,
guaranteed: bool,
}
let vectors = serde_json::from_str::<Vec<Vector>>(FEATURED_JSON).unwrap();
for vector in vectors {
let first = vector.address.chars().next().unwrap();
let network = match vector.network.as_str() {
"Mainnet" => {
assert_eq!(first, 'C');
Network::Mainnet
}
"Testnet" => {
assert_eq!(first, 'K');
Network::Testnet
}
"Stagenet" => {
assert_eq!(first, 'F');
Network::Stagenet
}
_ => panic!("Unknown network"),
};
let spend = decompress_point(hex::decode(vector.spend).unwrap().try_into().unwrap()).unwrap();
let view = decompress_point(hex::decode(vector.view).unwrap().try_into().unwrap()).unwrap();
let addr = MoneroAddress::from_str(network, &vector.address).unwrap();
assert_eq!(addr.spend, spend);
assert_eq!(addr.view, view);
assert_eq!(addr.is_subaddress(), vector.subaddress);
assert_eq!(vector.integrated, vector.payment_id.is_some());
assert_eq!(addr.payment_id(), vector.payment_id);
assert_eq!(addr.is_guaranteed(), vector.guaranteed);
assert_eq!(
MoneroAddress::new(
AddressMeta::new(
network,
AddressType::Featured {
subaddress: vector.subaddress,
payment_id: vector.payment_id,
guaranteed: vector.guaranteed
}
),
spend,
view
)
.to_string(),
vector.address
);
}
}

View File

@@ -1,95 +0,0 @@
use hex_literal::hex;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use monero_generators::decompress_point;
use multiexp::BatchVerifier;
use crate::{
Commitment, random_scalar,
ringct::bulletproofs::{Bulletproofs, original::OriginalStruct},
};
mod plus;
#[test]
fn bulletproofs_vector() {
let scalar = |scalar| Scalar::from_canonical_bytes(scalar).unwrap();
let point = |point| decompress_point(point).unwrap();
// Generated from Monero
assert!(Bulletproofs::Original(OriginalStruct {
A: point(hex!("ef32c0b9551b804decdcb107eb22aa715b7ce259bf3c5cac20e24dfa6b28ac71")),
S: point(hex!("e1285960861783574ee2b689ae53622834eb0b035d6943103f960cd23e063fa0")),
T1: point(hex!("4ea07735f184ba159d0e0eb662bac8cde3eb7d39f31e567b0fbda3aa23fe5620")),
T2: point(hex!("b8390aa4b60b255630d40e592f55ec6b7ab5e3a96bfcdcd6f1cd1d2fc95f441e")),
taux: scalar(hex!("5957dba8ea9afb23d6e81cc048a92f2d502c10c749dc1b2bd148ae8d41ec7107")),
mu: scalar(hex!("923023b234c2e64774b820b4961f7181f6c1dc152c438643e5a25b0bf271bc02")),
L: vec![
point(hex!("c45f656316b9ebf9d357fb6a9f85b5f09e0b991dd50a6e0ae9b02de3946c9d99")),
point(hex!("9304d2bf0f27183a2acc58cc755a0348da11bd345485fda41b872fee89e72aac")),
point(hex!("1bb8b71925d155dd9569f64129ea049d6149fdc4e7a42a86d9478801d922129b")),
point(hex!("5756a7bf887aa72b9a952f92f47182122e7b19d89e5dd434c747492b00e1c6b7")),
point(hex!("6e497c910d102592830555356af5ff8340e8d141e3fb60ea24cfa587e964f07d")),
point(hex!("f4fa3898e7b08e039183d444f3d55040f3c790ed806cb314de49f3068bdbb218")),
point(hex!("0bbc37597c3ead517a3841e159c8b7b79a5ceaee24b2a9a20350127aab428713")),
],
R: vec![
point(hex!("609420ba1702781692e84accfd225adb3d077aedc3cf8125563400466b52dbd9")),
point(hex!("fb4e1d079e7a2b0ec14f7e2a3943bf50b6d60bc346a54fcf562fb234b342abf8")),
point(hex!("6ae3ac97289c48ce95b9c557289e82a34932055f7f5e32720139824fe81b12e5")),
point(hex!("d071cc2ffbdab2d840326ad15f68c01da6482271cae3cf644670d1632f29a15c")),
point(hex!("e52a1754b95e1060589ba7ce0c43d0060820ebfc0d49dc52884bc3c65ad18af5")),
point(hex!("41573b06140108539957df71aceb4b1816d2409ce896659aa5c86f037ca5e851")),
point(hex!("a65970b2cc3c7b08b2b5b739dbc8e71e646783c41c625e2a5b1535e3d2e0f742")),
],
a: scalar(hex!("0077c5383dea44d3cd1bc74849376bd60679612dc4b945255822457fa0c0a209")),
b: scalar(hex!("fe80cf5756473482581e1d38644007793ddc66fdeb9404ec1689a907e4863302")),
t: scalar(hex!("40dfb08e09249040df997851db311bd6827c26e87d6f0f332c55be8eef10e603"))
})
.verify(
&mut OsRng,
&[
// These vectors are * INV_EIGHT, as Monero's RCT format stores commitments premultiplied by
// 1/8, hence the mul_by_cofactor below to recover the actual commitments
point(hex!("8e8f23f315edae4f6c2f948d9a861e0ae32d356b933cd11d2f0e031ac744c41f"))
.mul_by_cofactor(),
point(hex!("2829cbd025aa54cd6e1b59a032564f22f0b2e5627f7f2c4297f90da438b5510f"))
.mul_by_cofactor(),
]
));
}
macro_rules! bulletproofs_tests {
($name: ident, $max: ident, $plus: literal) => {
#[test]
fn $name() {
// Create Bulletproofs for all possible output quantities
let mut verifier = BatchVerifier::new(16);
for i in 1 ..= 16 {
let commitments = (1 ..= i)
.map(|i| Commitment::new(random_scalar(&mut OsRng), u64::try_from(i).unwrap()))
.collect::<Vec<_>>();
let bp = Bulletproofs::prove(&mut OsRng, &commitments, $plus).unwrap();
let commitments = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
assert!(bp.verify(&mut OsRng, &commitments));
assert!(bp.batch_verify(&mut OsRng, &mut verifier, i, &commitments));
}
assert!(verifier.verify_vartime());
}
#[test]
fn $max() {
// Check Bulletproofs errors if we try to prove for too many outputs
let mut commitments = vec![];
for _ in 0 .. 17 {
commitments.push(Commitment::new(Scalar::ZERO, 0));
}
assert!(Bulletproofs::prove(&mut OsRng, &commitments, $plus).is_err());
}
};
}
bulletproofs_tests!(bulletproofs, bulletproofs_max, false);
bulletproofs_tests!(bulletproofs_plus, bulletproofs_plus_max, true);

View File

@@ -1,30 +0,0 @@
use rand_core::{RngCore, OsRng};
use multiexp::BatchVerifier;
use group::ff::Field;
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::bulletproofs::plus::aggregate_range_proof::{
AggregateRangeStatement, AggregateRangeWitness,
},
};
#[test]
fn test_aggregate_range_proof() {
let mut verifier = BatchVerifier::new(16);
for m in 1 ..= 16 {
let mut commitments = vec![];
for _ in 0 .. m {
commitments.push(Commitment::new(*Scalar::random(&mut OsRng), OsRng.next_u64()));
}
let commitment_points = commitments.iter().map(|com| EdwardsPoint(com.calculate())).collect();
let statement = AggregateRangeStatement::new(commitment_points).unwrap();
let witness = AggregateRangeWitness::new(&commitments).unwrap();
let proof = statement.clone().prove(&mut OsRng, &witness).unwrap();
statement.verify(&mut OsRng, &mut verifier, (), proof);
}
assert!(verifier.verify_vartime());
}

View File

@@ -1,4 +0,0 @@
#[cfg(test)]
mod weighted_inner_product;
#[cfg(test)]
mod aggregate_range_proof;

View File

@@ -1,81 +0,0 @@
// The inner product relation is P = sum(g_bold * a, h_bold * b, g * (a * y * b), h * alpha)
use rand_core::OsRng;
use multiexp::BatchVerifier;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::ringct::bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators,
weighted_inner_product::{WipStatement, WipWitness},
};
#[test]
fn test_zero_weighted_inner_product() {
#[allow(non_snake_case)]
let P = EdwardsPoint::identity();
let y = Scalar::random(&mut OsRng);
let generators = Generators::new().reduce(1);
let statement = WipStatement::new(generators, P, y);
let witness = WipWitness::new(ScalarVector::new(1), ScalarVector::new(1), Scalar::ZERO).unwrap();
let transcript = Scalar::random(&mut OsRng);
let proof = statement.clone().prove(&mut OsRng, transcript, &witness).unwrap();
let mut verifier = BatchVerifier::new(1);
statement.verify(&mut OsRng, &mut verifier, (), transcript, proof);
assert!(verifier.verify_vartime());
}
#[test]
fn test_weighted_inner_product() {
// P = sum(g_bold * a, h_bold * b, g * (a * y * b), h * alpha)
let mut verifier = BatchVerifier::new(6);
let generators = Generators::new();
for i in [1, 2, 4, 8, 16, 32] {
let generators = generators.reduce(i);
let g = Generators::g();
let h = Generators::h();
assert_eq!(generators.len(), i);
let mut g_bold = vec![];
let mut h_bold = vec![];
for i in 0 .. i {
g_bold.push(generators.generator(GeneratorsList::GBold1, i));
h_bold.push(generators.generator(GeneratorsList::HBold1, i));
}
let g_bold = PointVector(g_bold);
let h_bold = PointVector(h_bold);
let mut a = ScalarVector::new(i);
let mut b = ScalarVector::new(i);
let alpha = Scalar::random(&mut OsRng);
let y = Scalar::random(&mut OsRng);
let mut y_vec = ScalarVector::new(g_bold.len());
y_vec[0] = y;
for i in 1 .. y_vec.len() {
y_vec[i] = y_vec[i - 1] * y;
}
for i in 0 .. i {
a[i] = Scalar::random(&mut OsRng);
b[i] = Scalar::random(&mut OsRng);
}
#[allow(non_snake_case)]
let P = g_bold.multiexp(&a) +
h_bold.multiexp(&b) +
(g * a.clone().weighted_inner_product(&b, &y_vec)) +
(h * alpha);
let statement = WipStatement::new(generators, P, y);
let witness = WipWitness::new(a, b, alpha).unwrap();
let transcript = Scalar::random(&mut OsRng);
let proof = statement.clone().prove(&mut OsRng, transcript, &witness).unwrap();
statement.verify(&mut OsRng, &mut verifier, (), transcript, proof);
}
assert!(verifier.verify_vartime());
}

View File

@@ -1,127 +0,0 @@
use core::ops::Deref;
#[cfg(feature = "multisig")]
use std::sync::{Arc, RwLock};
use zeroize::Zeroizing;
use rand_core::{RngCore, OsRng};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar};
#[cfg(feature = "multisig")]
use transcript::{Transcript, RecommendedTranscript};
#[cfg(feature = "multisig")]
use frost::curve::Ed25519;
use crate::{
Commitment, random_scalar,
wallet::Decoys,
ringct::{
generate_key_image,
clsag::{ClsagInput, Clsag},
},
};
#[cfg(feature = "multisig")]
use crate::ringct::clsag::{ClsagDetails, ClsagMultisig};
#[cfg(feature = "multisig")]
use frost::{
Participant,
tests::{key_gen, algorithm_machines, sign},
};
const RING_LEN: u64 = 11;
const AMOUNT: u64 = 1337;
#[cfg(feature = "multisig")]
const RING_INDEX: u8 = 3;
#[test]
fn clsag() {
for real in 0 .. RING_LEN {
let msg = [1; 32];
let mut secrets = (Zeroizing::new(Scalar::ZERO), Scalar::ZERO);
let mut ring = vec![];
for i in 0 .. RING_LEN {
let dest = Zeroizing::new(random_scalar(&mut OsRng));
let mask = random_scalar(&mut OsRng);
let amount;
if i == real {
secrets = (dest.clone(), mask);
amount = AMOUNT;
} else {
amount = OsRng.next_u64();
}
ring
.push([dest.deref() * ED25519_BASEPOINT_TABLE, Commitment::new(mask, amount).calculate()]);
}
let image = generate_key_image(&secrets.0);
let (clsag, pseudo_out) = Clsag::sign(
&mut OsRng,
vec![(
secrets.0,
image,
ClsagInput::new(
Commitment::new(secrets.1, AMOUNT),
Decoys {
i: u8::try_from(real).unwrap(),
offsets: (1 ..= RING_LEN).collect(),
ring: ring.clone(),
},
)
.unwrap(),
)],
random_scalar(&mut OsRng),
msg,
)
.swap_remove(0);
clsag.verify(&ring, &image, &pseudo_out, &msg).unwrap();
}
}
#[cfg(feature = "multisig")]
#[test]
fn clsag_multisig() {
let keys = key_gen::<_, Ed25519>(&mut OsRng);
let randomness = random_scalar(&mut OsRng);
let mut ring = vec![];
for i in 0 .. RING_LEN {
let dest;
let mask;
let amount;
if i != u64::from(RING_INDEX) {
dest = &random_scalar(&mut OsRng) * ED25519_BASEPOINT_TABLE;
mask = random_scalar(&mut OsRng);
amount = OsRng.next_u64();
} else {
dest = keys[&Participant::new(1).unwrap()].group_key().0;
mask = randomness;
amount = AMOUNT;
}
ring.push([dest, Commitment::new(mask, amount).calculate()]);
}
let mask_sum = random_scalar(&mut OsRng);
let algorithm = ClsagMultisig::new(
RecommendedTranscript::new(b"Monero Serai CLSAG Test"),
keys[&Participant::new(1).unwrap()].group_key().0,
Arc::new(RwLock::new(Some(ClsagDetails::new(
ClsagInput::new(
Commitment::new(randomness, AMOUNT),
Decoys { i: RING_INDEX, offsets: (1 ..= RING_LEN).collect(), ring: ring.clone() },
)
.unwrap(),
mask_sum,
)))),
);
sign(
&mut OsRng,
&algorithm,
keys.clone(),
algorithm_machines(&mut OsRng, &algorithm, &keys),
&[1; 32],
);
}

View File

@@ -1,158 +0,0 @@
use crate::{
wallet::{ExtraField, Extra, extra::MAX_TX_EXTRA_PADDING_COUNT},
serialize::write_varint,
};
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
// Borrowed tests from
// https://github.com/monero-project/monero/blob/ac02af92867590ca80b2779a7bbeafa99ff94dcb/
// tests/unit_tests/test_tx_utils.cpp
const PUB_KEY_BYTES: [u8; 33] = [
1, 30, 208, 98, 162, 133, 64, 85, 83, 112, 91, 188, 89, 211, 24, 131, 39, 154, 22, 228, 80, 63,
198, 141, 173, 111, 244, 183, 4, 149, 186, 140, 230,
];
fn pub_key() -> EdwardsPoint {
CompressedEdwardsY(PUB_KEY_BYTES[1 .. PUB_KEY_BYTES.len()].try_into().expect("invalid pub key"))
.decompress()
.unwrap()
}
fn test_write_buf(extra: &Extra, buf: &[u8]) {
let mut w: Vec<u8> = vec![];
Extra::write(extra, &mut w).unwrap();
assert_eq!(buf, w);
}
#[test]
fn empty_extra() {
let buf: Vec<u8> = vec![];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert!(extra.0.is_empty());
test_write_buf(&extra, &buf);
}
#[test]
fn padding_only_size_1() {
let buf: Vec<u8> = vec![0];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::Padding(1)]);
test_write_buf(&extra, &buf);
}
#[test]
fn padding_only_size_2() {
let buf: Vec<u8> = vec![0, 0];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::Padding(2)]);
test_write_buf(&extra, &buf);
}
#[test]
fn padding_only_max_size() {
let buf: Vec<u8> = vec![0; MAX_TX_EXTRA_PADDING_COUNT];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::Padding(MAX_TX_EXTRA_PADDING_COUNT)]);
test_write_buf(&extra, &buf);
}
#[test]
fn padding_only_exceed_max_size() {
let buf: Vec<u8> = vec![0; MAX_TX_EXTRA_PADDING_COUNT + 1];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert!(extra.0.is_empty());
}
#[test]
fn invalid_padding_only() {
let buf: Vec<u8> = vec![0, 42];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert!(extra.0.is_empty());
}
#[test]
fn pub_key_only() {
let buf: Vec<u8> = PUB_KEY_BYTES.to_vec();
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::PublicKey(pub_key())]);
test_write_buf(&extra, &buf);
}
#[test]
fn extra_nonce_only() {
let buf: Vec<u8> = vec![2, 1, 42];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::Nonce(vec![42])]);
test_write_buf(&extra, &buf);
}
#[test]
fn extra_nonce_only_wrong_size() {
let mut buf: Vec<u8> = vec![0; 20];
buf[0] = 2;
buf[1] = 255;
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert!(extra.0.is_empty());
}
#[test]
fn pub_key_and_padding() {
let mut buf: Vec<u8> = PUB_KEY_BYTES.to_vec();
buf.extend([
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
]);
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::PublicKey(pub_key()), ExtraField::Padding(76)]);
test_write_buf(&extra, &buf);
}
#[test]
fn pub_key_and_invalid_padding() {
let mut buf: Vec<u8> = PUB_KEY_BYTES.to_vec();
buf.extend([0, 1]);
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::PublicKey(pub_key())]);
}
#[test]
fn extra_mysterious_minergate_only() {
let buf: Vec<u8> = vec![222, 1, 42];
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::MysteriousMinergate(vec![42])]);
test_write_buf(&extra, &buf);
}
#[test]
fn extra_mysterious_minergate_only_large() {
let mut buf: Vec<u8> = vec![222];
write_varint(&512u64, &mut buf).unwrap();
buf.extend_from_slice(&vec![0; 512]);
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(extra.0, vec![ExtraField::MysteriousMinergate(vec![0; 512])]);
test_write_buf(&extra, &buf);
}
#[test]
fn extra_mysterious_minergate_only_wrong_size() {
let mut buf: Vec<u8> = vec![0; 20];
buf[0] = 222;
buf[1] = 255;
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert!(extra.0.is_empty());
}
#[test]
fn extra_mysterious_minergate_and_pub_key() {
let mut buf: Vec<u8> = vec![222, 1, 42];
buf.extend(PUB_KEY_BYTES.to_vec());
let extra = Extra::read::<&[u8]>(&mut buf.as_ref()).unwrap();
assert_eq!(
extra.0,
vec![ExtraField::MysteriousMinergate(vec![42]), ExtraField::PublicKey(pub_key())]
);
test_write_buf(&extra, &buf);
}

View File

@@ -1,6 +0,0 @@
mod unreduced_scalar;
mod clsag;
mod bulletproofs;
mod address;
mod seed;
mod extra;

View File

@@ -1,482 +0,0 @@
use zeroize::Zeroizing;
use rand_core::OsRng;
use curve25519_dalek::scalar::Scalar;
use crate::{
hash,
wallet::seed::{
Seed, SeedType, SeedError,
classic::{self, trim_by_lang},
polyseed,
},
};
#[test]
fn test_classic_seed() {
struct Vector {
language: classic::Language,
seed: String,
spend: String,
view: String,
}
let vectors = [
Vector {
language: classic::Language::Chinese,
seed: "摇 曲 艺 武 滴 然 效 似 赏 式 祥 歌 买 疑 小 碧 堆 博 键 房 鲜 悲 付 喷 武".into(),
spend: "a5e4fff1706ef9212993a69f246f5c95ad6d84371692d63e9bb0ea112a58340d".into(),
view: "1176c43ce541477ea2f3ef0b49b25112b084e26b8a843e1304ac4677b74cdf02".into(),
},
Vector {
language: classic::Language::English,
seed: "washing thirsty occur lectures tuesday fainted toxic adapt \
abnormal memoir nylon mostly building shrugged online ember northern \
ruby woes dauntless boil family illness inroads northern"
.into(),
spend: "c0af65c0dd837e666b9d0dfed62745f4df35aed7ea619b2798a709f0fe545403".into(),
view: "513ba91c538a5a9069e0094de90e927c0cd147fa10428ce3ac1afd49f63e3b01".into(),
},
Vector {
language: classic::Language::Dutch,
seed: "setwinst riphagen vimmetje extase blief tuitelig fuiven meifeest \
ponywagen zesmaal ripdeal matverf codetaal leut ivoor rotten \
wisgerhof winzucht typograaf atrium rein zilt traktaat verzaagd setwinst"
.into(),
spend: "e2d2873085c447c2bc7664222ac8f7d240df3aeac137f5ff2022eaa629e5b10a".into(),
view: "eac30b69477e3f68093d131c7fd961564458401b07f8c87ff8f6030c1a0c7301".into(),
},
Vector {
language: classic::Language::French,
seed: "poids vaseux tarte bazar poivre effet entier nuance \
sensuel ennui pacte osselet poudre battre alibi mouton \
stade paquet pliage gibier type question position projet pliage"
.into(),
spend: "2dd39ff1a4628a94b5c2ec3e42fb3dfe15c2b2f010154dc3b3de6791e805b904".into(),
view: "6725b32230400a1032f31d622b44c3a227f88258939b14a7c72e00939e7bdf0e".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "minero ocupar mirar evadir octubre cal logro miope \
opaco disco ancla litio clase cuello nasal clase \
fiar avance deseo mente grumo negro cordón croqueta clase"
.into(),
spend: "ae2c9bebdddac067d73ec0180147fc92bdf9ac7337f1bcafbbe57dd13558eb02".into(),
view: "18deafb34d55b7a43cae2c1c1c206a3c80c12cc9d1f84640b484b95b7fec3e05".into(),
},
Vector {
language: classic::Language::German,
seed: "Kaliber Gabelung Tapir Liveband Favorit Specht Enklave Nabel \
Jupiter Foliant Chronik nisten löten Vase Aussage Rekord \
Yeti Gesetz Eleganz Alraune Künstler Almweide Jahr Kastanie Almweide"
.into(),
spend: "79801b7a1b9796856e2397d862a113862e1fdc289a205e79d8d70995b276db06".into(),
view: "99f0ec556643bd9c038a4ed86edcb9c6c16032c4622ed2e000299d527a792701".into(),
},
Vector {
language: classic::Language::Italian,
seed: "cavo pancetta auto fulmine alleanza filmato diavolo prato \
forzare meritare litigare lezione segreto evasione votare buio \
licenza cliente dorso natale crescere vento tutelare vetta evasione"
.into(),
spend: "5e7fd774eb00fa5877e2a8b4dc9c7ffe111008a3891220b56a6e49ac816d650a".into(),
view: "698a1dce6018aef5516e82ca0cb3e3ec7778d17dfb41a137567bfa2e55e63a03".into(),
},
Vector {
language: classic::Language::Portuguese,
seed: "agito eventualidade onus itrio holograma sodomizar objetos dobro \
iugoslavo bcrepuscular odalisca abjeto iuane darwinista eczema acetona \
cibernetico hoquei gleba driver buffer azoto megera nogueira agito"
.into(),
spend: "13b3115f37e35c6aa1db97428b897e584698670c1b27854568d678e729200c0f".into(),
view: "ad1b4fd35270f5f36c4da7166672b347e75c3f4d41346ec2a06d1d0193632801".into(),
},
Vector {
language: classic::Language::Japanese,
seed: "ぜんぶ どうぐ おたがい せんきょ おうじ そんちょう じゅしん いろえんぴつ \
かほう つかれる えらぶ にちじょう くのう にちようび ぬまえび さんきゃく \
おおや ちぬき うすめる いがく せつでん さうな すいえい せつだん おおや"
.into(),
spend: "c56e895cdb13007eda8399222974cdbab493640663804b93cbef3d8c3df80b0b".into(),
view: "6c3634a313ec2ee979d565c33888fd7c3502d696ce0134a8bc1a2698c7f2c508".into(),
},
Vector {
language: classic::Language::Russian,
seed: "шатер икра нация ехать получать инерция доза реальный \
рыжий таможня лопата душа веселый клетка атлас лекция \
обгонять паек наивный лыжный дурак стать ежик задача паек"
.into(),
spend: "7cb5492df5eb2db4c84af20766391cd3e3662ab1a241c70fc881f3d02c381f05".into(),
view: "fcd53e41ec0df995ab43927f7c44bc3359c93523d5009fb3f5ba87431d545a03".into(),
},
Vector {
language: classic::Language::Esperanto,
seed: "ukazo klini peco etikedo fabriko imitado onklino urino \
pudro incidento kumuluso ikono smirgi hirundo uretro krii \
sparkado super speciala pupo alpinisto cvana vokegi zombio fabriko"
.into(),
spend: "82ebf0336d3b152701964ed41df6b6e9a035e57fc98b84039ed0bd4611c58904".into(),
view: "cd4d120e1ea34360af528f6a3e6156063312d9cefc9aa6b5218d366c0ed6a201".into(),
},
Vector {
language: classic::Language::Lojban,
seed: "jetnu vensa julne xrotu xamsi julne cutci dakli \
mlatu xedja muvgau palpi xindo sfubu ciste cinri \
blabi darno dembi janli blabi fenki bukpu burcu blabi"
.into(),
spend: "e4f8c6819ab6cf792cebb858caabac9307fd646901d72123e0367ebc0a79c200".into(),
view: "c806ce62bafaa7b2d597f1a1e2dbe4a2f96bfd804bf6f8420fc7f4a6bd700c00".into(),
},
Vector {
language: classic::Language::EnglishOld,
seed: "glorious especially puff son moment add youth nowhere \
throw glide grip wrong rhythm consume very swear \
bitter heavy eventually begin reason flirt type unable"
.into(),
spend: "647f4765b66b636ff07170ab6280a9a6804dfbaf19db2ad37d23be024a18730b".into(),
view: "045da65316a906a8c30046053119c18020b07a7a3a6ef5c01ab2a8755416bd02".into(),
},
// The following seeds require the language to be specified in order to calculate
// a single valid checksum
Vector {
language: classic::Language::Spanish,
seed: "pluma laico atraer pintor peor cerca balde buscar \
lancha batir nulo reloj resto gemelo nevera poder columna gol \
oveja latir amplio bolero feliz fuerza nevera"
.into(),
spend: "30303983fc8d215dd020cc6b8223793318d55c466a86e4390954f373fdc7200a".into(),
view: "97c649143f3c147ba59aa5506cc09c7992c5c219bb26964442142bf97980800e".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "pluma pluma pluma pluma pluma pluma pluma pluma \
pluma pluma pluma pluma pluma pluma pluma pluma \
pluma pluma pluma pluma pluma pluma pluma pluma pluma"
.into(),
spend: "b4050000b4050000b4050000b4050000b4050000b4050000b4050000b4050000".into(),
view: "d73534f7912b395eb70ef911791a2814eb6df7ce56528eaaa83ff2b72d9f5e0f".into(),
},
Vector {
language: classic::Language::English,
seed: "plus plus plus plus plus plus plus plus \
plus plus plus plus plus plus plus plus \
plus plus plus plus plus plus plus plus plus"
.into(),
spend: "3b0400003b0400003b0400003b0400003b0400003b0400003b0400003b040000".into(),
view: "43a8a7715eed11eff145a2024ddcc39740255156da7bbd736ee66a0838053a02".into(),
},
Vector {
language: classic::Language::Spanish,
seed: "audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio audio"
.into(),
spend: "ba000000ba000000ba000000ba000000ba000000ba000000ba000000ba000000".into(),
view: "1437256da2c85d029b293d8c6b1d625d9374969301869b12f37186e3f906c708".into(),
},
Vector {
language: classic::Language::English,
seed: "audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio \
audio audio audio audio audio audio audio audio audio"
.into(),
spend: "7900000079000000790000007900000079000000790000007900000079000000".into(),
view: "20bec797ab96780ae6a045dd816676ca7ed1d7c6773f7022d03ad234b581d600".into(),
},
];
for vector in vectors {
let trim_seed = |seed: &str| {
seed
.split_whitespace()
.map(|word| trim_by_lang(word, vector.language))
.collect::<Vec<_>>()
.join(" ")
};
// Test against Monero
{
println!("{}. language: {:?}, seed: {}", line!(), vector.language, vector.seed.clone());
let seed =
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(vector.seed.clone()))
.unwrap();
let trim = trim_seed(&vector.seed);
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(trim)).unwrap()
);
let spend: [u8; 32] = hex::decode(vector.spend).unwrap().try_into().unwrap();
// For Classic seeds, Monero directly uses the entropy as the spend key
assert_eq!(
Option::<Scalar>::from(Scalar::from_canonical_bytes(*seed.entropy())),
Option::<Scalar>::from(Scalar::from_canonical_bytes(spend)),
);
let view: [u8; 32] = hex::decode(vector.view).unwrap().try_into().unwrap();
// Monero then derives the view key as H(spend)
assert_eq!(
Scalar::from_bytes_mod_order(hash(&spend)),
Scalar::from_canonical_bytes(view).unwrap()
);
assert_eq!(
Seed::from_entropy(SeedType::Classic(vector.language), Zeroizing::new(spend), None)
.unwrap(),
seed
);
}
// Test against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Classic(vector.language));
println!("{}. seed: {}", line!(), *seed.to_string());
let trim = trim_seed(&seed.to_string());
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), Zeroizing::new(trim)).unwrap()
);
assert_eq!(
seed,
Seed::from_entropy(SeedType::Classic(vector.language), seed.entropy(), None).unwrap()
);
assert_eq!(
seed,
Seed::from_string(SeedType::Classic(vector.language), seed.to_string()).unwrap()
);
}
}
}
#[test]
fn test_polyseed() {
struct Vector {
language: polyseed::Language,
seed: String,
entropy: String,
birthday: u64,
has_prefix: bool,
has_accent: bool,
}
let vectors = [
Vector {
language: polyseed::Language::English,
seed: "raven tail swear infant grief assist regular lamp \
duck valid someone little harsh puppy airport language"
.into(),
entropy: "dd76e7359a0ded37cd0ff0f3c829a5ae01673300000000000000000000000000".into(),
birthday: 1638446400,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Spanish,
seed: "eje fin parte célebre tabú pestaña lienzo puma \
prisión hora regalo lengua existir lápiz lote sonoro"
.into(),
entropy: "5a2b02df7db21fcbe6ec6df137d54c7b20fd2b00000000000000000000000000".into(),
birthday: 3118651200,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::French,
seed: "valable arracher décaler jeudi amusant dresser mener épaissir risible \
prouesse réserve ampleur ajuster muter caméra enchère"
.into(),
entropy: "11cfd870324b26657342c37360c424a14a050b00000000000000000000000000".into(),
birthday: 1679314966,
has_prefix: true,
has_accent: true,
},
Vector {
language: polyseed::Language::Italian,
seed: "caduco midollo copione meninge isotopo illogico riflesso tartaruga fermento \
olandese normale tristezza episodio voragine forbito achille"
.into(),
entropy: "7ecc57c9b4652d4e31428f62bec91cfd55500600000000000000000000000000".into(),
birthday: 1679316358,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Portuguese,
seed: "caverna custear azedo adeus senador apertada sedoso omitir \
sujeito aurora videira molho cartaz gesso dentista tapar"
.into(),
entropy: "45473063711376cae38f1b3eba18c874124e1d00000000000000000000000000".into(),
birthday: 1679316657,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Czech,
seed: "usmrtit nora dotaz komunita zavalit funkce mzda sotva akce \
vesta kabel herna stodola uvolnit ustrnout email"
.into(),
entropy: "7ac8a4efd62d9c3c4c02e350d32326df37821c00000000000000000000000000".into(),
birthday: 1679316898,
has_prefix: true,
has_accent: false,
},
Vector {
language: polyseed::Language::Korean,
seed: "전망 선풍기 국제 무궁화 설사 기름 이론적 해안 절망 예선 \
지우개 보관 절망 말기 시각 귀신"
.into(),
entropy: "684663fda420298f42ed94b2c512ed38ddf12b00000000000000000000000000".into(),
birthday: 1679317073,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::Japanese,
seed: "うちあわせ ちつじょ つごう しはい けんこう とおる てみやげ はんとし たんとう \
といれ おさない おさえる むかう ぬぐう なふだ せまる"
.into(),
entropy: "94e6665518a6286c6e3ba508a2279eb62b771f00000000000000000000000000".into(),
birthday: 1679318722,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseTraditional,
seed: "亂 挖 斤 柄 代 圈 枝 轄 魯 論 函 開 勘 番 榮 壁".into(),
entropy: "b1594f585987ab0fd5a31da1f0d377dae5283f00000000000000000000000000".into(),
birthday: 1679426433,
has_prefix: false,
has_accent: false,
},
Vector {
language: polyseed::Language::ChineseSimplified,
seed: "啊 百 族 府 票 划 伪 仓 叶 虾 借 溜 晨 左 等 鬼".into(),
entropy: "21cdd366f337b89b8d1bc1df9fe73047c22b0300000000000000000000000000".into(),
birthday: 1679426817,
has_prefix: false,
has_accent: false,
},
// The following seed requires the language to be specified in order to calculate
// a single valid checksum
Vector {
language: polyseed::Language::Spanish,
seed: "impo sort usua cabi venu nobl oliv clim \
cont barr marc auto prod vaca torn fati"
.into(),
entropy: "dbfce25fe09b68a340e01c62417eeef43ad51800000000000000000000000000".into(),
birthday: 1701511650,
has_prefix: true,
has_accent: true,
},
];
for vector in vectors {
let add_whitespace = |mut seed: String| {
seed.push(' ');
seed
};
let seed_without_accents = |seed: &str| {
seed
.split_whitespace()
.map(|w| w.chars().filter(char::is_ascii).collect::<String>())
.collect::<Vec<_>>()
.join(" ")
};
let trim_seed = |seed: &str| {
let seed_to_trim =
if vector.has_accent { seed_without_accents(seed) } else { seed.to_string() };
seed_to_trim
.split_whitespace()
.map(|w| {
let mut ascii = 0;
let mut to_take = w.len();
for (i, char) in w.chars().enumerate() {
if char.is_ascii() {
ascii += 1;
}
if ascii == polyseed::PREFIX_LEN {
// +1 to include this character, which puts us at the prefix length
to_take = i + 1;
break;
}
}
w.chars().take(to_take).collect::<String>()
})
.collect::<Vec<_>>()
.join(" ")
};
// String -> Seed
println!("{}. language: {:?}, seed: {}", line!(), vector.language, vector.seed.clone());
let seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(vector.seed.clone()))
.unwrap();
let trim = trim_seed(&vector.seed);
let add_whitespace = add_whitespace(vector.seed.clone());
let seed_without_accents = seed_without_accents(&vector.seed);
// Make sure a version with added whitespace still works
let whitespaced_seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(add_whitespace))
.unwrap();
assert_eq!(seed, whitespaced_seed);
// Check trimmed versions work
if vector.has_prefix {
let trimmed_seed =
Seed::from_string(SeedType::Polyseed(vector.language), Zeroizing::new(trim)).unwrap();
assert_eq!(seed, trimmed_seed);
}
// Check versions without accents work
if vector.has_accent {
let seed_without_accents = Seed::from_string(
SeedType::Polyseed(vector.language),
Zeroizing::new(seed_without_accents),
)
.unwrap();
assert_eq!(seed, seed_without_accents);
}
let entropy = Zeroizing::new(hex::decode(vector.entropy).unwrap().try_into().unwrap());
assert_eq!(seed.entropy(), entropy);
assert!(seed.birthday().abs_diff(vector.birthday) < polyseed::TIME_STEP);
// Entropy -> Seed
let from_entropy =
Seed::from_entropy(SeedType::Polyseed(vector.language), entropy, Some(seed.birthday()))
.unwrap();
assert_eq!(seed.to_string(), from_entropy.to_string());
// Check against ourselves
{
let seed = Seed::new(&mut OsRng, SeedType::Polyseed(vector.language));
println!("{}. seed: {}", line!(), *seed.to_string());
assert_eq!(
seed,
Seed::from_string(SeedType::Polyseed(vector.language), seed.to_string()).unwrap()
);
assert_eq!(
seed,
Seed::from_entropy(
SeedType::Polyseed(vector.language),
seed.entropy(),
Some(seed.birthday())
)
.unwrap()
);
}
}
}
#[test]
fn test_invalid_polyseed() {
// This seed includes unsupported feature bits and should error on decode
let seed = "include domain claim resemble urban hire lunch bird \
crucial fire best wife ring warm ignore model"
.into();
let res =
Seed::from_string(SeedType::Polyseed(polyseed::Language::English), Zeroizing::new(seed));
assert_eq!(res, Err(SeedError::UnsupportedFeatures));
}

View File

@@ -1,32 +0,0 @@
use curve25519_dalek::scalar::Scalar;
use crate::unreduced_scalar::*;
#[test]
fn recover_scalars() {
let test_recover = |stored: &str, recovered: &str| {
let stored = UnreducedScalar(hex::decode(stored).unwrap().try_into().unwrap());
let recovered =
Scalar::from_canonical_bytes(hex::decode(recovered).unwrap().try_into().unwrap()).unwrap();
assert_eq!(stored.recover_monero_slide_scalar(), recovered);
};
// https://www.moneroinflation.com/static/data_py/report_scalars_df.pdf
// Table 4.
test_recover(
"cb2be144948166d0a9edb831ea586da0c376efa217871505ad77f6ff80f203f8",
"b8ffd6a1aee47828808ab0d4c8524cb5c376efa217871505ad77f6ff80f20308",
);
test_recover(
"343d3df8a1051c15a400649c423dc4ed58bef49c50caef6ca4a618b80dee22f4",
"21113355bc682e6d7a9d5b3f2137a30259bef49c50caef6ca4a618b80dee2204",
);
test_recover(
"c14f75d612800ca2c1dcfa387a42c9cc086c005bc94b18d204dd61342418eba7",
"4f473804b1d27ab2c789c80ab21d034a096c005bc94b18d204dd61342418eb07",
);
test_recover(
"000102030405060708090a0b0c0d0e0f826c4f6e2329a31bc5bc320af0b2bcbb",
"a124cfd387f461bf3719e03965ee6877826c4f6e2329a31bc5bc320af0b2bc0b",
);
}

View File

@@ -1,230 +0,0 @@
[
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5Jye2v3pYyUDn",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": false,
"integrated": false,
"guaranteed": false
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5Jye2v3wfMHCy",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": true,
"integrated": false,
"guaranteed": false
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5JyeeJTo4p5ayvj36PStM5AX",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": false,
"integrated": true,
"payment_id": [46, 48, 134, 34, 245, 148, 243, 195],
"guaranteed": false
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5JyeeJWv5WqMCNE2hRs9rJfy",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": true,
"integrated": true,
"payment_id": [153, 176, 98, 204, 151, 27, 197, 168],
"guaranteed": false
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5Jye2v4DwqwH1",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": false,
"integrated": false,
"guaranteed": true
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5Jye2v4Pyz8bD",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": true,
"integrated": false,
"guaranteed": true
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5JyeeJcwt7hykou237MqZZDA",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": false,
"integrated": true,
"payment_id": [88, 37, 149, 111, 171, 108, 120, 181],
"guaranteed": true
},
{
"address": "CjWdTpuDaZ69nTGxzm9YarR82YDYFECi1WaaREZTMy5yDsjaRX5bC3cbC3JpcrBPd7YYpjoWKuBMidgGaKBK5JyeeJfTrFAp69u2MYbf5YeN",
"network": "Mainnet",
"spend": "258dfe7eef9be934839f3b8e0d40e79035fe85879c0a9eb0d7372ae2deb0004c",
"view": "f91382373045f3cc69233254ab0406bc9e008707569ff9db4718654812d839df",
"subaddress": true,
"integrated": true,
"payment_id": [125, 69, 155, 152, 140, 160, 157, 186],
"guaranteed": true
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x712U9w7ScYA",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": false,
"integrated": false,
"guaranteed": false
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x712UA2gCrT1",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": true,
"integrated": false,
"guaranteed": false
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x71Vc1DbPKwJu81cxJjqBkS",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": false,
"integrated": true,
"payment_id": [92, 225, 118, 220, 39, 3, 72, 51],
"guaranteed": false
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x71Vc2o1rPMaXN31Fe5J6dn",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": true,
"integrated": true,
"payment_id": [20, 120, 47, 89, 72, 165, 233, 115],
"guaranteed": false
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x712UAQHCRZ4",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": false,
"integrated": false,
"guaranteed": true
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x712UAUzqaii",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": true,
"integrated": false,
"guaranteed": true
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x71VcAsfQc3gJQ2gHLd5DiQ",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": false,
"integrated": true,
"payment_id": [193, 149, 123, 214, 180, 205, 195, 91],
"guaranteed": true
},
{
"address": "Kgx5uCVsMSEVm7seL8tjyRGmmVXjWfEowKpKjgaXUGVyMViBYMh13VQ4mfqpB7zEVVcJx3E8FFgAuQ8cq6mg5x71VcDBAD5jbZQ3AMHFyvQB",
"network": "Testnet",
"spend": "bba3a8a5bb47f7abf2e2dffeaf43385e4b308fd63a9ff6707e355f3b0a6c247a",
"view": "881713a4fa9777168a54bbdcb75290d319fb92fdf1026a8a4b125a8e341de8ab",
"subaddress": true,
"integrated": true,
"payment_id": [205, 170, 65, 0, 51, 175, 251, 184],
"guaranteed": true
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV61VPJnBtTP",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": false,
"integrated": false,
"guaranteed": false
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV61VPUrwMvP",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": true,
"integrated": false,
"guaranteed": false
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV6AY5ECEhP5Nr1aCRPXdxk",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": false,
"integrated": true,
"payment_id": [173, 149, 78, 64, 215, 211, 66, 170],
"guaranteed": false
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV6AY882kTUS1D2LttnPvTR",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": true,
"integrated": true,
"payment_id": [254, 159, 186, 162, 1, 8, 156, 108],
"guaranteed": false
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV61VPpBBo8F",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": false,
"integrated": false,
"guaranteed": true
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV61VPuUJX3b",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": true,
"integrated": false,
"guaranteed": true
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV6AYCZPxVAoDu21DryMoto",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": false,
"integrated": true,
"payment_id": [3, 115, 230, 129, 172, 108, 116, 235],
"guaranteed": true
},
{
"address": "FSDinqdKK54PbjF73GgW3nUpf7bF8QbyxFCUurENmUyeEfSxSLL2hxwANBLzq1A8gTSAzzEn65hKjetA8o5BvjV6AYFYCqKQAWL18KkpBQ8R",
"network": "Stagenet",
"spend": "4cd503040f5e43871bf37d8ca7177da655bda410859af754e24e7b44437f3151",
"view": "af60d42b6c6e4437fd93eb32657a14967efa393630d7aee27b5973c8e1c5ad39",
"subaddress": true,
"integrated": true,
"payment_id": [94, 122, 63, 167, 209, 225, 14, 180],
"guaranteed": true
}
]

View File

@@ -1,432 +0,0 @@
use core::cmp::Ordering;
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use zeroize::Zeroize;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use crate::{
Protocol, hash,
serialize::*,
ring_signatures::RingSignature,
ringct::{bulletproofs::Bulletproofs, RctType, RctBase, RctPrunable, RctSignatures},
};
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Input {
Gen(u64),
ToKey { amount: Option<u64>, key_offsets: Vec<u64>, key_image: EdwardsPoint },
}
impl Input {
pub(crate) fn fee_weight(offsets_weight: usize) -> usize {
// Uses 1 byte for the input type
// Uses 1 byte for the VarInt amount due to amount being 0
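// 32 bytes for the key image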
1 + 1 + offsets_weight + 32
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Input::Gen(height) => {
w.write_all(&[255])?;
write_varint(height, w)
}
Input::ToKey { amount, key_offsets, key_image } => {
w.write_all(&[2])?;
write_varint(&amount.unwrap_or(0), w)?;
write_vec(write_varint, key_offsets, w)?;
write_point(key_image, w)
}
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Input> {
Ok(match read_byte(r)? {
255 => Input::Gen(read_varint(r)?),
2 => {
let amount = read_varint(r)?;
// https://github.com/monero-project/monero/
// blob/00fd416a99686f0956361d1cd0337fe56e58d4a7/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L860-L863
// A non-RCT 0-amount input can't exist because only RCT TXs can have a 0-amount output
// That's why collapsing a 0 amount to None is safe, even without knowing if this TX is RCT
let amount = if amount == 0 { None } else { Some(amount) };
Input::ToKey {
amount,
key_offsets: read_vec(read_varint, r)?,
key_image: read_torsion_free_point(r)?,
}
}
_ => Err(io::Error::other("Tried to deserialize unknown/unused input type"))?,
})
}
}
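// A sketch, not part of the original source: an RCT ToKey input round-trips through write/read
// (using the Ed25519 basepoint, which is torsion-free, as a stand-in key image).
#[cfg(test)]
mod input_sketch {
  use super::*;
  use curve25519_dalek::constants::ED25519_BASEPOINT_POINT;

  #[test]
  fn to_key_round_trip() {
    let input = Input::ToKey {
      amount: None,
      key_offsets: vec![1, 2, 3],
      key_image: ED25519_BASEPOINT_POINT,
    };
    let buf = input.serialize();
    assert_eq!(Input::read(&mut buf.as_slice()).unwrap(), input);
  }
}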
// Doesn't bother moving to an enum for the unused Script classes
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Output {
pub amount: Option<u64>,
pub key: CompressedEdwardsY,
pub view_tag: Option<u8>,
}
impl Output {
pub(crate) fn fee_weight(view_tags: bool) -> usize {
// Uses 1 byte for the output type
// Uses 1 byte for the VarInt amount due to amount being 0
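// 32 bytes for the output key, with 1 more byte for the view tag when enabled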
1 + 1 + 32 + if view_tags { 1 } else { 0 }
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.amount.unwrap_or(0), w)?;
w.write_all(&[2 + u8::from(self.view_tag.is_some())])?;
w.write_all(&self.key.to_bytes())?;
if let Some(view_tag) = self.view_tag {
w.write_all(&[view_tag])?;
}
Ok(())
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(8 + 1 + 32);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(rct: bool, r: &mut R) -> io::Result<Output> {
let amount = read_varint(r)?;
let amount = if rct {
if amount != 0 {
Err(io::Error::other("RCT TX output wasn't 0"))?;
}
None
} else {
Some(amount)
};
let view_tag = match read_byte(r)? {
2 => false,
3 => true,
_ => Err(io::Error::other("Tried to deserialize unknown/unused output type"))?,
};
Ok(Output {
amount,
key: CompressedEdwardsY(read_bytes(r)?),
view_tag: if view_tag { Some(read_byte(r)?) } else { None },
})
}
}
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
pub enum Timelock {
None,
Block(usize),
Time(u64),
}
impl Timelock {
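// Per Monero's convention, 0 means no lock, values under 500 million are block heights, and
// anything else is a UNIX timestamp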
fn from_raw(raw: u64) -> Timelock {
if raw == 0 {
Timelock::None
} else if raw < 500_000_000 {
Timelock::Block(usize::try_from(raw).unwrap())
} else {
Timelock::Time(raw)
}
}
fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(
&match self {
Timelock::None => 0,
Timelock::Block(block) => (*block).try_into().unwrap(),
Timelock::Time(time) => *time,
},
w,
)
}
}
impl PartialOrd for Timelock {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
match (self, other) {
(Timelock::None, Timelock::None) => Some(Ordering::Equal),
(Timelock::None, _) => Some(Ordering::Less),
(_, Timelock::None) => Some(Ordering::Greater),
(Timelock::Block(a), Timelock::Block(b)) => a.partial_cmp(b),
(Timelock::Time(a), Timelock::Time(b)) => a.partial_cmp(b),
_ => None,
}
}
}
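// A sketch, not part of the original source, of this partial order: block-based and time-based
// locks are incomparable, so callers must handle None.
#[cfg(test)]
mod timelock_sketch {
  use super::*;

  #[test]
  fn mixed_timelocks_are_incomparable() {
    assert!(Timelock::None < Timelock::Block(1));
    assert!(Timelock::Block(1) < Timelock::Block(2));
    assert_eq!(Timelock::Block(1).partial_cmp(&Timelock::Time(1_000_000_000)), None);
  }
}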
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct TransactionPrefix {
pub version: u64,
pub timelock: Timelock,
pub inputs: Vec<Input>,
pub outputs: Vec<Output>,
pub extra: Vec<u8>,
}
impl TransactionPrefix {
pub(crate) fn fee_weight(
decoy_weights: &[usize],
outputs: usize,
view_tags: bool,
extra: usize,
) -> usize {
// Assumes Timelock::None since this library won't let you create a TX with a timelock
// 1 input for every decoy weight
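// The leading 1 + 1 is the version VarInt and the timelock VarInt (each a single byte here)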
1 + 1 +
varint_len(decoy_weights.len()) +
decoy_weights.iter().map(|&offsets_weight| Input::fee_weight(offsets_weight)).sum::<usize>() +
varint_len(outputs) +
(outputs * Output::fee_weight(view_tags)) +
varint_len(extra) +
extra
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.version, w)?;
self.timelock.write(w)?;
write_vec(Input::write, &self.inputs, w)?;
write_vec(Output::write, &self.outputs, w)?;
write_varint(&self.extra.len(), w)?;
w.write_all(&self.extra)
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = vec![];
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<TransactionPrefix> {
let version = read_varint(r)?;
// TODO: Create an enum out of version
if (version == 0) || (version > 2) {
Err(io::Error::other("unrecognized transaction version"))?;
}
let timelock = Timelock::from_raw(read_varint(r)?);
let inputs = read_vec(|r| Input::read(r), r)?;
if inputs.is_empty() {
Err(io::Error::other("transaction had no inputs"))?;
}
let is_miner_tx = matches!(inputs[0], Input::Gen { .. });
let mut prefix = TransactionPrefix {
version,
timelock,
inputs,
outputs: read_vec(|r| Output::read((!is_miner_tx) && (version == 2), r), r)?,
extra: vec![],
};
prefix.extra = read_vec(read_byte, r)?;
Ok(prefix)
}
pub fn hash(&self) -> [u8; 32] {
hash(&self.serialize())
}
}
/// Monero transaction. For version 1, rct_signatures still contains an accurate fee value.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Transaction {
pub prefix: TransactionPrefix,
pub signatures: Vec<RingSignature>,
pub rct_signatures: RctSignatures,
}
impl Transaction {
pub(crate) fn fee_weight(
protocol: Protocol,
decoy_weights: &[usize],
outputs: usize,
extra: usize,
fee: u64,
) -> usize {
TransactionPrefix::fee_weight(decoy_weights, outputs, protocol.view_tags(), extra) +
RctSignatures::fee_weight(protocol, decoy_weights.len(), outputs, fee)
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.prefix.write(w)?;
if self.prefix.version == 1 {
for ring_sig in &self.signatures {
ring_sig.write(w)?;
}
Ok(())
} else if self.prefix.version == 2 {
self.rct_signatures.write(w)
} else {
panic!("Serializing a transaction with an unknown version");
}
}
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::with_capacity(2048);
self.write(&mut res).unwrap();
res
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Transaction> {
let prefix = TransactionPrefix::read(r)?;
let mut signatures = vec![];
let mut rct_signatures = RctSignatures {
base: RctBase { fee: 0, encrypted_amounts: vec![], pseudo_outs: vec![], commitments: vec![] },
prunable: RctPrunable::Null,
};
if prefix.version == 1 {
signatures = prefix
.inputs
.iter()
.filter_map(|input| match input {
Input::ToKey { key_offsets, .. } => Some(RingSignature::read(key_offsets.len(), r)),
_ => None,
})
.collect::<Result<_, _>>()?;
if !matches!(prefix.inputs[0], Input::Gen(..)) {
let in_amount = prefix
.inputs
.iter()
.map(|input| match input {
Input::Gen(..) => Err(io::Error::other("Input::Gen present in non-coinbase v1 TX"))?,
// v1 TXs can burn v2 outputs
// dcff3fe4f914d6b6bd4a5b800cc4cca8f2fdd1bd73352f0700d463d36812f328 is one such TX
// It includes a pre-RCT signature for a RCT output, yet if you interpret the RCT
// output as being worth 0, it passes a sum check (guaranteed since no outputs are RCT)
Input::ToKey { amount, .. } => Ok(amount.unwrap_or(0)),
})
.collect::<io::Result<Vec<_>>>()?
.into_iter()
.sum::<u64>();
let mut out = 0;
for output in &prefix.outputs {
if output.amount.is_none() {
Err(io::Error::other("v1 transaction had a 0-amount output"))?;
}
out += output.amount.unwrap();
}
if in_amount < out {
Err(io::Error::other("transaction spent more than it had as inputs"))?;
}
rct_signatures.base.fee = in_amount - out;
}
} else if prefix.version == 2 {
rct_signatures = RctSignatures::read(
prefix.inputs.first().map_or(0, |input| match input {
Input::Gen(_) => 0,
Input::ToKey { key_offsets, .. } => key_offsets.len(),
}),
prefix.inputs.len(),
prefix.outputs.len(),
r,
)?;
} else {
Err(io::Error::other("Tried to deserialize unknown version"))?;
}
Ok(Transaction { prefix, signatures, rct_signatures })
}
pub fn hash(&self) -> [u8; 32] {
let mut buf = Vec::with_capacity(2048);
if self.prefix.version == 1 {
self.write(&mut buf).unwrap();
hash(&buf)
} else {
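// v2 TX hashes are the hash of three component hashes: the prefix, the RCT base, and the
// RCT prunable data (32 zero bytes when RctPrunable::Null)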
let mut hashes = Vec::with_capacity(96);
hashes.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hashes.extend(hash(&buf));
buf.clear();
hashes.extend(&match self.rct_signatures.prunable {
RctPrunable::Null => [0; 32],
_ => {
self.rct_signatures.prunable.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
hash(&buf)
}
});
hash(&hashes)
}
}
/// Calculate the hash of this transaction as needed for signing it.
pub fn signature_hash(&self) -> [u8; 32] {
if self.prefix.version == 1 {
return self.prefix.hash();
}
let mut buf = Vec::with_capacity(2048);
let mut sig_hash = Vec::with_capacity(96);
sig_hash.extend(self.prefix.hash());
self.rct_signatures.base.write(&mut buf, self.rct_signatures.rct_type()).unwrap();
sig_hash.extend(hash(&buf));
buf.clear();
self.rct_signatures.prunable.signature_write(&mut buf).unwrap();
sig_hash.extend(hash(&buf));
hash(&sig_hash)
}
fn is_rct_bulletproof(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::Bulletproofs | RctType::BulletproofsCompactAmount | RctType::Clsag => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::BulletproofsPlus => false,
}
}
fn is_rct_bulletproof_plus(&self) -> bool {
match &self.rct_signatures.rct_type() {
RctType::BulletproofsPlus => true,
RctType::Null |
RctType::MlsagAggregate |
RctType::MlsagIndividual |
RctType::Bulletproofs |
RctType::BulletproofsCompactAmount |
RctType::Clsag => false,
}
}
/// Calculate the transaction's weight, as used for fee purposes. For Bulletproof(+) TXs, this
/// includes Monero's clawback, charging back part of the space saved by proof aggregation.
pub fn weight(&self) -> usize {
let blob_size = self.serialize().len();
let bp = self.is_rct_bulletproof();
let bp_plus = self.is_rct_bulletproof_plus();
if !(bp || bp_plus) {
blob_size
} else {
blob_size + Bulletproofs::calculate_bp_clawback(bp_plus, self.prefix.outputs.len()).0
}
}
}

Some files were not shown because too many files have changed in this diff