1390 Commits

Author SHA1 Message Date
Luke Parker
8404844c4e Create a Snapshot within RocksDB when starting a transaction
This ensures that if a TXN is reading a value and another TXN mutates it, the first
TXN's actions remain atomic with respect to the original value (and aren't affected by the inconsistency).

Because we can't replicate this with parity-db, I'm not sure this is worth
committing.
2024-03-31 10:20:59 -04:00
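A minimal sketch of the snapshot idea, using the plain rust-rocksdb DB type rather than the repo's actual DB wrapper (path and keys are illustrative): reads pinned to a snapshot taken when the transaction starts aren't affected by writes landing afterwards.

  use rocksdb::DB;

  fn main() -> Result<(), rocksdb::Error> {
    let db = DB::open_default("./example-db")?;
    db.put(b"balance", b"100")?;

    // Pin a snapshot at the point the transaction begins.
    let snap = db.snapshot();

    // A write landing afterwards (e.g. from another TXN)...
    db.put(b"balance", b"0")?;

    // ...isn't visible through the pinned snapshot, so the first TXN's reads stay consistent.
    assert_eq!(snap.get(b"balance")?, Some(b"100".to_vec()));
    Ok(())
  }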
Luke Parker
93be7a3067 Latest hyper-rustls, remove async-recursion
I didn't remove async-recursion when I updated the repo to 1.77 as I forgot we
used it in the tests. I still had to add some Box::pins, which may also have been a
valid option on the prior Rust version, yet this at least resolves everything now.

Also updates everything which doesn't introduce further dependencies.
2024-03-27 00:17:04 -04:00
noot
63521f6a96 implement Router.sol and associated functions (#92)
* start Router contract

* use calldata for function args

* var name changes

* start testing router contract

* test with and without abi.encode

* cleanup

* why tf isn't tests/utils working

* cleanup tests

* remove unused files

* wip

* fix router contract and tests, add set/update public keys funcs

* impl some Froms

* make execute non-reentrant

* cleanup

* update Router to use ReentrancyGuard

* update contract to use errors, use bitfield in Executed event, minor other fixes

* wip

* fix build issues from merge, tests ok

* Router.sol cleanup

* cleanup, uncomment stuff

* bump ethers.rs version to latest

* make contract functions take generic middleware

* update build script to assert no compiler errors

* hardcode pubkey parity into contract, update tests

* Polish coins/ethereum in various ways

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-03-24 09:00:54 -04:00
Luke Parker
3d855c75be Create group before adding to it 2024-03-24 00:18:40 -04:00
Luke Parker
07df9aa035 Ensure user is in a group 2024-03-24 00:03:32 -04:00
Luke Parker
bc44fbdbac Add TODO to coordinator P2P 2024-03-23 23:32:21 -04:00
Luke Parker
4cacce5e55 Perform key share amortization on-chain to avoid discrepancies 2024-03-23 23:32:14 -04:00
Luke Parker
7408e26781 Don't regenerate infrastructure keys
Enables running setup without invalidating the message queue
2024-03-23 23:32:04 -04:00
Luke Parker
1f92e1cbda Fixes for prior commit 2024-03-23 23:31:55 -04:00
Luke Parker
333a9571b8 Use volumes for message-queue/processors/coordinator/serai 2024-03-23 23:31:44 -04:00
Luke Parker
b7d49af1d5 Track total peer count in the coordinator 2024-03-23 18:02:48 -04:00
Luke Parker
5ea3b1bf97 Use " " instead of "" for the empty key so sh doesn't interpret it as falsy 2024-03-23 17:38:50 -04:00
Luke Parker
2a31d8552e Add empty string for the KEY to serai-client to use the default keystore 2024-03-23 16:48:12 -04:00
Luke Parker
bca3728a10 Randomly select an addr from the authority discovery 2024-03-23 00:09:23 -04:00
Luke Parker
4914420a37 Don't add as an explicit peer if already connected 2024-03-22 23:51:51 -04:00
Luke Parker
f11a08c436 Peer finding which won't get stuck on one specific network 2024-03-22 23:47:43 -04:00
Luke Parker
35b58a45bd Split peer finding into a dedicated task 2024-03-22 23:40:15 -04:00
Luke Parker
af9b1ad5f9 Initial pruning of backlogged consensus messages 2024-03-22 23:18:53 -04:00
Luke Parker
e5afcda76b Explicitly use "" for KEY within the tests
Causes the provided keystore to be used over our keystore.
2024-03-22 23:05:40 -04:00
j-berman
08c7c1b413 monero: reference updated PR in fee test comment 2024-03-22 22:29:55 -04:00
Luke Parker
bdf5a66e95 Correct Serai key provision 2024-03-22 17:11:58 -04:00
Luke Parker
e861859dec Update EpochDuration in runtime 2024-03-22 16:18:01 -04:00
Luke Parker
6658d95c85 Extend orchestration as actually needed for testnet
Contains various bug fixes.
2024-03-22 16:15:26 -04:00
Luke Parker
2f07d04d88 Extend timeout for rebroadcast of consensus messages in coordinator 2024-03-22 16:06:31 -04:00
Luke Parker
e0259f2fe5 Add TODO re: Monero 2024-03-22 16:06:04 -04:00
Luke Parker
fab7a0a7cb Use the deterministically built wasm
Has the Dockerfile output to a volume. Has the node use the wasm from the
volume, if it exists.
2024-03-22 02:19:09 -04:00
Luke Parker
84cee06ac1 Rust 1.77 2024-03-21 20:09:33 -04:00
Luke Parker
c706d8664a Use OptimisticTransactionDb
Exposes flush calls.

Adds safety, at the cost of a panic risk, as multiple TXNs simultaneously
writing to a key will now cause a panic. This should be fine and the safety is
appreciated.
2024-03-20 23:42:40 -04:00
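A rough sketch of the described write-write conflict behavior, assuming the rust-rocksdb OptimisticTransactionDB API (not the repo's actual DB trait; path and keys are illustrative): two transactions writing the same key are detected at commit time, with the later commit erroring.

  use rocksdb::OptimisticTransactionDB;

  fn main() -> Result<(), rocksdb::Error> {
    let db: OptimisticTransactionDB = OptimisticTransactionDB::open_default("./example-db")?;

    let txn_a = db.transaction();
    let txn_b = db.transaction();

    // Both transactions write the same key.
    txn_a.put(b"key", b"a")?;
    txn_b.put(b"key", b"b")?;

    // The first commit succeeds; the second should surface a conflict error,
    // which the commit above says is turned into a panic for safety.
    txn_a.commit()?;
    assert!(txn_b.commit().is_err());
    Ok(())
  }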
Luke Parker
1f2b9376f9 zstd 0.13 2024-03-20 21:53:57 -04:00
Luke Parker
13b147cbf6 Reduce coordinator tests contention re: cosign messages 2024-03-20 08:23:23 -04:00
Luke Parker
4a6496a90b Add slightly nicer formatting re: Protocol Changes doc 2024-03-12 00:59:51 -04:00
Luke Parker
9662d94bf9 Document the signals pallet in the user-facing docs 2024-03-12 00:56:06 -04:00
Luke Parker
233164cefd Flesh out docs more 2024-03-11 23:51:44 -04:00
Luke Parker
442d8c02fc Add docs, correct URL 2024-03-11 20:00:01 -04:00
Luke Parker
d1be9eaa2d Change baseurl to /docs 2024-03-11 18:02:54 -04:00
Luke Parker
c32d3413ba Add just-the-docs based user-facing documentation 2024-03-11 17:55:27 -04:00
Luke Parker
a3a009a7e9 Move docs to spec 2024-03-11 17:55:05 -04:00
Luke Parker
0889627e60 Typo fix for prior commit 2024-03-11 02:20:51 -04:00
Luke Parker
ace41c79fd Tidy the BlockHasEvents cache 2024-03-11 01:44:00 -04:00
Luke Parker
f7d16b3fc5 Fix 0 - 1 which caused a panic 2024-03-09 05:37:41 -05:00
Luke Parker
157acc47ca More aggressive WAL parameters 2024-03-09 05:05:43 -05:00
Luke Parker
ae0ecf9efe Disable jemalloc for rocksdb 0.22 to fix windows builds 2024-03-09 04:26:24 -05:00
Luke Parker
6374d9987e Correct how we save the block to scan from 2024-03-09 03:48:44 -05:00
Luke Parker
c93f6bf901 Replace yield_now with sleep 100 to prevent hammering a task, despite still being over-eager 2024-03-09 03:34:31 -05:00
Luke Parker
61a81e53e1 Further optimize cosign DB 2024-03-09 03:31:06 -05:00
Luke Parker
68dc872b88 sync every txn 2024-03-09 03:18:52 -05:00
Luke Parker
89b237af7e Correct the return value of block_has_events 2024-03-09 02:44:04 -05:00
Luke Parker
2347bf5fd3 Bound cosign work and ensure it progresses forward even when cosigns don't occur
Should resolve the DB load observed on testnet.
2024-03-09 02:20:23 -05:00
Luke Parker
97f433c694 Redo how WAL/logs are limited by the DB
Adds a patch to the latest rocksdb.
2024-03-09 02:20:14 -05:00
Luke Parker
10f5ec51ca Explicitly limit RocksDB logs 2024-03-08 09:19:34 -05:00
Luke Parker
454bebaa77 Have the TendermintMachine domain-separate by genesis
Enables support for multiple machines over the same DB.
2024-03-08 01:22:02 -05:00
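A hypothetical sketch of the domain separation: prefix every machine-local DB key with the genesis hash so multiple TendermintMachines can share one database without colliding (the key layout here is illustrative, not the actual schema).

  fn machine_key(genesis: [u8; 32], key: &[u8]) -> Vec<u8> {
    // Namespace the key under this machine's genesis hash.
    let mut full = Vec::with_capacity(32 + key.len());
    full.extend_from_slice(&genesis);
    full.extend_from_slice(key);
    full
  }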
Luke Parker
0d569ff7a3 cargo update
Resolves the current deny warning.
2024-03-07 23:23:48 -05:00
Luke Parker
480acfd430 Fix machete 2024-03-07 23:00:17 -05:00
Luke Parker
e266bc2e32 Stop validators from equivocating on reboot
Part of https://github.com/serai-dex/serai/issues/345.

The lack of full DB persistence does mean enough nodes rebooting at the same
time may cause a halt. This will prevent slashes.
2024-03-07 22:56:35 -05:00
Luke Parker
6c8a0bfda6 Limit docker logs to 300MB per container 2024-03-06 21:49:55 -05:00
Luke Parker
06c23368f2 Mitigate https://github.com/serai-dex/serai/issues/539 by making keystore deterministic 2024-03-06 21:37:40 -05:00
Luke Parker
5629c94b8b Reconcile the two copies of scalar_vector.rs in monero-serai 2024-03-02 17:15:16 -05:00
Luke Parker
b427f4b8ab Revert "Add bootnodes"
This reverts commit 1096ddb7ea.

This commit was intended for the testnet branch alone.
2024-02-26 10:47:47 -05:00
Luke Parker
1096ddb7ea Add bootnodes 2024-02-26 10:46:16 -05:00
Luke Parker
5487844b9e clippy && cargo update 2024-02-25 18:37:15 -05:00
akildemir
627e7e6210 Add validator set rotation test for the node side (#532)
* add node side unit test

* complete rotation test for all networks

* set up the fast-epoch docker file

* fix pr comments
2024-02-24 14:51:06 -05:00
Luke Parker
019b42c0e0 fmt/clippy fixes 2024-02-19 22:33:56 -05:00
Justin Berman
079fddbaa6 monero: only mask user features on new polyseed, not on decode (#503)
* monero: only mask user features on new polyseed, not on decode

- This commit ensures a polyseed string that has unsupported features correctly errors on decode (rather than panic in debug build or return an incorrect successful response in prod build)
- Also avoids panicking when checksum calculation is unexpectedly wrong

Polyseed reference impl for feature masking:
- polyseed_create: b7c35bb3c6/src/polyseed.c (L61)
- polyseed_decode: b7c35bb3c6/src/polyseed.c (L212)

* PR comments

* Make from_internal a member of Polyseed

* Add accidentally removed newline

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 22:03:02 -05:00
Justin Berman
92d8b91be9 Monero: fix decoy selection algo and add test for latest spendable (#384)
* Monero: fix decoy selection algo and add test for latest spendable

- DSA only selected coinbase outputs and didn't match the wallet2
implementation
- Added test to make sure DSA will select a decoy output from the
most recent unlocked block
- Made usage of "height" in DSA consistent with other usage of
"height" in Monero code (height == num blocks in chain)
- Rely on monerod RPC response for output's unlocked status

* xmr runner tests mine until outputs are unlocked

* fingerprintable canonical select decoys

* Separate fingerprintable canonical function

Makes it simpler for callers who are unconcerned with consistent
canonical output selection across multiple clients to rely on
the simpler Decoy::select and not worry about fingerprintable
canonical

* fix merge conflicts

* Put back TODO for issue #104

* Fix incorrect check on distribution len

The RingCT distribution on mainnet doesn't start until well after
genesis, so the distribution length is expected to be < height.

To be clear, this was my mistake from this series of changes
to the DSA. I noticed this mistake because the DSA would error
when running on mainnet.
2024-02-19 21:34:10 -05:00
Justin Berman
4f1f7984a6 monero: added tx extra variants padding and mysterious minergate (#510)
* monero: read/write tx extra padding

* monero: read/write tx extra mysterious minergate variant

* Clippy

* monero: add tx extra test for minergate + pub key

* BufRead

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 21:22:00 -05:00
Justin Berman
cda14ac8b9 monero: Use fee priority enums from monero repo CLI/RPC wallets (#499)
* monero: Use fee priority enums from monero repo CLI/RPC wallets

* Update processor for fee priority change

* Remove FeePriority::Default

Done in consultation with @j-berman.

The RPC/CLI/GUI almost always adjust up except barring very explicit commands,
hence why FeePriority 0 is now only exposed via the explicit command of
FeePriority::Custom { priority: 0 }.

Also helps with terminology.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-19 21:03:27 -05:00
Luke Parker
6f5d794f10 Median by Position (#533)
* use median price instead of the highest sustained

* add test for lexicographically reversing a byte slice

* fix pr comments

* fix CI fail

* fix dex tests

* Use a fuzz-tested list of prices

* Working median algorithm based on position + lints

---------

Co-authored-by: akildemir <aeg_asd@hotmail.com>
2024-02-19 20:50:04 -05:00
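As a rough illustration of a position-based median (not the pallet's actual implementation), assuming an already-sorted, non-empty window of observed prices:

  fn median(sorted_prices: &[u64]) -> u64 {
    // Index into the sorted window by position; for an even-length window this
    // takes the lower-middle element (the real tie-breaking rule may differ).
    sorted_prices[(sorted_prices.len() - 1) / 2]
  }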
j-berman
34b93b882c monero: scan all tx pub keys (not additional) for every tx
wallet2's behavior is explained more fully here:
https://github.com/UkoeHB/monero/issues/27
2024-02-19 20:48:37 -05:00
Justin Berman
0880453f82 monero: make dummy payment ID zeroes when it's included in a tx (#514)
* monero: make dummy payment ID zeroes when it's included in a tx

Also did some minor cleaning of InternalPayment::Change

* Lint

* Clarify comment
2024-02-19 20:45:50 -05:00
Justin Berman
ebdfc9afb4 monero: test xmr send that requires additional pub keys (#516)
* Test xmr send that requires additional pub keys

* Clippy
2024-02-19 20:18:31 -05:00
Luke Parker
f6409d08f3 Increase timeout in coordinator tests 2024-02-18 08:19:07 -05:00
Luke Parker
c41a8ac8f2 Revert "rocksdb 0.22 via a patch"
This reverts commit c05c511938.

rocksdb 0.22 does not work on Windows at this time.
2024-02-18 08:17:26 -05:00
akildemir
d88aa90ec2 support input encoded data for bitcoin network (#486)
* add input script check

* add test

* optimizations

* bug fix

* fix pr comments

* Test SegWit-encoded data using a single output (not two)

* Remove TODO used as a question, document origins when SegWit encoding

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-18 07:43:44 -05:00
Luke Parker
c05c511938 rocksdb 0.22 via a patch
Patch is removable once https://github.com/paritytech/parity-common/pull/828
is merged and released.
2024-02-18 05:32:04 -05:00
Justin Berman
df85c09435 monero: match monero's stricter check when decompressing points (#515)
* monero: match monero's stricter check when decompressing points

* Reverted type change for output key
2024-02-17 23:16:16 -05:00
Luke Parker
62a619a312 Have monerod be chown'd to monero:nogroup
On some Docker setups, the monero user doesn't have a monero group for some
reason. This handles that edge case.
2024-02-10 20:58:04 -05:00
Luke Parker
95b7460907 Use Debian instead of Alpine for monero on testnet 2024-02-10 20:57:55 -05:00
Luke Parker
95c3cfc52e Add restart policy to Docker containers 2024-02-09 08:43:33 -05:00
Luke Parker
f0694172ef Fix potential generation of invalid SignData in shim 2024-02-09 02:52:08 -05:00
Luke Parker
29633ada1b Rust 1.76 2024-02-09 02:51:24 -05:00
Luke Parker
337e54c672 Redo Dockerfile generation (#530)
Moves from concatenated Dockerfiles to pseudo-templated Dockerfiles via a dedicated Rust program.

Removes the unmaintained kubernetes, not because we shouldn't have/use it, but because it's unmaintained and needs to be reworked before it's present again.

Replaces the compose with the work in the new orchestrator binary which spawns everything as expected. While this arguably re-invents the wheel, it correctly manages secrets and handles the variadic Dockerfiles.

Also adds an unrelated patch for zstd and simplifies running services a bit by greater utilizing the existing infrastructure.

---

* Delete all Dockerfile fragments, add new orchestrator to generate Dockerfiles

Enables greater templating.

Also delete the unmaintained kubernetes folder *for now*. This should be
restored in the future.

* Use Dockerfiles from the orchestrator

* Ignore Dockerfiles in the git repo

* Remove CI job to check Dockerfiles are as expected now that they're no longer committed

* Remove old Dockerfiles from repo

* Use Debian for monero-wallet-rpc

* Remove replace_cmds for proper usage of entry-dev

Consolidates ports a bit.

Updates serai-docker-tests from "compose" to "build".

* Only write a new dockerfile if it's distinct

Preserves the updated time metadata.

* Update serai-docker-tests

* Correct the path Dockerfiles are built from

* Correct inclusion of orchestration folder in Docker builds

* Correct debug/release flagging in the cargo command

Apparently, --debug isn't an effective no-op but rather an error.

* Correct path used to run the Serai node within a Dockerfile

* Correct path in Monero Dockerfile

* Attempt storing monerod in /usr/bin

* Use sudo to move into /usr/bin in CI

* Correct 18.3.0 to 18.3.1

* Escape * with quotes

* Update deny.toml, ADD orchestration in runtime Dockerfile

* Add --detach to the Monero GH CI

* Diversify dockerfiles by network

* Fixes to network-diversified orchestration

* Bitcoin and Monero testnet scripts

* Permissions and tweaks

* Flatten scripts folders

* Add missing folder specification to Monero Dockerfile

* Have monero-wallet-rpc specify the monerod login

* Have the Docker CMD specify env variables inserted at time of Dockerfile generation

They're overridable with the global environment, as for tests. This enables
variable generation in orchestrator and output to productionized Docker files
without creating a life-long file within the Docker container.

* Don't add Dockerfiles into Docker containers now that they have secrets

Solely add the source code for them as needed to satisfy the workspace bounds.

* Download arm64 Monero on arm64

* Ensure constant host architecture when reproducibly building the wasm

Host architecture, for some reason, can affect the generated code despite the
target architecture always being foreign to the host architecture.

* Randomly generate infrastructure keys

* Have orchestrator generate a key, be able to create/start containers

* Ensure bash is used over sh

* Clean dated docs

* Change how quoting occurs

* Standardize to sh

* Have Docker test build the dev Dockerfiles

* Only key_gen once

* cargo update

Adds a patch for zstd and reconciles the breaking nightly change which just
occurred.

* Use a dedicated network for Serai

Also fixes SERAI_HOSTNAME passed to coordinator.

* Support providing a key over the env for the Serai node

* Enable and document running daemons for tests via serai-orchestrator

Has running containers under the dev network port forward the RPC ports.

* Use volumes for bitcoin/monero

* Use bitcoin's run.sh in GH CI

* Only use the volume for testnet (not dev)
2024-02-09 02:48:44 -05:00
akildemir
347d4cf413 Fix tendermint distinct precommit bug (#517)
* fix tendermint distinct precommit bug

* remove conflicting precommit error
2024-02-08 13:47:37 -05:00
Luke Parker
aaff74575f Remove unused brew packages on macOS (#531)
* Remove unused brew packages on macOS

* Remove reference to Docker in macOS CI

* Remove gems, explicitly test Intel and m1 macOS

* Allow gem to error since it still mostly runs
2024-02-05 23:53:57 -05:00
akildemir
ad0ecc5185 complete various todos in tributary (#520)
* complete various todos

* fix pr comments

* Document bounds on unique hashes in TransactionKind

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2024-02-05 03:50:55 -05:00
Luke Parker
af12cec3b9 cargo update
Resolves deny warning around unintended behavior change (without semver bump).
2024-02-04 03:50:48 -05:00
Luke Parker
89788be034 macOS clippy (#526)
* Specifically use bash as a shell to try and get rustup to work on Windows

* Use bash for the call to echo

* Add macOS clippy

* Debug why git diff failed

* Restore macos-latest to matrix

* Allow whitespace before the fact 0 lines were modified

* Add LC_ALL env variable to grep

* Replace usage of -P with -e
2024-02-01 21:31:02 -05:00
GitHub Actions
745075af6e Update nightly 2024-02-01 21:03:27 -05:00
Luke Parker
9b25d0dad7 Update to node 20 GitHub cache action 2024-02-01 20:52:17 -05:00
Luke Parker
2b76e41c9a Directly install protobuf-compiler without using an external action (#524)
* Directly install protobuf-compiler without using an external action

* Remove unused "github-token" input
2024-01-31 19:21:26 -05:00
Luke Parker
05219c3ce8 Windows Clippy (#525)
* Add windows clippy

* Adjust build-dependencies for Linux/Windows

* Specifically use bash as a shell to try and get rustup to work on Windows

* Use bash for the call to echo
2024-01-31 19:10:39 -05:00
Luke Parker
cc75b52a43 Don't allow constructing unusable serai_client::bitcoin::Address es 2024-01-31 17:54:43 -05:00
Luke Parker
4913873b10 Slash reports (#523)
* report_slashes plumbing in Substrate

Notably delays the SetRetired event until it provides a slash report or the set
after it becomes the set to report its slashes.

* Add dedicated AcceptedHandover event

* Add SlashReport TX to Tributary

* Create SlashReport TXs

* Handle SlashReport TXs

* Add logic to generate a SlashReport to the coordinator

* Route SlashReportSigner into the processor

* Finish routing the SlashReport signing/TX publication

* Add serai feature to processor's serai-client
2024-01-29 03:48:53 -05:00
rlking
0b8c7ade6e Add scripts to create monero wallet rpc container (#521)
* create Dockerfile for monero wallet rpc with dockerfiles.sh

* make monero wallet rpc docker accessible from outside

* connect wallet-rpc with monerod

* add generated Dockerfile for monero wallet rpc

* add monero wallet rpcs to docker profiles

* update getting started guide to refer to wallet rpc docker
2024-01-28 20:58:23 -05:00
Luke Parker
21262d41e6 Resolve latest clippy and a couple no longer needed fmt notes 2024-01-22 22:13:37 -05:00
Luke Parker
508f7eb23a cargo update
Pseudo-resolves shlex advisory (due to the deprecation of the vulnerable
functions, which hopefully should prevent their use). shlex is only used by
bindgen, a sufficiently trusted dependency.
2024-01-22 22:08:37 -05:00
Luke Parker
90df391170 cargo update
Resolves h2 disclosure (which shouldn't have affected us).
2024-01-19 11:44:49 -05:00
Luke Parker
9d3d47fc9f hyper-rustls 0.26 (#519)
* hyper-rustls 0.25

This isn't worth it until our dependencies update to rustls 0.22 as well.

* hyper-rustls 0.26 and hyper 1.0
2024-01-16 19:32:30 -05:00
akildemir
6691f16292 remove mach patch 2024-01-16 12:06:50 -05:00
Luke Parker
9c06cbccad Document immunity to https://github.com/paritytech/polkadot-sdk/issues/2947 now that I have permission to disclose it 2024-01-16 12:06:08 -05:00
Justin Berman
c507ab9fd6 monero: match varint decoding (#513)
* monero: match varint decoding

* Fix build and clippy
2024-01-11 03:15:11 -05:00
Luke Parker
3aa8007700 Add missing unwrap to processor's test fn 2024-01-06 01:01:19 -05:00
Luke Parker
1ba2d8d832 Make monero-serai Block::number not panic on invalid blocks 2024-01-06 00:03:14 -05:00
Boog900
e7b0ed3e7e Check miner tx has a miner input when deserializing. 2024-01-05 23:49:43 -05:00
Luke Parker
f3429ec1ef Inside publish (for a Serai transaction from the coordinator), use RetiredDb over latest session
Not only is this more performant, the definition of retired is no longer 'a newer
session is active'. It's now 'the session has posted a slash report or the stake
for that session has unlocked'.

Initial commit towards implementing SlashReports.
2024-01-05 23:40:15 -05:00
Luke Parker
1cff9b4264 Patch proc-macro-crate 2 to proc-macro-crate 3
Updates toml_edit to 0.21.
2024-01-05 23:40:15 -05:00
j-berman
3c5a82e915 monero: investigated TODO and can remove it
The behavior appears to match monero core. monero core isn't
throwing an exception in the linked code, it's returning
boost::none (and logging an error) which is the same functional
behavior as finding that the output does not belong to the user.
2024-01-05 12:18:10 -05:00
Boog900
93e85c5ce6 Monero: use only the first input ring length for RCT deserialization. (#504)
* Use only the first input ring length for all RCT input signatures.

This is what Monero does:
ac02af9286/src/ringct/rctTypes.h (L422)

https://github.com/monero-project/monero/blob/master/src/cryptonote_basic/cryptonote_basic.h#L308-L309

This isn't an issue for current transactions as from hf 12 Monero requires
all inputs to have the same number of decoys but for transactions before
that Monero would reject RCT txs with differing ring lengths. Monero would
deserialize each input's signature using the ring length of the first, so the
signatures for inputs other than the first would have a different
(wrong) number of elements for that input meaning the signature is invalid.

But as we are using the ring length of each input, which arguably is the
*correct* way, we would approve of transactions with inputs differing in
ring lengths.

* Check that there is more than one ring member for MLSAG signatures.

ac02af9286/src/ringct/rctSigs.cpp (L462)
2024-01-05 00:02:16 -05:00
Luke Parker
617ec604ee cargo update
Resolves the deny CI failure.
2024-01-04 01:46:26 -05:00
Justin Berman
265261d3ba monero: require seed lang when decoding seed (#502)
* monero: require seed lang when decoding seed

- Require the seed language when decoding a Classic|Polyseed seed string
	- As per https://github.com/monero-project/monero/issues/9089 and https://github.com/tevador/polyseed/issues/11
	- Fixes #478
	- Implementation note: I reused the `SeedType` enum and required it as a param to `Seed::from_string` because it seemed simplest, but perhaps there is a cleaner way to require the seed lang.
- Made sure the print statements from #487 print the seed as early as possible to help debug future issues
- A future PR could support deducing which languages a seed decodes to in order to support the UX @kayabaNerve suggested in https://github.com/monero-project/monero/issues/9089:
	- "Wallets can also try to abstract [language specification], by decoding with all languages, and only asking the user if/when multiple valid options show up ("Is this seed Spanish or Italian?")."

* Lint
2024-01-04 01:32:42 -05:00
Luke Parker
7eb388e546 PR to track down CI failures (#501)
* Use an extended timeout for DKGs specifically

* Add a log statement when message-queue connection fails

* Add a 60 second keep-alive to connections

* Use zalloc for processor/message-queue/coordinator

An additional layer which protects us against edge cases with Zeroizing
(objects which don't support it or don't miss it).

* Add further logs to message-queue

* Further increase re-attempt timeouts in CI

* Remove misplaced continue in message-queue client

Fixes observed CI failures.

* Revert "Further increase re-attempt timeouts in CI"

This reverts commit 3723530cf6.
2024-01-04 01:08:13 -05:00
Luke Parker
6c8040f723 Restore release for serai-node to obtain sane bootup times 2023-12-30 23:59:00 -05:00
Luke Parker
02776c54a8 Increase reattempt delays in the GH CI, which is extremely latent 2023-12-30 22:11:04 -05:00
Luke Parker
ec8dfd4639 Correct SignData serialization test from creating 256 signers of data
This overflowed the allowed u8 and caused a CI failure. The actual
code/assumption is fine.
2023-12-30 19:08:29 -05:00
Luke Parker
99e05e4e5e Add patches folder to runtime Dockerfile 2023-12-30 18:36:43 -05:00
Luke Parker
a72b547824 Add patches folder to Dockerfiles 2023-12-30 13:49:41 -05:00
Luke Parker
bad3d210ba rust 1.75 2023-12-30 03:26:32 -05:00
Luke Parker
8c676d98c5 Tweaks from cargo update and patches 2023-12-30 03:26:11 -05:00
Luke Parker
890b70212a Patch matches, mach 2023-12-30 02:52:05 -05:00
Luke Parker
9f7140c3db Patch is-terminal to the std-included IsTerminal 2023-12-30 02:48:26 -05:00
Luke Parker
8b26a85faa Add patches for directories-next/option-ext
The rationale is detailed in the root Cargo.toml.

While I don't personally mind MPL dependencies, even if I don't prefer them
(they're allowed in the deny.toml for a reason), I do mind the pointless scope
creep and wish to highlight how little we actually use from the crate by
re-defining it as the single function.

We could also fork directories-next, or directories, and remove the usage of
option-ext per https://github.com/dirs-dev/dirs-sys-rs/issues/24, yet that'd be
a much larger task than what was done here.

In the future, it may be beneficial to submit a PR to wasmtime replacing
directories-next with home, a cargo-team maintained library to get the home
directory and associated folders. An example migration can be found at
https://github.com/harryfei/which-rs/pull/80.
2023-12-30 02:44:33 -05:00
Luke Parker
24ea65eae9 cargo update 2023-12-30 02:36:51 -05:00
Luke Parker
fff8dcb827 Document usage of latest_decided in AuthorityDiscoveryApi 2023-12-23 21:28:50 -05:00
Luke Parker
2b23252b4c Add derive feature to Zeroize in crypto/ciphersuite
It was missing.
2023-12-23 02:13:32 -05:00
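For context, the derive feature is what gates the #[derive(Zeroize)] macro; a minimal sketch, assuming zeroize = { version = "1", features = ["derive"] } in Cargo.toml:

  use zeroize::Zeroize;

  #[derive(Zeroize)]
  struct SecretScalar([u8; 32]);

  fn main() {
    let mut secret = SecretScalar([0xff; 32]);
    // Overwrites the secret bytes with zeroes in place.
    secret.zeroize();
    assert_eq!(secret.0, [0u8; 32]);
  }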
Luke Parker
b493e3e31f Validator DHT (#494)
* Route validators for any active set through sc-authority-discovery

Additionally adds an RPC route to retrieve their P2P addresses.

* Have the coordinator get peers from substrate

* Have the RPC return one address, not up to 3

Prevents the coordinator from believing it has 3 peers when it has one.

* Add missing feature to serai-client

* Correct network argument in serai-client for p2p_validators call

* Add a test in serai-client to check DHT population with a much quicker failure than the coordinator tests

* Update to latest Substrate

Removes distinguishing BABE/AuthorityDiscovery keys which causes
sc_authority_discovery to populate as desired.

* Update to a properly tagged substrate commit

* Add all dialed to peers to GossipSub

* cargo fmt

* Reduce common code in serai-coordinator-tests with a more involved new_test

* Use a recursive async function to spawn `n` DockerTests with the necessary networking configuration

* Merge UNIQUE_ID and ONE_AT_A_TIME

* Tidy up the new recursive code in tests/coordinator

* Use a Mutex in CONTEXT to let it be set multiple times

* Make complimentary edits to full-stack tests

* Augment coordinator P2p connection logs

* Drop lock acquisitions before recursing

* Better scope lock acquisitions in full-stack, preventing a deadlock

* Ensure OUTER_OPS is reset across the test boundary

* Add cargo deny allowance for dockertest fork
2023-12-22 21:09:18 -05:00
Luke Parker
00774c29d7 Replace remaining direct uses of futures with futures_util
Slight downscope which helps combat the antipattern which is the futures glob
crate. While futures_util is still a large crate, it has better defaults and
is smaller by virtue of not pulling the executor.
2023-12-18 19:45:08 -05:00
Luke Parker
a4c82632fb Use pub(crate) for create_db items, not pub 2023-12-18 17:15:02 -05:00
Luke Parker
c8747e23c5 Remove offline participants from future DKG protocols so long as the threshold is met
Makes RemoveParticipantDueToDkg a voted-on event instead of a Provided.
This removes the requirement for offline parties to be able to fully validate
blame, yet unfortunately lets a dishonest supermajority have an honest node
label any arbitrary node as dishonest.

Corrects a variety of `.i(...)` calls which panicked when they shouldn't have.

Cleans up a couple no-longer-used storage values.
2023-12-18 17:14:51 -05:00
Luke Parker
008da698bc Correct the nightly version, again
Turns out 12-04 has 12-02 in its `--version`, and accordingly 12-04 was the
intended pin.
2023-12-17 05:22:06 -05:00
Luke Parker
c2fffb9887 Correct a couple years of accumulated typos 2023-12-17 02:06:51 -05:00
Luke Parker
9c3329abeb Bump nightly version by a single day to resolve a clippy error 2023-12-17 01:51:52 -05:00
Luke Parker
065d314e2a Further expand clippy workspace lints
Achieves a notable amount of reduced async and clones.
2023-12-17 00:04:49 -05:00
Luke Parker
ea3af28139 Add workspace lints 2023-12-17 00:04:47 -05:00
akildemir
c40ce00955 Slash bad validators (#468)
* implement general design

* add slashing

* bug fixes

* fix pr comments

* misc fixes

* fix grandpa abi call type

* Correct rebase artifacts I introduced

* Cleanups and corrections

1) Uses vec![] for the OpaqueKeyProof as there's no value to passing it around
2) Remove usage of Babe/Grandpa Offences for tracking if an offence is known
   when checking if we can slash. If we can slash, no prior offence must have been
   known.
3) Rename DisabledIndices to SeraiDisabledIndices, drop historical data for
   current session only.
4) Doesn't remove from the pre-declared upcoming Serai set upon slash due to
   breaking light clients.
5) Into/From instead of AsRef for KeyOwnerProofSystem's generic to ensure
   safety of the conversion.

* Correct deduction from TotalAllocatedStake on slash

It should only be done if in set and only with allocations contributing to
TotalAllocatedStake (Allocation + latest session's PendingDeallocation).

* Changes meant for prior commit

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-16 17:44:08 -05:00
Luke Parker
74a68c6f68 cargo update
Resolves vulnerably in zerocopy.
2023-12-15 00:54:39 -05:00
Luke Parker
2532423d42 Remove the RemoveParticipant protocol for having new DKGs specify the participants which were removed
Obvious code cleanup is obvious.
2023-12-14 23:51:57 -05:00
Luke Parker
b60e3c2524 Replace PSTTrait and PstTxType with PublishSeraiTransaction 2023-12-14 16:06:08 -05:00
Luke Parker
77edd00725 Handle the combination of DKG removals with re-attempts
With a DKG removal comes a reduction in the number of participants, which was
ignored by re-attempts.

Now, we determine n/i based on the parties removed, and deterministically
obtain the context of who was removed.
2023-12-13 14:03:07 -05:00
hinto.janai
884b6a6fec seed: print seed info in tests 2023-12-13 10:18:34 -05:00
Luke Parker
6a172825aa Reattempts (#483)
* Schedule re-attempts and add a (not filled out) match statement to actually execute them

A comment explains the methodology. To copy it here:

"""
This is because we *always* re-attempt any protocol which had participation. That doesn't
mean we *should* re-attempt this protocol.

The alternatives were:
1) Note on-chain we completed a protocol, halting re-attempts upon 34%.
2) Vote on-chain to re-attempt a protocol.

This schema doesn't have any additional messages upon the success case (whereas
alternative #1 does) and doesn't have overhead (as alternative #2 does, sending votes and
then preprocesses. This only sends preprocesses).
"""

Any signing protocol which reaches sufficient participation will be
re-attempted until it no longer does.

* Have the Substrate scanner track DKG removals/completions for the Tributary code

* Don't keep trying to publish a participant removal if we've already set keys

* Pad out the re-attempt match a bit more

* Have CosignEvaluator reload from the DB

* Correctly schedule cosign re-attempts

* Actually spawn new DKG removal attempts

* Use u32 for Batch ID in SubstrateSignableId, finish Batch re-attempt routing

The batch ID was an opaque [u8; 5] which also included the network, yet that's
redundant and unhelpful.

* Clarify a pair of TODOs in the coordinator

* Remove old TODO

* Final comment cleanup

* Correct usage of TARGET_BLOCK_TIME in reattempt scheduler

It's in ms and I assumed it was in s.

* Have coordinator tests drop BatchReattempts which aren't relevant yet may exist

* Bug fix and pointless oddity removal

We scheduled a re-attempt upon receiving 2/3rds of preprocesses and upon
receiving 2/3rds of shares, so any signing protocol could cause two re-attempts
(not one more).

The coordinator tests randomly generated the Batch ID since it was prior an
opaque byte array. While that didn't break the test, it was pointless and did
make the already-succeeded check before re-attempting impossible to hit.

* Add log statements, correct dead-lock in coordinator tests

* Increase pessimistic timeout on recv_message to compensate for tighter best-case timeouts

* Further bump timeout by a minute

AFAICT, GH failed by just a few seconds.

This also is worst-case in a single instance, making it fine to be decently long.

* Further further bump timeout due to lack of distinct error
2023-12-12 12:28:53 -05:00
Luke Parker
b297b79f07 Bitcoin 26.0
Also uses `uname -m` to decide what platform to download the binary for.
2023-12-12 09:56:30 -05:00
Luke Parker
3cf46338ee Have Bitcoin's send_raw_transaction considered succeeded if already sent 2023-12-12 01:05:44 -05:00
Luke Parker
2d1443eb8a Add machete CI job now that machete allows whitelisting false positives 2023-12-11 23:06:44 -05:00
Luke Parker
4de4c186b1 Remove redundant fields from dex-pallet, add cargo machete ignores 2023-12-11 07:47:23 -05:00
Luke Parker
11fdb6da1d Coordinator Cleanup (#481)
* Move logic for evaluating if a cosign should occur to its own file

Cleans it up and makes it more robust.

* Have expected_next_batch return an error instead of retrying

While convenient to offer an error-free implementation, it potentially caused
very long lived lock acquisitions in handle_processor_message.

* Unify and clean DkgConfirmer and DkgRemoval

Does so via adding a new file for the common code, SigningProtocol.

Modifies from_cache to return the preprocess with the machine, as there's no
reason not to. Also removes an unused Result around the type.

Clarifies the security around deterministic nonces, removing them for
saved-to-disk cached preprocesses. The cached preprocesses are encrypted as the
DB is not a proper secret store.

Moves arguments always present in the protocol from function arguments into the
struct itself.

Removes the horribly ugly code in DkgRemoval, fixing multiple issues present
with it which would cause it to fail on use.

* Set SeraiBlockNumber in cosign.rs as it's used by the cosigning protocol

* Remove unnecessary Clone from lambdas in coordinator

* Remove the EventDb from Tributary scanner

We used per-Transaction DB TXNs so on error, we don't have to rescan the entire
block, only the rest of it. We prevented scanning multiple transactions by
tracking which we already had.

This is over-engineered and not worth it.

* Implement borsh for HasEvents, removing the manual encoding

* Merge DkgConfirmer and DkgRemoval into signing_protocol.rs

Fixes a bug in DkgConfirmer which would cause it to improperly handle indexes
if any validator had multiple key shares.

* Strictly type DataSpecification's Label

* Correct threshold_i_map_to_keys_and_musig_i_map

It didn't include the participant's own index and accordingly was offset.

* Create TributaryBlockHandler

This struct contains all variables prior passed to handle_block and stops them
from being passed around again and again.

This also ensures fatal_slash is only called while handling a block, as needed
as it expects to operate under perfect consensus.

* Inline accumulate, store confirmation nonces with shares

Inlining accumulate makes sense due to the amount of data accumulate needs to
be passed.

Storing confirmation nonces with shares ensures that both are available or
neither. Previously, one could be present yet the other may not have been (requiring an
assert in runtime to ensure we didn't bungle it somehow).

* Create helper functions for handling DkgRemoval/SubstrateSign/Sign Tributary TXs

* Move Label into SignData

All of our transactions which use SignData end up with the same common usage
pattern for Label, justifying this.

Removes 3 transactions, explicitly de-duplicating their handlers.

* Remove CurrentlyCompletingKeyPair for the non-contextual DkgKeyPair

* Remove the manual read/write for TributarySpec for borsh

This struct doesn't have any optimizations gained from the manual impl. Using
borsh reduces our scope.

* Use temporary variables to further minimize LoC in tributary handler

* Remove usage of tuples for non-trivial Tributary transactions

* Remove serde from dkg

serde could be used to deserialize internally inconsistent objects which could
lead to panics or faults.

The BorshDeserialize derives have been replaced with a manual implementation
which won't produce inconsistent objects.

* Abstract Future generics using new trait definitions in coordinator

* Move published_signed_transaction to tributary/mod.rs to reduce the size of main.rs

* Split coordinator/src/tributary/mod.rs into spec.rs and transaction.rs
2023-12-10 20:21:44 -05:00
Luke Parker
6caf45ea1d Downscope usage of futures 2023-12-10 19:32:52 -05:00
hinto.janai
32bea92742 message-queue: remove (*) 2023-12-08 10:47:42 -05:00
Luke Parker
7122e0faf4 Cache the block's events within TemporalSerai
Event retrieval was prior:
- Retrieve all events in the block, which may be hundreds of KB
- Filter to just a few

Since it's frequent to want multiple sets of events, each filtered in their own
way, this caused the retrieval to happen multiple times. Now, it only will
happen once.

Also has the scoped clients take a reference, not an owned TemporalSerai.
2023-12-08 10:46:10 -05:00
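An illustrative-only sketch of the caching pattern described above (String stands in for the decoded event type): the full event list is fetched once and every filtered view is then served from the cache.

  struct BlockEvents {
    cache: Option<Vec<String>>,
  }

  impl BlockEvents {
    fn events(&mut self, fetch_from_node: impl FnOnce() -> Vec<String>) -> &[String] {
      if self.cache.is_none() {
        // Retrieve the potentially hundreds-of-KB event list only once.
        self.cache = Some(fetch_from_node());
      }
      self.cache.as_deref().unwrap()
    }
  }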
Justin Berman
397fca748f monero-serai: make it clear that not providing a change address is fingerprintable (#472)
* Make it clear not providing a change address is fingerprintable

When no change address is provided, all change is shunted to the
fee. This PR makes it clear to the caller that it is fingerprintable
when the caller does this.

* Review comments
2023-12-08 07:42:02 -05:00
David Bell
16b22dd105 Convert coordinator/substrate/db to use create_db macro (#436)
* chore: implement create_db for substrate (fix broken branch)

* Correct rebase artifacts

* chore: remove todo statement

* chore: rename BlockDb to NextBlock

* chore: return empty tuple instead of empty array for event storage

* Finish rebasing

* Minor tweaks to remove leftover variables

These may be rebase artifacts.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-08 05:12:16 -05:00
hinto.janai
a6947d6d21 bulletproofs: avoid mut 2023-12-08 04:30:22 -05:00
Luke Parker
5c047ebe74 Log the reason for yielding BlockError::Fatal to Tendermint from the Tributary 2023-12-07 09:30:25 -05:00
econsta
91a024e119 coordinator/src/db.rs db macro implementation (#431)
* coordinator/src/db.rs db macro implementation

* fixed fmt errors

* converted txn functions to get/set counterparts

* use take_signed_transaction function

* fix for two of the tests

* Misc tweaks

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-07 09:30:11 -05:00
Luke Parker
c511a54d18 Move serai-client off serai-runtime, MIT licensing it
Uses a full-fledged serai-abi to do so.

Removes use of UncheckedExtrinsic as a pointlessly (for us) length-prefixed
block with a more complicated signing algorithm than advantageous.

In the future, we should considering consolidating the various primitives
crates. I'm not convinced we benefit from one primitives crate per pallet.
2023-12-07 02:30:09 -05:00
Luke Parker
6416e0079b Add ABI crate
Call and Event are both from the pallets, which are AGPL licensed. Accordingly,
they make serai-client AGPL licensed when serai-client must end up MIT
licensed. This creates a MIT-licensed variant of Calls and Events such that
they can be used by serai-client, enabling transitioning it to MIT.

Relevant to https://github.com/serai-dex/serai/issues/337.
2023-12-06 09:56:43 -05:00
Luke Parker
7768ea90ad signals-primitives, plus various minor tweaks 2023-12-06 09:53:06 -05:00
Luke Parker
6e15bb2434 Correct the LICENSE files in serai-primitives and validator-sets-primitives
While properly tagged with the MIT license in Cargo.toml, they had AGPL files.
2023-12-06 07:10:36 -05:00
Luke Parker
d1d5ee6b3d Make InSet a double map
Reduces amount of code, allows removing the custom iter for PrefixIterator.
2023-12-06 05:39:00 -05:00
Luke Parker
82f7342372 cargo update
Updates off a yanked version of zerocopy, fixing the failing deny CI.

Bites the bullet on windows-sys 0.52. While I was hoping to update everything
at once, unfortunately tokio won't update until March (see
https://github.com/tokio-rs/mio/pull/1725). I don't want to withhold these
updates for that long.
2023-12-06 05:01:08 -05:00
Luke Parker
3a6c7ad796 Use TX IDs for Bitcoin Eventualities
They're a bit more binding, smaller, provided by the Rust bitcoin library,
sane, and we don't have to worry about malleability since all of our inputs are
SegWit.
2023-12-06 04:37:11 -05:00
Luke Parker
62fa31de07 If the pool has yet to start, insert a price of 0
The test failures were caused by not inserting any price, causing the first
price to immediately become the oraclized price. While that's not inherently
invalid, suggesting the tests should've been the ones updated, it opens an
exploit where whoever first adds liquidity has the opportunity to set a
ridiculous price and DoS the set. Not oraclizing until we have an entire
period, achieved by inserting 0s during the initial blocks, ensures an open
launch for such discovery.
2023-12-05 12:29:36 -05:00
Luke Parker
095ac50ba7 Correct div by 0 I introduced 2023-12-05 10:35:27 -05:00
Luke Parker
8cc0adf281 Don't allow immediate deallocations for active validators even if the key shares remain the same
There's an exploit where the prior set improperly mints coins, the new set
occurs (resetting the oracle), and they immediately deallocate 49.9% of their
coins (which is more than enough to achieve profitability).

Now, anyone in set must wait until after the next set completes to perform any
deallocation, enabling time to halt upon improper mints.
2023-12-05 09:36:41 -05:00
Luke Parker
91905284bf Have the CI check the lockfile isn't stale
Prevents a commit from passing review, only for the next commit to 'add' 100
new dependencies.
2023-12-05 09:13:48 -05:00
akildemir
4ebfae0b63 Ensure economic security on validator sets (#459)
* add price oracle

* tidy up

* add todo

* bug fixes

* fix pr comments

* Use spot price, tweak some formulas

Also cleans nits.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-12-05 08:52:50 -05:00
Luke Parker
746bf5c6ad Rebuild Dockerfiles 2023-12-05 04:51:06 -05:00
Luke Parker
6e9ce3ac4f Pin mimalloc to the commit hash for 2.1.2 2023-12-05 03:29:13 -05:00
Luke Parker
797ed49e7b DKG Removals (#467)
* Update ValidatorSets with a remove_participant call

* Add DkgRemoval, a sign machine for producing the relevant MuSig signatures

* Don't use position-dependent u8s, but rather Public, when removing validators from the DKG

* Add DkgRemovalPreprocess, DkgRemovalShares

Implementation is via a new publish_tributary_tx lambda.

This code is a copy-pasted mess which will need to be cleaned up.

* Only allow non-removed validators to vote for removals

Otherwise, there's a risk that the remaining validators fall below 67% of the
original set.

* Correct publish_serai_tx, which was previously publish_set_keys in practice
2023-12-04 07:04:44 -05:00
Luke Parker
99c6375605 fmt 2023-12-03 00:06:13 -05:00
Luke Parker
6e8a5f9cb1 cargo update, remove unneeded dependencies from the processor 2023-12-03 00:05:03 -05:00
Luke Parker
4446a369b1 Remove old TODO 2023-12-03 00:04:58 -05:00
Luke Parker
ce038972df Use as_slice instead of as_ref
I don't know why this didn't trigger for me. Potentially a difference in the
month of the nightly clippy?
2023-12-03 00:04:58 -05:00
Luke Parker
2f6fb93f87 Bridge the gap between the prior two commits 2023-12-03 00:04:58 -05:00
Luke Parker
1e6cb8044c Domain Separate the coordinator's tributary transaction hashes 2023-12-03 00:04:58 -05:00
Luke Parker
1ca66b846a Use multiple nonces in the Tributary 2023-12-03 00:04:58 -05:00
Luke Parker
c82d1283af Move the coordinator to expect multiple nonces
This mirrors how Provided TXs handle topics.

Now, instead of managing a global nonce stream, we can use items such as plan
IDs as topics.

This massively benefits re-attempts, as otherwise we'd need a NOP TX to clear unused
nonces.
2023-12-03 00:04:58 -05:00
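A hypothetical sketch of per-topic nonces (e.g. keyed by plan ID) replacing a single global nonce stream, which is the pattern this commit moves towards:

  use std::collections::HashMap;

  #[derive(Default)]
  struct Nonces(HashMap<[u8; 32], u32>);

  impl Nonces {
    // Allocate the next nonce for a given topic, starting each topic at 0.
    fn next(&mut self, topic: [u8; 32]) -> u32 {
      let nonce = self.0.entry(topic).or_insert(0);
      let allocated = *nonce;
      *nonce += 1;
      allocated
    }
  }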
Luke Parker
1c3e8af922 Use proper English in Monero RPC constructor 2023-12-02 00:46:34 -05:00
Boog900
28c2b61933 make Monero HTTP RPC timeout configurable. 2023-12-01 23:34:56 -05:00
GitHub Actions
cb1ef0d71f Update nightly 2023-12-01 00:24:51 -05:00
Luke Parker
b823413c9b Use parity-db in current Dockerfiles (#455)
* Use redb and in Dockerfiles

The motivation for redb was to remove the multiple rocksdb compile times from
CI.

* Correct feature flagging of coordinator and message-queue in Dockerfiles

* Correct message-queue DB type alias

* Use consistent table typing in redb

* Correct rebase artifacts

* Correct removal of binaries feature from message-queue

* Correct processor feature flagging

* Replace redb with parity-db

It still has much better compile times yet doesn't block when creating multiple
transactions. It also is actively maintained and doesn't grow our tree. The MPT
aspects are irrelevant.

* Correct stray Redb

* clippy warning

* Correct txn get
2023-11-30 04:22:37 -05:00
Luke Parker
d1122a6535 Fix no-std builds 2023-11-29 03:20:49 -05:00
Luke Parker
8040fedddf Have simple-request Response's borrow the Client to ensure it's not prematurely dropped 2023-11-29 01:16:18 -05:00
Luke Parker
51bb434239 Properly include the non-JSON HTTP result in Monero's RpcError
The CI for 695d1f0ecf actually errored with a
non-JSON response, hence the value in this.
2023-11-29 00:43:59 -05:00
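An illustrative sketch of carrying the raw body in the error (names are hypothetical, not monero-serai's actual RpcError definition), so a non-JSON response surfaces in logs instead of a generic parse failure:

  #[derive(Debug)]
  enum RpcError {
    InvalidJson(String), // the verbatim HTTP body which failed to parse
  }

  fn parse_response(body: &str) -> Result<serde_json::Value, RpcError> {
    serde_json::from_str(body).map_err(|_| RpcError::InvalidJson(body.to_string()))
  }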
Luke Parker
f0ff3a18d2 Use debug builds in our Dockerfiles to reduce CI times (#462)
* Use debug builds in our Dockerfiles to reduce CI times

Also enables only spawning the mdns service when debug in the coordinator.

* Correct underflow in processor

Previously undetected due to release builds not having bounds checks enabled.

* Restore Serai release due to CI/RPC failures caused by compiling it in debug mode

This is *probably* worth an issue filed upstream, if it can be tracked down.

* Correct failing debug asserts in Monero

These debug asserts assumed there was a change address to take the remainder.
If there's no change address, the remainder is shunted to the fee, causing the
fee to be distinct from the estimate.

We presumably need to modify monero-serai such that change: None isn't valid,
and users must use Change::Fingerprintable(None).
2023-11-29 00:24:37 -05:00
Luke Parker
695d1f0ecf Remove subxt (#460)
* Remove subxt

Removes ~20 crates from our Cargo.lock.

Removes downloading the metadata and enables removing the getMetadata RPC route
(relevant to #379).

Moves forward #337.

Done now due to distinctions in the subxt 0.32 API surface which make it
justifiable to not update.

* fmt, update due to deny triggering on a yanked crate

* Correct the handling of substrate_block_notifier now that it's ephemeral, not long-lived

* Correct URL in tests/coordinator from ws to http
2023-11-28 02:29:50 -05:00
Luke Parker
571195bfda Resolve #360 (#456)
* Remove NetworkId from processor-messages

Because intent binds to the sender/receiver, it's not needed for intent.

The processor knows what the network is.

The coordinator knows which to use because it's sending this message to the
processor for that network.

Also removes the unused zeroize.

* ProcessorMessage::Completed use Session instead of key

* Move SubstrateSignId to Session

* Finish replacing key with session
2023-11-26 12:14:23 -05:00
Luke Parker
b79cf8abde Move message-queue to a fully binary representation (#454)
* Move message-queue to a fully binary representation

Additionally adds a timeout to the message queue test.

* coordinator clippy

* Remove contention for the message-queue socket by using per-request sockets

* clippy
2023-11-26 11:22:18 -05:00
Luke Parker
c6c74684c9 Increase timeouts in processor tests
For some reason, these constantly failed for me while waiting for the key pair
to confirm. This adds a sleep during the mining process, to ensure blocks
actually have time between them, and mines several more blocks to handle the
median code recently added.
2023-11-25 04:09:07 -05:00
Luke Parker
de14687a0d Fix the processor's Monero time monotonicity
Monero doesn't assert the time increases with each block, solely that it
doesn't decrease. Now, the block number is added to the time to ensure it
increases.
2023-11-25 04:07:31 -05:00
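A minimal sketch of the described fix: since Monero only guarantees the block time is non-decreasing, adding the strictly increasing block number yields a strictly increasing value.

  fn monotonic_time(block_time: u64, block_number: u64) -> u64 {
    block_time.saturating_add(block_number)
  }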
Luke Parker
d60e007126 Add a binaries feature to the processor to reduce dependencies when used as a lib
processor isn't intended to be used as a library, yet serai-processor-tests
does pull it in as a lib. This caused serai-processor-tests to need to compile
rocksdb, which added multiple minutes to the compilation time.
2023-11-25 04:04:52 -05:00
Luke Parker
b296be8515 Replace bincode with borsh (#452)
* Add SignalsConfig to chain_spec

* Correct multiexp feature flagging for rand_core std

* Remove bincode for borsh

Replaces a non-canonical encoding with a canonical encoding which additionally
should be faster.

Also fixes an issue where we used bincode in transcripts where it cannot be
trusted.

This ended up fixing a myriad of other bugs observed, unfortunately.
Accordingly, it either has to be merged or the bug fixes from it must be ported
to a new PR.

* Make serde optional, minimize usage

* Make borsh an optional dependency of substrate/ crates

* Remove unused dependencies

* Use [u8; 64] where possible in the processor messages

* Correct borsh feature flagging
2023-11-25 04:01:11 -05:00
Luke Parker
6b2876351e Add file meant for prior commit 2023-11-24 21:41:59 -05:00
Luke Parker
0ea90d054d Enable the signals pallet to halt networks' publication of Batchs
Relevant to #394.

Prevents hand-over, as hand-over occurs via a `Batch` publication.

Expects a new protocol to restore functionality (after a retirement of the
current protocol).
2023-11-24 19:56:57 -05:00
Luke Parker
eb1d00aa55 Update TX format in coordinator test 2023-11-23 11:45:00 -05:00
Luke Parker
372149c2cc Various simplifications re: Serai transactions
Removes PairSigner in favor of using the pair directly.

Resets spec_version to 1.

Defines the extrinsic without its length prefix, only prefixing during publish.
2023-11-23 00:02:01 -05:00
Luke Parker
c6cf33e370 Install wasm toolchain before ADDing files to docker images
Enables caching the wasm toolchain.
2023-11-22 18:07:58 -05:00
Luke Parker
f58478ad87 Add hex as a dependency to serai-client 2023-11-22 18:06:10 -05:00
Luke Parker
88a1726399 Fixes for prior commit 2023-11-22 16:24:50 -05:00
Luke Parker
08e6669403 Replace substrate/client's use of Payload with usage of RuntimeCall
Gains explicit typing.
2023-11-22 11:23:04 -05:00
akildemir
fcfdadc791 Integrate session pallet into validator-sets pallet (#440)
* remove pallet-session

* Store key shares in InSet

* integrate grandpa to vs-pallet

* integrate pallet babe

* remove pallet-session & authority discovery from runtime

* update the grandpa pallet path

* cargo update grandpa

* cargo update substrate

* Misc tweaks

Sets validators for BABE/GRANDPA in chain_spec, per Akil's realization that was
the missing piece.

* fix pr comments

* bug fix & tidy up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-22 06:22:46 -05:00
Luke Parker
07c657306b Remove duplicated genesis presence in Tributary Scanner DB keys
This wasted 32 bytes per entry in the DB (ignoring de-duplication
possible by the DB layer).
2023-11-22 04:43:49 -05:00
David Bell
9df8c9476e Convert tributary to use create_db macro (#448)
* chore: convert tributary to use create_db macro

* chore: fix fmt

* chore: break long line
2023-11-22 04:17:51 -05:00
econsta
9ab2a2cfe0 processor/db.rs db macro implementation (#437)
* processor/db.rs db macro implementation

* ran clippy and fmt

* incorporated recommendations

* used empty tuple instead of [u8; 0]

* ran fmt
2023-11-22 03:58:26 -05:00
Luke Parker
a315040aee if-watch 3.2.0 2023-11-22 01:24:17 -05:00
Luke Parker
1822e31142 Grow the yamux buffers to exceed the maximum message size 2023-11-21 02:01:41 -05:00
Luke Parker
6efc313d76 Add/update msrv for common/*, crypto/*, coins/*, and substrate/*
This includes all published crates.
2023-11-21 01:19:40 -05:00
Luke Parker
4b85d4b03b Rename SubstrateSigner to BatchSigner 2023-11-20 22:21:52 -05:00
econsta
05d8c32be8 Multisig db (#444)
* implement db macro for processor/multisigs/db.rs

* ran fmt

* cargo +nightly fmt

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 21:40:54 -05:00
econsta
8634d90b6b implement db macro for processor/signer.rs (#438)
* implement db macro for processor/signer.rs

* Use ()

* CompletedDb -> CompletionsDb

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:45 -05:00
econsta
aa9daea6b0 implement db macro for processor/substrate_signer (#439)
* implement db macro for processor/substrate_signer

* Use ()

* Correct AttemptDb usage of ()

* () -> &()

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-20 03:03:35 -05:00
Luke Parker
942873f99d Non-intrusive cargo update 2023-11-20 02:31:47 -05:00
Luke Parker
7c9b581723 Remove dtolnay's rust-toolchain action (#442)
* Remove dtolnay's rust-toolchain action

I believe our rust-toolchain.toml handles its use case exactly.

I don't believe this'll work, as it'd require rustup to install a cargo stub
before any toolchain is installed, yet I want to confirm it doesn't.

* Place quotes around nightly toolchain version

* Put toolchain before options to resolve what appears to be a bug in rustup's help strings

* Add wasm32-unknown-unknown to clippy workflow
2023-11-20 02:31:22 -05:00
Luke Parker
b37a0db538 Move where the RISC-V toolchain is installed during the no-std workflow 2023-11-19 22:45:51 -05:00
Luke Parker
797604ad73 Replace usage of io::Error::new(io::ErrorKind::Other, with io::Error::other
Newly possible with Rust 1.74.
2023-11-19 18:31:37 -05:00
Luke Parker
05b975dff9 Rust 1.74
Adds a Rust toolchain file to be less disruptive to developers who don't keep
their toolchain synchronized (by now having rustup automatically synchronize).

Hopefully helps resolve how +nightly clippy may pass for the coordinator, yet
building would fail due to stable's (hopefully prior?) failure to model some
async functions re: Send/Sync.

Also adds rust-src as a component in preparation of
https://github.com/paritytech/polkadot-sdk/pull/2217
2023-11-19 17:47:46 -05:00
Luke Parker
74a8df4c7b Add a new primitive of a DB-backed channel
The coordinator already had one of these, albeit implemented much worse than
the one now properly introduced. It had to either be sending or receiving,
whereas the new one can do both at the same time.

This replaces said instance and enables pleasant patterns when implementing the
processor/coordinator.
2023-11-19 02:05:01 -05:00
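A minimal sketch of what such a DB-backed channel looks like, assuming a `HashMap` as a stand-in for the actual (transactional) database. Both the messages and the send/receive cursors are persisted, which is what lets a single instance send and receive at the same time. The names here are illustrative, not the repo's API.

```rust
use std::collections::HashMap;

struct DbChannel {
  db: HashMap<Vec<u8>, Vec<u8>>,
}

impl DbChannel {
  // Read a persisted u64 cursor, defaulting to 0 if it was never written
  fn counter(&self, name: &[u8]) -> u64 {
    self
      .db
      .get(name)
      .map(|bytes| u64::from_le_bytes(bytes.as_slice().try_into().unwrap()))
      .unwrap_or(0)
  }

  fn msg_key(i: u64) -> Vec<u8> {
    let mut key = b"msg ".to_vec();
    key.extend(i.to_le_bytes());
    key
  }

  // Sending persists the message under the next send index
  fn send(&mut self, msg: Vec<u8>) {
    let i = self.counter(b"send");
    self.db.insert(Self::msg_key(i), msg);
    self.db.insert(b"send".to_vec(), (i + 1).to_le_bytes().to_vec());
  }

  // Receiving takes the message at the recv index, if any, and advances the cursor
  fn recv(&mut self) -> Option<Vec<u8>> {
    let i = self.counter(b"recv");
    let msg = self.db.remove(&Self::msg_key(i))?;
    self.db.insert(b"recv".to_vec(), (i + 1).to_le_bytes().to_vec());
    Some(msg)
  }
}
```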
Luke Parker
bd14db9f76 Version pin foundry
Fixes #116 and #441.
2023-11-19 01:21:43 -05:00
Luke Parker
25c02c1311 Ban Serai from setting keys
They were already banned from publishing `Batch`s, yet the ability to set keys
enabled changing how certain functionality in the coordinator operated.
Removing this malleability ensures operation as expected.
2023-11-18 21:28:16 -05:00
Luke Parker
b018fc432c Add deletion to create_db 2023-11-18 20:56:58 -05:00
Luke Parker
6e4ecbc90c Support trailing commas in create_db 2023-11-18 20:54:37 -05:00
Luke Parker
be48dcc4a4 Use multiple LibP2P topics
We still need a peering protocol... Hopefully, we can read peers off of the
Substrate node's DHT.
2023-11-18 20:37:55 -05:00
Luke Parker
25066437da Attempt to resolve #434
Loosens the coordinator tests' checks on the validity of the cosigning
protocol, yet should actually properly model it.
2023-11-18 00:12:36 -05:00
Luke Parker
db49a63c2b Bump zeroize to appease cargo deny 2023-11-16 14:18:00 -05:00
Luke Parker
ee50f584aa Add saner log statements to the coordinator
Disables trace on every single P2P message.

Logs a short-form of each transaction.
2023-11-16 13:37:39 -05:00
Luke Parker
14f3f330db Remove deadlock in cosign_evaluator 2023-11-16 13:32:15 -05:00
Luke Parker
30a77d863f Include non-JSON response from Monero node in error 2023-11-15 22:57:37 -05:00
Luke Parker
a03a1edbff Remove needless transposition in coordinator 2023-11-15 22:49:58 -05:00
Luke Parker
c03afbe03e Downgrade RustCrypto packages which violate Rust's interpretation of semver 2023-11-15 21:24:52 -05:00
Luke Parker
af611a39bf Restore runtime feature to hyper 2023-11-15 20:40:24 -05:00
Luke Parker
0d080e6d34 Remove unused dependency from dex-pallet 2023-11-15 20:24:33 -05:00
Luke Parker
369af0fab5 #339 addendum 2023-11-15 20:23:19 -05:00
Luke Parker
d25e3d86a2 Make TLS an optional feature of simple-request
Removes 14 crates from the tree when compiling the message-queue client.

Also performs a non-intrusive cargo update.
2023-11-15 17:24:11 -05:00
Luke Parker
96f1d26f7a Add a cosigning protocol to ensure finalizations are unique (#433)
* Add a function to deterministically decide which Serai blocks should be co-signed

Has a 5 minute latency between co-signs, also used as the maximal latency
before a co-sign is started.

* Get all active tributaries we're in at a specific block

* Add and route CosignSubstrateBlock, a new provided TX

* Split queued cosigns per network

* Rename BatchSignId to SubstrateSignId

* Add SubstrateSignableId, a meta-type for either Batch or Block, and modularize around it

* Handle the CosignSubstrateBlock provided TX

* Revert substrate_signer.rs to develop (and patch to still work)

Due to SubstrateSigner moving when the prior multisig closes, yet cosigning
occurring with the most recent key, a single SubstrateSigner can be reused.
We could manage multiple SubstrateSigners, yet considering the much lower
specifications for cosigning, I'd rather treat it distinctly.

* Route cosigning through the processor

* Add note to rename SubstrateSigner post-PR

I don't want to do so now in order to preserve the diff's clarity.

* Implement cosign evaluation into the coordinator

* Get tests to compile

* Bug fixes, mark blocks without cosigners available as cosigned

* Correct the ID Batch preprocesses are saved under, add log statements

* Create a dedicated function to handle cosigns

* Correct the flow around Batch verification/queueing

Verifying `Batch`s could stall when a `Batch` was signed before its
predecessors/before the block it's contained in was cosigned (the latter being
inevitable as we can't sign a block containing a signed batch before signing
the batch).

Now, Batch verification happens on a distinct async task in order to not block
the handling of processor messages. This task is the sole caller of verify in
order to ensure last_verified_batch isn't unexpectedly mutated.

When the processor message handler needs to access it, or needs to queue a
Batch, it associates the DB TXN with a lock preventing the other task from
doing so.

This lock, as currently implemented, is a poor and inefficient design. It
should be modified to the pattern used for cosign management. Additionally, a
new primitive of a DB-backed channel may be immensely valuable.

Fixes a standing potential deadlock and a deadlock introduced with the
cosigning protocol.

* Working full-stack tests

After the last commit, this only required extending a timeout.

* Replace "co-sign" with "cosign" to make finding text easier

* Update the coordinator tests to support cosigning

* Inline prior_batch calculation to prevent panic on rotation

Noticed when doing a final review of the branch.
2023-11-15 16:57:21 -05:00
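A purely illustrative sketch of the kind of deterministic rule named in the first bullet (cosign notable blocks immediately, and otherwise bound the latency to five minutes); the coordinator's actual rule, inputs, and names differ.

```rust
const COSIGN_LATENCY_SECS: u64 = 5 * 60;

// Every node evaluates this against the same on-chain data, so all nodes agree
// on which blocks require a cosign without any further coordination.
fn should_cosign(block_has_notable_events: bool, block_time: u64, last_cosign_time: u64) -> bool {
  block_has_notable_events || (block_time >= (last_cosign_time + COSIGN_LATENCY_SECS))
}
```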
Luke Parker
79e4cce2f6 Explicitly set features in modular-frost 2023-11-13 05:31:47 -05:00
Luke Parker
0c341e3546 Fix no-std builds 2023-11-13 05:19:53 -05:00
Luke Parker
9f0790fb83 Remove RecommendedTranscript from DKG MuSig
Resolves #391.

Given this code already wasn't modular/composable, this should be overall
equivalent regarding functionality and security. It's much less opinionated
though and has fewer dependencies.
2023-11-13 05:11:40 -05:00
Luke Parker
bb8e034e68 Test historic start times in tendermint-machine
Closes https://github.com/serai-dex/serai/issues/342.

Under ideal network conditions, this is fine. While I won't claim ideal network
conditions will occur IRL, b0fcdd3367 has the
Tributary rebroadcast messages and should brute-force its way into a
functioning system.
2023-11-13 00:43:35 -05:00
Luke Parker
3f7bdaa64b Make the output distribution cache only available under a feature
Enables a mode with reduced memory usage *and* increased safety given current
unsafety of the cache.

Relevant to https://github.com/serai-dex/serai/issues/415.
2023-11-13 00:24:54 -05:00
Luke Parker
351436a258 Dockerfile Parts (#428)
* De-duplicate Dockerfiles by using a bash file to concatenate common parts

Resolves #375.

Dockerfiles are still committed to the repo to avoid a dependency on bash.

* Add a CI job to confirm the committed dockerfiles are the currently generated ones

* Create dedicated Dockerfiles per processor network

Ensures the compromising of network-specific dependencies doesn't lead to a
compromise of the build process for all processors.

* Dockerfile corrections

* Correct call to build processor Docker image in tests/processor
2023-11-12 23:55:15 -05:00
David Bell
c328e5ea68 Convert coordinator/tributary/nonce_decider to use create_db macro (#423)
* chore: convert nonce_deicer to use create_db macro

* Restore pub NonceDecider

* Remove extraneous comma

I forgot to run git commit --amend on the prior commit :/

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 12:04:34 -05:00
Boog900
995734c960 Monero: add more legacy verify functions (#383)
* Add v1 ring sig verifying

* allow calculating signature hash for v1 txs

* add unreduced scalar type with recovery

I have added this type for Borromean sigs. The ee field can be a normal
scalar, as in the verify function the ee field is checked against a reduced
scalar, meaning for it to verify as correct, ee must be reduced

* change block major/minor versions to u8

this matches Monero

I have also changed a couple varint functions to accept the `VarInt`
trait

* expose `serialize_hashable` on `Block`

* add back MLSAG verifying functions

I still need to revert the commit removing support for >1 input MLSAG FULL

This adds a new rct type to separate Full and simple rct

* add back support for multiple inputs for RCT FULL

* comment `non_adjacent_form` function

also added `#[allow(clippy::needless_range_loop)]` around a loop, as satisfying clippy without a rewrite would make the function worse.

* Improve Mlsag verifying API

* fix rebase errors

* revert the changes on `reserialize_chain`
plus other misc changes

* fix no-std

* Reduce the number of RPC calls needed for `get_block_by_number`.
This function was causing me problems: every now and then, a node would return a block with a different number than requested.

* change `serialize_hashable` to give the POW hashing blob.

Monero calculates the POW hash and the block hash using *slightly* different blobs :/

* make ring_signatures public and add length check when verifying.

* Misc improvements and bug fixes

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 10:18:18 -05:00
Luke Parker
54f1929078 Route blame between Processor and Coordinator (#427)
* Have processor report errors during the DKG to the coordinator

* Add RemoveParticipant, InvalidDkgShare to coordinator

* Route DKG blame around coordinator

* Allow public construction of AdditionalBlameMachine

Necessary for upcoming work on handling DKG blame in the processor and
coordinator.

Additionally fixes a publicly reachable panic when commitments parsed with one
ThresholdParams are used in a machine using another set of ThresholdParams.

Renames InvalidProofOfKnowledge to InvalidCommitments.

* Remove unused error from dleq

* Implement support for VerifyBlame in the processor

* Have coordinator send the processor share message relevant to Blame

* Remove desync between processors reporting InvalidShare and ones reporting GeneratedKeyPair

* Route blame on sign between processor and coordinator

Doesn't yet act on it in coordinator.

* Move txn usage as needed for stable Rust to build

* Correct InvalidDkgShare serialization
2023-11-12 07:24:41 -05:00
akildemir
d015ee96a3 Dex improvements (#422)
* remove dex traits&balance types

* remove liq tokens pallet in favor of coins-pallet instance

* fix tests & benchmarks

* remove liquidity tokens trait

* fix CI

* fix pr comments

* Slight renamings

* Add burn_with_instruction as a negative to LiquidityTokens CallFilter

* Remove use of One, Zero, Saturating taits in dex pallet

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-12 06:37:31 -05:00
Luke Parker
a43815f101 Restore Foundry to a test dependency via direct usage of solc 2023-11-12 04:34:45 -05:00
Luke Parker
7f1732c8c0 cargo update to snow 0.9.4 2023-11-12 00:40:32 -05:00
Luke Parker
ed2445390f Replace post-detection of if a Plan is forwarded by noting if it's from the scanner 2023-11-09 14:54:38 -05:00
Luke Parker
52a0c56016 Rename Network::address to Network::external_address
Improves clarity since we now have 4 addresses.
2023-11-09 14:31:46 -05:00
Luke Parker
42e8f2c8d8 Add OutputType::Forwarded to ensure a user's transfer in isn't misclassified
If a user transferred in without an InInstruction, and the amount exactly
matched a forwarded output, the user's output would fulfill the
forwarding. Then the forwarded output would come along, have no InInstruction,
and be refunded (to the prior multisig) when the user should've been refunded.

Adding this new address type resolves such concerns.
2023-11-09 14:24:13 -05:00
Luke Parker
b51204a4eb Replace usage of ethers-signers with 11 lines of ECDSA code 2023-11-09 13:22:43 -05:00
Luke Parker
ec51fa233a Document an accepted false positive 2023-11-09 12:41:15 -05:00
Luke Parker
ce4091695f Document a choice of variable name 2023-11-09 12:38:06 -05:00
Luke Parker
24919cfc54 Resolve race condition regarding when forwarded output is set
The higher-level scanner code in multisigs/mod.rs now creates a series of plans
with limited context. These include forwarding and refunding plans, moving all
handling of forwarding flags on the scanner's clock and therefore safe.

Also simplifies the refunding a decent bit.
2023-11-09 12:37:07 -05:00
Luke Parker
bf41009c5a Document critical race condition due to two distinct clocks operating over the same data 2023-11-09 08:41:22 -05:00
Luke Parker
e8e9e212df Move additional functions which retry until success into Network trait 2023-11-09 07:16:15 -05:00
Luke Parker
19187d2c30 Implement calculation of monotonic network times for Bitcoin and Monero 2023-11-09 07:02:52 -05:00
Luke Parker
43ae6794db Remove invalid TODOs from processor signers 2023-11-09 03:53:30 -05:00
Luke Parker
978134a9d1 Remove events from SubstrateSigner
Same vibes as prior commit.
2023-11-09 01:56:09 -05:00
Luke Parker
2eb155753a Remove the Signer events pseudo-channel for a returned message
Also replaces SignerEvent with usage of ProcessorMessage directly.
2023-11-09 01:26:30 -05:00
Luke Parker
7d72e224f0 Remove Output::amount and move Payment from Amount to Balance
This code is still largely designed around the idea a payment for a network is
fungible with any other, which isn't true. This starts moving past that.

Asserts are added to ensure the integrity of coin to the scheduler (which is
now per key per coin, not per key alone) and in Bitcoin/Monero prepare_send.
2023-11-08 23:33:25 -05:00
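The distinction being enforced, in rough strokes: an `Amount` alone says nothing about which coin it denominates, so payments carry the coin alongside the amount. A simplified sketch mirroring the spirit of the Serai primitives; the field and variant names here are approximations, not the exact in-tree types.

```rust
struct Amount(u64);

enum Coin {
  Bitcoin,
  Ether,
  Dai,
  Monero,
}

// A Balance binds an amount to the coin it denominates
struct Balance {
  coin: Coin,
  amount: Amount,
}

// A Payment now specifies a Balance, not just an Amount, so the scheduler
// (per key, per coin) can assert it never mixes coins within a plan
struct Payment {
  address: Vec<u8>,
  balance: Balance,
}
```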
Luke Parker
ffedba7a05 Update processor tests to refund logic 2023-11-08 21:59:11 -05:00
Luke Parker
06e627a562 Support refunds as possible for invalidly received outputs on Serai 2023-11-08 11:26:28 -05:00
Luke Parker
11f66c741d Remove ethers-middleware 2023-11-08 08:19:12 -05:00
Luke Parker
a0a2ef22e4 Remove ethers-solc
ethers-solc was used for a type (now manually specified) and to call out to
solc. Since Foundry was already a documented dependency, a call to it now
handles building.

Removing this single crate removes a total of 17 crates from our dependency
tree. While these may still be around due to Foundry, they at least may not
be.

Further work to remove the requirement on Foundry for solc alone would be
appreciated.
2023-11-08 06:25:35 -05:00
Luke Parker
5e290a29d9 Remove frame-benchmarking-cli
Not currently used, notably increases our dependency tree.

I wouldn't remove it if we planned to use it. From my understanding, all
benchmarking will be per pallet, voiding our need to have this for the node.
2023-11-08 05:59:56 -05:00
Luke Parker
a688350f44 Have processor's Network::new sleep until booted, not panic 2023-11-08 03:21:28 -05:00
Luke Parker
bc07e14b1e Remove async_recursion for a for loop 2023-11-07 23:07:26 -05:00
Luke Parker
e1c07d89e0 Retry RPC requests once on error
I don't like blindly retrying in the Monero library. The number of errors,
which weren't present with reqwest (well, the error rate was the same, yet due
to a distinct bug this code fixed), demands we do *something* though.

The trace log shows hyper is erroring with 0 bytes of the response read. My
guess is it's somehow a closed connection? A connection pool would detect this
and have created a new connection (as this does, except once finding out
there's an issue).

While we should be able to detect this with `ready()`, we do call ready and it
claims no error. We also can successfully write which makes this... a mess.
Hopefully, it either actually works as intended, yet it at least requires two
consecutive errors which should be much less frequent.
2023-11-07 22:55:29 -05:00
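The behavior described ("requires two consecutive errors") amounts to a single retry over a freshly built connection. A hedged sketch of that shape, with all names illustrative and no claim this matches the in-tree code:

```rust
// On the first error, assume a stale/closed connection, rebuild it, and try
// exactly once more; only two consecutive errors surface to the caller.
async fn request_with_single_retry<C, F, Fut, T, E>(
  connect: impl Fn() -> C,
  request: F,
) -> Result<T, E>
where
  F: Fn(C) -> Fut,
  Fut: std::future::Future<Output = Result<T, E>>,
{
  match request(connect()).await {
    Ok(res) => Ok(res),
    Err(_) => request(connect()).await,
  }
}
```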
Luke Parker
56fd11ab8d Use a single long-lived RPC connection when authenticated
The prior system spawned a new connection per request to enable parallelism,
yet kept hitting hyper::IncompleteMessages I couldn't track down. This
attempts to resolve those by a long-lived socket.

Halves the number of requests per authenticated RPC call, and accordingly is
likely still better overall.

I don't believe this is resolved yet but this is still worth pushing.
2023-11-07 17:42:19 -05:00
Luke Parker
c03fb6c71b Add dedicated BatchSignId 2023-11-06 20:06:36 -05:00
Luke Parker
96f94966b7 Restore accidentally deleted function 2023-11-06 18:37:18 -05:00
Luke Parker
b65ba17007 Fix accumulated bugs 2023-11-06 18:12:53 -05:00
Luke Parker
c9003874ad Remove ethers mono-crate
Reduces size of ethereum-serai and gives us clarity on what's used.

Next should be removing the ethers-provided signing code.
2023-11-06 17:30:50 -05:00
Luke Parker
205bec36e5 try_from -> from 2023-11-06 17:00:09 -05:00
Luke Parker
df8b455d54 Don't generate RuntimeCall::System
Completely unused yet would be permanently part of our protocol if left alone.
2023-11-06 16:59:30 -05:00
Luke Parker
84a0bcad51 Move monero-serai to simple-request
Deduplicates code across the entire repo, letting us make improvements in a
single place.
2023-11-06 11:45:33 -05:00
Luke Parker
b680bb532b Don't default to basic-auth if it's enabled, yet require it to be specified 2023-11-06 10:42:01 -05:00
Luke Parker
b9983bf133 Replace reqwest with simple-request
reqwest was replaced with hyper and hyper-rustls within monero-serai due to
reqwest *solely* offering a connection pool API. In the process, it was
demonstrated how quickly we can achieve equivalent functionality to reqwest for
our use cases with a fraction of the code.

This adds our own reqwest alternative to the tree, applying it to both
bitcoin-serai and message-queue. By doing so, bitcoin-serai decreases its tree
by 21 packages and the processor by 18. Cargo.lock decreases by 8 dependencies,
solely adding simple-request. Notably removed is openssl-sys and openssl.

One noted decrease functionality is the requirement on the system having
installed CA certificates. While we could fallback to the rustls certificates
if the system doesn't have any, that's blocked by
https://github.com/rustls/hyper-rustls/pulls/228.
2023-11-06 09:47:12 -05:00
Luke Parker
cddb44ae3f Bitcoin tweaks + cargo update
Removes bitcoin-serai's usage of sha2 for bitcoin-hashes. While sha2 is still
in play due to modular-frost (more specifically, due to ciphersuite), this
offers a bit more performance (assuming equivalency between sha2 and
bitcoin-hashes' impl) due to removing a static for a const.

Makes secp256k1 a dev dependency for bitcoin-serai. While secp256k1 is still
pulled in via bitcoin, it's hopefully slightly better to compile now and makes
usage of secp256k1 an implementation detail of bitcoin (letting it change it
freely).

Also offers slightly more efficient signing as we don't decode to a signature
just to re-encode for the transaction.

Removes a 20s sleep for a check every second, up to 20 times, for reduced test
times in the processor.
2023-11-06 07:38:36 -05:00
hinto.janai
bd3272a9f2 replace lazy_static! with once_cell::sync::Lazy 2023-11-06 05:31:46 -05:00
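The shape of that replacement, for reference (the actual static being converted is in-tree; `CACHE` here is just a placeholder name):

```rust
use std::{sync::Mutex, collections::HashMap};
use once_cell::sync::Lazy;

// lazy_static! required a macro and its own `static ref` syntax:
//
// lazy_static::lazy_static! {
//   static ref CACHE: Mutex<HashMap<String, Vec<u8>>> = Mutex::new(HashMap::new());
// }

// once_cell::sync::Lazy is a plain static with an initializer closure, run on first use
static CACHE: Lazy<Mutex<HashMap<String, Vec<u8>>>> = Lazy::new(|| Mutex::new(HashMap::new()));
```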
Luke Parker
de41be6e26 Slash on SignCompleted for unrecognized plan 2023-11-05 13:42:01 -05:00
Luke Parker
b8ac8e697b Add missing crate to tests, remove no longer present RUSTSEC ignore 2023-11-05 12:11:08 -05:00
akildemir
899a9604e1 Add Dex pallet (#407)
* Move pallet-asset-conversion

* update licensing

* initial integration

* Integrate Currency & Assets types

* integrate liquidity tokens

* fmt

* integrate dex pallet tests

* fmt

* compilation error fixes

* integrate dex benchmarks

* fmt

* cargo clippy

* replace all occurrences of "asset" with "coin"

* add the actual add liq/swap logic to in-instructions

* add client side & tests

* fix deny

* Lint and changes

- Renames InInstruction::AddLiquidity to InInstruction::SwapAndAddLiquidity
- Makes create_pool an internal function
- Makes dex-pallet exclusively create pools against a native coin
- Removes various fees
- Adds new crates to GH workflow

* Fix rebase artifacts

* Correct other rebase artifact

* Correct CI specification for liquidity-tokens

* Correct primitives' test to the standardized pallet account scheme

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 12:02:34 -05:00
David Bell
facb5817c4 Database Macro (#408)
* db_macro

* wip: converted processor/key_gen to use create_db macro

* wip: converted processor/key_gen to use create_db macro

* wip: formatting

* fix: added no_run to doc

* fix: documentation example had extra parenths

* fix: ignore doc test entirely

* Corrections from rebasing

* Misc lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 09:47:24 -05:00
akildemir
97fedf65d0 add reasons to slash evidence (#414)
* add reasons to slash evidence

* fix CI failing

* Remove unnecessary clones

.encode() takes &self

* InvalidVr to InvalidValidRound

* Unrelated to this PR: Clarify reasoning/potentials behind dropping evidence

* Clarify prevotes in SlashEvidence test

* Replace use of read_to_end

* Restore decode_signed_message

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-05 00:04:41 -04:00
Luke Parker
257323c1e5 log::debug all Monero RPC errors 2023-11-05 00:02:58 -04:00
Luke Parker
deecd77aec Don't drop the request sender before we finish reading the response
It *looks like* hyper will drop the connection once its request sender is
dropped, regardless of if the last request hasn't had its response completed.
This attempts to resolve some spurious connection errors.
2023-11-04 22:55:47 -04:00
Luke Parker
360b264a0f Remove unused dependencies 2023-11-04 19:26:38 -04:00
Luke Parker
e05b77d830 Support multiple key shares per validator (#416)
* Update the coordinator to give key shares based on weight, not based on existence

Participants are now identified by their starting index. While this compiles,
the following is unimplemented:

1) A conversion for DKG `i` values. It assumes the threshold `i` values used
will be identical for the MuSig signature used to confirm the DKG.
2) Expansion from compressed values to full values before forwarding to the
processor.

* Add a fn to the DkgConfirmer to convert `i` values as needed

Also removes TODOs regarding Serai ensuring validator key uniqueness +
validity. The current infra achieves both.

* Have the Tributary DB track participation by shares, not by count

* Prevent a node from obtaining 34% of the maximum amount of key shares

This is actually mainly intended to set a bound on message sizes in the
coordinator. Message sizes are amplified by the amount of key shares held, so
setting an upper bound on said amount lets it determine constants. While that
upper bound could be 150, that'd be unreasonable and increase the potential for
DoS attacks.

* Correct the mechanism to detect if sufficient accumulation has occurred

It used to check if the latest accumulation hit the required threshold. Now,
accumulations may jump past the required threshold. The required mechanism is
to check the threshold wasn't prior met and is now met.

* Finish updating the coordinator to handle a multiple key share per validator environment

* Adjust strategy re: preventing nonce reuse in DKG Confirmer

* Add TODOs regarding dropped transactions, add possible TODO fix

* Update tests/coordinator

This doesn't add new multi-key-share tests, it solely updates the existing
single key-share tests to compile and run, with the necessary fixes to the
coordinator.

* Update processor key_gen to handle generating multiple key shares at once

* Update SubstrateSigner

* Update signer, clippy

* Update processor tests

* Update processor docker tests
2023-11-04 19:26:13 -04:00
Luke Parker
5970a455d0 Replace crc dependency with our own crc implementation
It's ~30 lines to remove 2 crates in our tree.
2023-11-03 06:44:23 -04:00
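A bitwise CRC of roughly the size mentioned. This sketch uses the standard reflected CRC-32 (IEEE) polynomial purely for illustration; the width and polynomial actually adopted in-tree may differ.

```rust
fn crc32(data: &[u8]) -> u32 {
  let mut crc = 0xFFFF_FFFFu32;
  for byte in data {
    crc ^= u32::from(*byte);
    for _ in 0 .. 8 {
      // Shift a bit out; if it was set, fold the (reflected) polynomial back in
      let lsb_set = (crc & 1) == 1;
      crc >>= 1;
      if lsb_set {
        crc ^= 0xEDB8_8320;
      }
    }
  }
  !crc
}
```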
Luke Parker
4c9e3b085b Add a String to Monero ConnectionErrors debugging the issue
We're reaching this in CI so there must be some issue present.
2023-11-03 05:45:33 -04:00
github-actions[bot]
a2089c61fb November 2023 - Rust Nightly Update (#413)
* Update nightly

* Replace .get(0) with .first()

* allow new clippy lint

---------

Co-authored-by: GitHub Actions <>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-11-03 05:28:07 -04:00
Luke Parker
ae449535ff Correct no-std builds 2023-10-31 07:55:25 -04:00
Luke Parker
05dc474cb3 Correct std feature-flagging
If a crate has std set, it should enable std for all dependencies in order to
let them properly select which algorithms to use. Some crates fall back to
slower/worse algorithms on no-std.

Also more aggressively sets default-features = false, leading to a *10%*
reduction in the number of crates the coordinator builds.
2023-10-31 07:44:02 -04:00
Luke Parker
34bcb9eb01 bitcoin 0.31 2023-10-31 03:47:45 -04:00
Luke Parker
2958f196fc ed448 with generic-array 1.0 2023-10-29 09:07:14 -04:00
Luke Parker
92cebcd911 Add ca-certificates to processor Docker image 2023-10-28 04:06:00 -04:00
Luke Parker
a3278dfb31 Add ca-certificates as a dependency to the CI 2023-10-28 00:45:00 -04:00
Luke Parker
f02db19670 cargo update
Removes a pair of duplicated dependencies.
2023-10-27 23:14:54 -04:00
Luke Parker
652c878f54 Note slight malleability in batch verification 2023-10-27 23:08:31 -04:00
Luke Parker
da01de08c9 Restore logger init in processor tests 2023-10-27 23:08:06 -04:00
Luke Parker
052ef39a25 Replace reqwest with hyper in monero-serai
Ensures a connection pool isn't used behind-the-scenes, as necessitated by
authenticated connections.
2023-10-27 23:05:47 -04:00
Luke Parker
87fdc8ce35 Use Build Dependencies over Test Dependencies where we can 2023-10-26 15:15:29 -04:00
Luke Parker
86ff0ae71b No longer run processor tests again when testing against 0.17.3.2
Even though the intent was to test against 0.17.3.2, and a Monero 0.17.3.2 node
was running, the processor now uses docker which will always use 0.18.
Accordingly, while the intent was valid, it was pointless.

This is unfortunate, as testing against 0.17 helped protect against edge cases.
The infra to preserve their tests isn't worth the benefit we'd gain from said
tests however.
2023-10-26 14:29:51 -04:00
Luke Parker
3069138475 Fix handling of Monero daemon connections when using an authenticated RPC
The lack of locking the connection when making an authenticated request, which
is actually two sequential requests, risked another caller making a request in
between, invalidating the state.

Now, only unauthenticated connections share a connection object.
2023-10-26 12:45:39 -04:00
Luke Parker
f22aedc007 Add tweaks meant for prior commit 2023-10-24 04:34:47 -04:00
Luke Parker
7be20c451d cargo update, resolving two yanked crates (ahash, lalrpop) 2023-10-24 04:03:51 -04:00
Luke Parker
0198d4cc46 Add a TributaryState struct with higher-level DB logic 2023-10-24 03:55:13 -04:00
Luke Parker
7c10873cd5 Tweak how the Monero node is run for the processor tests
Disables the unused zmq RPC.

Removes authentication which seems to be unstable as hell when under load
(see #351).

No longer use Network::Isolated as it's not needed here (the Monero nodes run
with `--offline`).
2023-10-23 07:56:55 -04:00
Luke Parker
08180cc563 Resolve #405 2023-10-23 07:56:43 -04:00
Luke Parker
c4bdbdde11 dockertest 0.4 (#406)
* Updates to modern dockertest

* More updates to latest dockertest

* Update Cargo.lock to dockertest with handle restored

* clippy coordinator tests

* clippy full-stack tests

* Remove kayabaNerve branch for official repo's latest commit hash

* Update serai-client, remove reliance on the existence of a handle fn

* Don't use the hex encoding of unique_id in dockertests

Gets our hostnames just below 64 bytes, resolving test failures on at least
Debian-based systems.

* Use Network::Isolated for all dockertest instances

* Correct error from prior commit's edits
2023-10-23 06:59:38 -04:00
Luke Parker
0d23964762 Resolve #335 2023-10-23 05:10:13 -04:00
Luke Parker
fbf51e53ec Resolve #327
Also runs `cargo update` and moves where we install the wasm toolchain in the
Dockerfile for better caching properties.
2023-10-23 00:45:00 -04:00
Luke Parker
fd1826cca9 Implement a fee on every input to prevent prior described economic attacks
Completes #297.
2023-10-22 21:31:13 -04:00
Luke Parker
f561fa9ba1 Fix a bug which would attempt to create a transaction with N::MAX_INPUTS + 1 2023-10-22 19:15:52 -04:00
Luke Parker
f4fc539e14 Remove constants inlined into bitcoin-serai for bitcoin::policy-provided constants 2023-10-22 18:08:36 -04:00
Luke Parker
0fff5391a8 Improve the reasoning for why the Bitcoin DUST constant is set as it is
Also halves the minimum fee policy, which still may be 2x-4x higher than
necessary due to API limitations within bitcoin-serai (which we can fix as it's
within our scope).
2023-10-22 18:06:44 -04:00
Luke Parker
a71a789912 Monero median_fee fn 2023-10-22 17:43:21 -04:00
Luke Parker
83c41eccd4 Bitcoin Dust constant justification, median_fee fn 2023-10-22 07:03:33 -04:00
Luke Parker
e5113c333e Move LastBatchBlock, LastBatch sets by their checks 2023-10-22 05:49:19 -04:00
Luke Parker
6068978676 Use a transaction layer when executing each InInstruction 2023-10-22 05:46:03 -04:00
Luke Parker
e7e30150f0 Don't let the Serai set publish batches 2023-10-22 05:38:44 -04:00
Luke Parker
55fe27f41a Don't allow the Bitcoin set to mint sriETH 2023-10-22 05:37:23 -04:00
Luke Parker
d66a7ee43e Remove the staking pallet for validator-sets alone
The staking pallet is an indirection which offered no practical benefit yet
increased the overhead of every call.
2023-10-22 04:00:42 -04:00
Luke Parker
a702d65c3d validator-sets pallet event expansion 2023-10-22 03:28:42 -04:00
Luke Parker
52eb68677a pallet-staking event 2023-10-22 03:19:01 -04:00
Luke Parker
d29d19bdfe Ensure pallet-session is initialized after staking 2023-10-22 02:52:34 -04:00
Luke Parker
c3fdb9d9df Use a comprehensive call filter in the runtime 2023-10-21 21:35:14 -04:00
Luke Parker
46e1f85085 Remove unnecessary ENV flags from Bitcoin Dockerfile 2023-10-21 20:12:41 -04:00
Luke Parker
1bff2a0447 Add signals pallet
Resolves #353

Implements code such that:

- 80% of validators (by stake) must be in favor of a signal for the network to
  be considered in favor of it
- 80% of networks (by stake) must be in favor of a signal for it to be locked
  in
- After a signal has been locked in for two weeks, the network halts

The intention is to:

1) Not allow validators to unilaterally declare new consensus rules.

No method of declaring new consensus rules is provided by this pallet. Solely a
way to deprecate the current rules, with a signaled for successor. All nodes
must then individually decide whether or not to download and run a new node
which has new rules, and if so, which rules.

2) Not place blobs on chain.

Even if they'd be reproducible, it's just a lot of data to chuck on the
blockchain.
2023-10-21 20:06:55 -04:00
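A rough sketch of the thresholds described above, in integer math as it would be on-chain; the actual pallet's types, storage, and rounding live in the signals pallet and may differ.

```rust
// "80% of stake in favor", avoiding floats by cross-multiplying
// (a real pallet would widen to u128 to rule out overflow)
fn is_in_favor(favoring_stake: u64, total_stake: u64) -> bool {
  (favoring_stake * 10) >= (total_stake * 8)
}

// Once a signal has been locked in for two weeks, the network halts
fn should_halt(locked_in_at: u64, now: u64) -> bool {
  const TWO_WEEKS_SECS: u64 = 2 * 7 * 24 * 60 * 60;
  now >= (locked_in_at + TWO_WEEKS_SECS)
}
```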
Luke Parker
b66203ae3f Update Bitcoin Docker image to 25.1
Also decreases the Bitcoin dummy fee.
2023-10-20 18:52:43 -04:00
Luke Parker
43a182fc4c Reduce dummy fee used by Monero 2023-10-20 17:57:02 -04:00
Luke Parker
8ead7a2581 Add missing std feature flags to a couple Substrate dependencies 2023-10-20 17:53:07 -04:00
Luke Parker
3797679755 Track total allocated stake in validator-sets pallet 2023-10-20 16:58:44 -04:00
Luke Parker
c056b751fe Remove Fee from the Network API
The only benefit to having it would be the ability to cache it across
prepare_send, which can be done internally to the Network.
2023-10-20 16:12:28 -04:00
Luke Parker
5977121c48 Don't mutate Plans when signing
This is achieved by not using the Plan struct anymore, yet rather its
decomposition. While less ergonomic, it meets our wants re: safety.
2023-10-20 10:56:18 -04:00
Luke Parker
7b6181ecdb Remove Plan ID non-determinism leading Monero to have distinct TX fees
Monero would select decoys with a new RNG seed, which may have used more bytes,
increasing the fee.

There are a few comments here.

1) Non-determinism wasn't removed via distinguishing the edits. It was done by
   removing part of the transcript. A TODO exists to improve this.
2) Distinct TX fees are a test failure, not an issue in prod *unless* the distinct
   fee is greater. So long as the distinct fee is lesser, it's fine.
3) Removing outputs is expected to only decrease fees.
2023-10-20 08:11:42 -04:00
EmmanuelChthonic
f976bc86ac fix key fn not being called in getter 2023-10-20 07:34:19 -04:00
Luke Parker
441bf62e11 Simplify amortize_fee, correct scheduler's amortizing of branch fees 2023-10-20 05:40:16 -04:00
Luke Parker
4852dcaab7 Move common code from prepare_send into Network trait 2023-10-20 04:42:08 -04:00
Luke Parker
d6bc1c1ea3 Explicitly only adjust operating costs when plan.change.is_some()
The existing code should've mostly handled this fine. Only a single edge case
(TX fee reduction on no-change Plans) would cause an improper increase in
operating costs.
2023-10-19 23:16:04 -04:00
Luke Parker
7b2dec63ce Don't scan outputs which are dust, track dust change as operating costs
Fixes #299.
2023-10-19 08:02:10 -04:00
Luke Parker
d833254b84 Other clippy fixes 2023-10-19 08:01:15 -04:00
Luke Parker
a1b2bdf0a2 clippy fixes 2023-10-19 06:30:58 -04:00
Luke Parker
43841f95fc cargo fmt 2023-10-19 06:24:52 -04:00
akildemir
fdfce9e207 Coins pallet (#399)
* initial implementation

* add function to get a balance of an account

* add support for multiple coins

* rename pallet to "coins-pallet"

* replace balances, assets and tokens pallet with coins pallet in runtime

* add total supply info

* update client side for new Coins pallet

* handle fees

* bug fixes

* Update FeeAccount test

* Fmt

* fix pr comments

* remove extraneous Imbalance type

* Minor tweaks

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-19 06:22:21 -04:00
Luke Parker
3255c0ace5 Track and amortize operating costs to ensure solvency
Implements most of #297 to the point I'm fine closing it. The solution
implemented is distinct than originally designed, yet much simpler.

Since we have a fully-linear view of created transactions, we don't have to
per-output track operating costs incurred by that output. We can track it
across the entire Serai system, without hooking into the Eventuality system.

Also updates documentation.
2023-10-19 03:13:44 -04:00
Luke Parker
057c3b7cf1 libp2p 0.52.4
Adds several packages into tree, deprecates an API we use. This commit does
update accordingly.

While this may not be preferable, it is inevitable.
2023-10-19 00:27:21 -04:00
Luke Parker
a29c410caf cargo update parking_lot_core, adding redox_syscall 2023-10-19 00:04:13 -04:00
Luke Parker
a6f40978cb cargo update regex, adding regex-syntax 2023-10-19 00:02:55 -04:00
Luke Parker
bdf6afc144 cargo update time, adding dep powerfmt 2023-10-19 00:02:27 -04:00
Luke Parker
a362b0a451 Non-intrusive cargo update
Updates yanked wasm-opt, updates tracing due to use-after-free, updates
everything else which doesn't grow the tree.
2023-10-18 21:45:49 -04:00
Luke Parker
041ed46171 Correct panic possible when jumping to a round with Precommit(None) 2023-10-18 16:46:14 -04:00
Luke Parker
9accddb2d7 cargo update libp2p-identity
Finally removes curve25519-dalek 3.2.
2023-10-16 15:07:39 -04:00
Luke Parker
e06e4edc8e Modernize protocol documentation 2023-10-16 05:11:31 -04:00
Luke Parker
e4d59eeeca Remove hashbrown from validator-sets pallet 2023-10-16 01:47:15 -04:00
Luke Parker
1f5b1bb514 Update to lazy_static 1.5.0
Requires a git dependency due to
https://github.com/rust-lang-nursery/lazy-static.rs/issues/201.

Justified due to bumping the internally-used spin to 0.9 (containing a bug fix for
0.9 in general (https://github.com/rust-lang-nursery/lazy-static.rs/pull/199)),
surpassing 0.7 (offering performance
(https://github.com/rust-lang-nursery/lazy-static.rs/pull/182)).
2023-10-16 01:39:15 -04:00
Luke Parker
1a3b6005c6 Various cargo updates 2023-10-15 23:57:45 -04:00
Luke Parker
7dc1a24bce Move DkgConfirmer to its own file, document 2023-10-15 01:39:56 -04:00
Luke Parker
3483f7fa73 Call fatal_slash where easy and appropriate 2023-10-15 00:32:51 -04:00
Luke Parker
a300a1029a Load/save first_preprocess with RecognizedIdType
Enables their IDs to have conflicts across each other.
2023-10-14 21:58:10 -04:00
Luke Parker
7409d0b3cf Rename add_active_tributary for clarity 2023-10-14 21:53:38 -04:00
Luke Parker
19e90b28b0 Have Tributary's add_transaction return a proper error
Modifies main.rs to properly handle the returned error.
2023-10-14 21:50:11 -04:00
Luke Parker
584943d1e9 Modify SubstrateBlockAck as needed
Replaces plan IDs with key + ID, letting the coordinator determine the sessions
for the plans.

Properly scopes which plan IDs are set on which tributaries, and ensures we
have the necessary tributaries at time of handling.
2023-10-14 20:37:54 -04:00
Luke Parker
62e1d63f47 Abort the P2P meta task when dropped
This should cause full cleanup of all Tributary async tasks, since the machine
already cleans itself up on drop.
2023-10-14 20:08:51 -04:00
Luke Parker
e4adaa8947 Further tweaks re: retiry 2023-10-14 19:55:14 -04:00
Luke Parker
3b3fdd104b Most of coordinator Tributary retiry
Adds Event::SetRetired to validator-sets.

Emit TributaryRetired.

Replaces is_active_set, which made multiple network requests, with
is_retired_tributary, a DB read.

Performs most of the removals necessary upon TributaryRetired.

Still needs to clean up the actual Tributary/Tendermint tasks.
2023-10-14 16:47:25 -04:00
Luke Parker
5897efd7c7 Clean out create_new_tributary
It made sense when the task was in main.rs. Now that it isn't, it's a pointless
indirection.
2023-10-14 16:09:24 -04:00
Luke Parker
863a7842ca Have every node respond to Heartbeat so they don't download the messages over the net 2023-10-14 15:27:40 -04:00
Luke Parker
f414735be5 Redo new_tributary from being over ActiveTributary to TributaryEvent
TributaryEvent also allows broadcasting a retiry event.
2023-10-14 15:27:39 -04:00
Luke Parker
5c5c097da9 Tweaks for processor to work with the new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
7d4e8b59db Update dockertests to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
e3e9939eaf Tidy Serai use in coordinator to new API 2023-10-14 15:26:36 -04:00
Luke Parker
530fba51dd Update coordinator to new serai-client 2023-10-14 15:26:36 -04:00
Luke Parker
cb61c9052a Reorganize serai-client
Instead of functions taking a block hash, has a scope to a block hash before
functions can be called.

Separates functions by pallets.
2023-10-14 15:26:36 -04:00
Luke Parker
96cc5d0157 Remove a TODO re: an unhandled race condition 2023-10-14 00:41:07 -04:00
Luke Parker
7275a95907 Break handle_processor_messages out to handle_processor_message, move a helper fn to substrate 2023-10-13 23:36:07 -04:00
Luke Parker
80e5ca9328 Move heartbeat_tributaries and handle_p2p to p2p.rs 2023-10-13 22:40:11 -04:00
Luke Parker
67951c4971 Localize scan_substrate as substrate::scan_task 2023-10-13 22:31:54 -04:00
Luke Parker
4143fe9f47 Move scan_tributaries, shrinking coordinator's main.rs 2023-10-13 22:30:13 -04:00
Luke Parker
a73b19e2b8 Tweak coordinator test timing 2023-10-13 21:46:26 -04:00
Luke Parker
97c328e5fb Check tributaries are active before declaring them relevant 2023-10-13 21:46:17 -04:00
Luke Parker
96c397caa0 Add content-based deduplication to the tests' shimmed P2P
The tests have recently had their timing stilted, causing failures. The tests
are... fine. They're fragile, obviously, yet they're logical. The simplest fix
is to unstilt their timing rather than to make them non-fragile.

The recent change which presumably caused said stilting was the added
rebroadcasting. This de-duplication prevents most of the impact of
rebroadcasting. While there's still the async task, and the lock acquisition on
attempting to rebroadcast, this hopefully is enough.
2023-10-13 19:47:58 -04:00
akildemir
d5c6ed1a03 Improve provided handling (#381)
* fix typos

* remove tributary sleeping

* handle not locally provided txs

* use topic number instead of waiting list

* Clean-up, fixes

1) Uses a single TXN in provided
2) Doesn't continue on non-local provided inside verify_block, skipping further
   execution of checks
3) Upon local provision of already on-chain TX, compares

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-13 19:45:47 -04:00
Luke Parker
f6e8bc3352 Alternate handover batch TOCTOU fix (#397)
* Revert "Correct the prior documented TOCTOU"

This reverts commit d50fe87801.

* Correct the prior documented TOCTOU

d50fe87801 edited the challenge for the Batch to
fix it. This won't produce Batch n+1 until Batch n is successfully published
and verified. It's an alternative strategy able to be reviewed, with a much
smaller impact to scope.
2023-10-13 12:14:59 -04:00
Luke Parker
7d0d1dc382 Replace solc-select with svm-rs in CI and docs
svm-rs is already in tree as a library, so we may as well include it as a bin
instead of also pulling in solc-select.
2023-10-13 08:56:25 -04:00
Luke Parker
d50fe87801 Correct the prior documented TOCTOU
Now, if a malicious validator set publishes a malicious `Batch` at the last
moment, it'll cause all future `Batch`s signed by the next validator set to
require a bool being set (yet they never will set it).

This will prevent the handover.

The only overhead is having two distinct `batch_message` calls on-chain.
2023-10-13 04:41:01 -04:00
Luke Parker
e6aa9df428 Document TOCTOU allowing malicious validator set to trigger a handover to an honest set 2023-10-13 04:14:36 -04:00
Luke Parker
02edfd2935 Verify all Batchs published by the prior set
The new set publishing a `Batch` completes the handover protocol. The new set
should only publish a `Batch` once it believes the old set has completed all of
its on-external-chain activity, marking it honest and finite.

With the handover comes the acceptance of liability, hence the requirement for
all of the on-Serai-chain activity also needing verification. While most
activity would be verified in-real-time (upon ::Batch messages), the new set
will now explicitly verify the complete set of `Batch`s before beginning its
preprocess for its own `Batch` (the one accepting the handover).
2023-10-13 04:12:21 -04:00
Luke Parker
9aeece5bf6 Give one weight per key share to validators in Tributary 2023-10-13 02:29:22 -04:00
Luke Parker
bb84f7cf1d Correct ValidatorSets genesis 2023-10-13 01:42:26 -04:00
Luke Parker
bb25baf3bc Add logic to amortize excess key shares, correcting is_bft 2023-10-13 01:04:41 -04:00
Luke Parker
013a0cddfc MAX_VALIDATORS_PER_SET -> MAX_KEY_SHARES_PER_SET 2023-10-13 00:50:07 -04:00
Luke Parker
ed7300b406 Explicitly provide a pre_dispatch which calls validate_unsigned
pre_dispatch is guaranteed by documentation to be called and persisted.
validate_unsigned is not, though the provided pre_dispatch does by default call
validate_unsigned. By explicitly providing our own pre_dispatch, we accomplish
the bounds we require and expect, only being invalidated on Substrate
redefining their API.

We should still test this, yet since we call retire_session in
validate_unsigned, any test of rotation will test its being properly called.
2023-10-13 00:31:23 -04:00
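A hedged sketch of what explicitly providing a pre_dispatch which calls validate_unsigned looks like against sp_runtime's trait; the `Pallet`, `Call`, and validation body here are placeholders, not the pallet's real logic.

```rust
use sp_runtime::{
  traits::ValidateUnsigned,
  transaction_validity::{
    TransactionSource, TransactionValidity, TransactionValidityError, ValidTransaction,
  },
};

struct Pallet;

enum Call {
  RetireSession,
}

impl ValidateUnsigned for Pallet {
  type Call = Call;

  fn validate_unsigned(_source: TransactionSource, call: &Call) -> TransactionValidity {
    match call {
      // Placeholder for the pallet's actual checks
      Call::RetireSession => Ok(ValidTransaction::default()),
    }
  }

  // Explicitly provided, rather than relying on the trait's default, so the above
  // checks are guaranteed to run (and their effects persist) on dispatch
  fn pre_dispatch(call: &Call) -> Result<(), TransactionValidityError> {
    Self::validate_unsigned(TransactionSource::InBlock, call).map(|_| ())
  }
}
```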
Luke Parker
88b5efda99 cargo fmt 2023-10-13 00:12:10 -04:00
Luke Parker
0712e6f107 Localize stake into networks
Sets a stake requirement of 100k for Serai and Monero, as Serai doesn't have
stake requirements and Monero isn't expected to see as much
volume/institutional support as Bitcoin/Ethereum.
2023-10-13 00:04:30 -04:00
Luke Parker
6a4c57e86f Define an array of all NetworkIds in serai_primitives 2023-10-12 23:59:21 -04:00
Luke Parker
b7746aa71d Don't allow (de)allocations which remove fault tolerance 2023-10-12 23:47:00 -04:00
Luke Parker
8dd41ee798 Allow immediate deallocation if the decrease doesn't cross a key-share threshold 2023-10-12 23:06:20 -04:00
Luke Parker
9a1d10f4ea Error if deallocation would remove fault tolerance 2023-10-12 23:05:29 -04:00
Luke Parker
6587590986 Grab up to 150 key shares of validators, not 150 validators 2023-10-12 22:44:10 -04:00
Luke Parker
b0fcdd3367 Regularly rebroadcast consensus messages to ensure presence even if dropped from the P2P layer
Attempts to fix #342, #382.
2023-10-12 22:14:42 -04:00
Luke Parker
15edea1389 Use an inner task to spawn Tributarys to minimize latency 2023-10-12 21:55:25 -04:00
Luke Parker
1d9e2efc33 Don't unwrap result of call which makes network requests 2023-10-12 18:49:49 -04:00
Luke Parker
f25f5cd368 Add a sleep statement to Batch publication errors to prevent log flooding/node hammering 2023-10-12 18:39:46 -04:00
Luke Parker
f847ac7077 Update to if-watch 3.1.0
Has a delta of -4 packages in tree.

Offers a potential to no longer have two sets of windows in-tree once packages
using 0.48 update to 0.51.
2023-10-12 18:37:32 -04:00
Luke Parker
29fcf6be4d Support immediate deallocations for non-active validators 2023-10-12 00:51:18 -04:00
Luke Parker
108e2b57d9 Add claim_deallocation to the staking pallet 2023-10-12 00:26:35 -04:00
Luke Parker
3da5577950 Only allow deallocations after the next set after the validator's inclusion starts, plus a one session cooldown period
Part of #394.
2023-10-11 23:42:15 -04:00
Luke Parker
f692047b8b Rename validators to select_validators 2023-10-11 17:23:09 -04:00
Luke Parker
2401266374 Replace mutate with get + set
I'm legitimately unsure why mutate doesn't work. Reading the impls, it should...
2023-10-11 02:11:44 -04:00
Luke Parker
ed90d1752a Minimal CI Attempts (#393)
* Add quotes around globs

* Don't remove current kernel

* Remove nvm due to nvme, go -> golang
2023-10-11 02:11:34 -04:00
Luke Parker
3261fde853 Remove homebrew from packages to remove 2023-10-11 01:19:04 -04:00
Luke Parker
7492adc473 Attempt to further minimize size of GH CI 2023-10-11 01:17:50 -04:00
Luke Parker
04f9a1fa31 Correct handling of InSet's keys 2023-10-11 01:05:48 -04:00
Luke Parker
13cbc99149 Properly define the on-chain handover protocol
The new key publishing `Batch`s is more than sufficient.

Also uses the correct key to verify the published `Batch`s authenticity.
2023-10-10 23:55:59 -04:00
Luke Parker
1a0b4198ba Correct the check for if we still need to set keys
The prior check had an edge case where once keys were pruned, it'd believe the
keys needed to be set ad infinitum.
2023-10-10 22:53:15 -04:00
Luke Parker
22371a6585 Also reinstall python3-pip 2023-10-10 22:34:19 -04:00
Luke Parker
b2d6a85ac0 Still remove sqlite3 to cause the majority of uninstalls 2023-10-10 22:31:07 -04:00
Luke Parker
44ca5e6520 Manually reinstall python3 after removing most packages 2023-10-10 22:29:25 -04:00
Luke Parker
83b7146e1a Specify shell when removing unused packages 2023-10-10 22:21:03 -04:00
Luke Parker
f193b896c1 Update to monero 0.18.3.1 2023-10-10 21:34:22 -04:00
Luke Parker
985795e99d Remove unused packages as part of build dependencies
The reproducible runtime test failed due to running out of space. If we have
multiple tests failing due to running out of space, and all of our tests have
these packages unused, it makes sense to just always uninstall them.

Also extends the time limit of reproducible-runtime, as 2h has been hit a few
times before.
2023-10-10 21:09:45 -04:00
Luke Parker
9bf8c92325 Correct the coordinator tests
They assumed processor 0 had keys `i = 1`. Under the new validator-set code,
the first key is the one with the highest amount. In case of a tie, the key (or,
as of the last commit, a Blake hash) decides order.

This commit kludges in a mapping from processor index to assigned key index, no
longer assuming its value.
2023-10-10 17:12:47 -04:00
Luke Parker
ab5af57dae Fix a pair of bugs in SortedAllocations and add further documentation
We took with `amount`, not `prior`, allowing multiple presences in
SortedAllocations.

While spam is limited by the amount of nibbles in the amount, the key provides
a much larger space to abuse. Inserting a cryptographic hash prevents its use
for abuse.
2023-10-10 16:50:48 -04:00
akildemir
98190b7b83 Staking pallet (#373)
* initial staking pallet

* add staking pallet to runtime

* support session rotation for serai

* optimizations & cleaning

* fix deny

* add serai network to initial networks

* a few tweaks & comments

* fix some pr comments

* Rewrite validator-sets with logarithmic algorithms

Uses the fact the underlying DB is sorted to achieve sorting of potential
validators by stake.

Removes release of deallocated stake for now.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-10-10 06:53:24 -04:00
akildemir
2f45bba2d4 fix tendermint invalid commit (#392)
* fix conflicting commit msg signing vs verifying

* fmt
2023-10-10 06:32:04 -04:00
Luke Parker
30d0bad175 Add extra assert to coordinator 2023-10-09 23:38:39 -04:00
Steven Chang
b8abc1e3cc README.md: Add links to Reddit, Telegram and Website 2023-10-07 13:32:52 -04:00
Luke Parker
b2ed2e961c Correct rust version used in CI/orchestration 2023-10-05 18:24:21 -04:00
Luke Parker
9cdca1d3d6 Use the newly stabilized div_ceil
Sets a msrv of 1.73.0.
2023-10-05 14:28:03 -04:00
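The replaced idiom, for reference; the function name and parameters here are just an illustration of the pattern, not a specific in-tree call site.

```rust
// Before: manual rounding-up division
fn key_shares_old(allocation: u64, allocation_per_key_share: u64) -> u64 {
  (allocation + allocation_per_key_share - 1) / allocation_per_key_share
}

// After: div_ceil, stabilized in Rust 1.73 (hence the msrv bump)
fn key_shares_new(allocation: u64, allocation_per_key_share: u64) -> u64 {
  allocation.div_ceil(allocation_per_key_share)
}
```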
Luke Parker
4ee65ed243 Update nightly
Supersedes #387.
2023-10-03 01:34:15 -04:00
Luke Parker
aa59f53ead Correct the coordinator tests
They weren't updated with the past couple of commits.
2023-09-29 04:35:02 -04:00
Luke Parker
bd5491dfd5 Simplify Coordinator/Processors::send by accepting impl Into *Message 2023-09-29 04:19:59 -04:00
Luke Parker
0eff3d9453 Add Batch messages from processor, verify Batchs published on-chain
Renames Update to SignedBatch.

Checks Batch equality via a hash of the InInstructions. That prevents needing
to keep the Batch in node state or TX introspect.
2023-09-29 03:51:01 -04:00
Luke Parker
0be567ff69 Remove a misplaced copy of a README which has been around for who knows how long 2023-09-29 00:33:14 -04:00
Luke Parker
83b3a5c31c Document how receiving a Processor message does indeed make its Tributary relevant 2023-09-28 20:09:17 -04:00
Luke Parker
4d1212ec65 Update the processor's tests re: Batch SignId key
The `key` used to be empty. As part of implementing support for multisig rotation
into the coordinator, it was easiest to set the key to the substrate key. While
the coordinator and full stack tests were updated, the processor tests weren't.
This does that.
2023-09-27 23:32:29 -04:00
Luke Parker
7e27315207 Attempt to reduce full-stack CI disk usage 2023-09-27 21:00:32 -04:00
Luke Parker
aa1faefe33 Correct Message Queue log statements now that queues are per from-to pairs 2023-09-27 21:00:07 -04:00
Luke Parker
7d738a3677 Start moving Coordinator to a multi-Tributary model
Prior, we only supported a single Tributary per network, and spawned a task to
handled Processor messages per Tributary. Now, we handle Processor messages per
network, yet we still only supported a single Tributary in that handling
function.

Now, when we handle a message, we load the Tributary which is relevant. Once we
know it, we ensure we have it (preventing race conditions), and then proceed.

We do need work to check if we should have a Tributary, or if we're not
participating. We also need to check if a Tributary has been retired, meaning
we shouldn't handle any transactions related to them, and to clean up retired
Tributaries.
2023-09-27 20:49:02 -04:00
Luke Parker
4a32f22418 Use a proper transcript for Tributary scanner topics 2023-09-27 13:33:25 -04:00
Luke Parker
01a4b9e694 Remove unused_variables 2023-09-27 13:00:04 -04:00
Luke Parker
3b01d3039b Remove unused clippy lints from coordinator 2023-09-27 12:42:25 -04:00
Luke Parker
40b7bc59d0 Use dedicated Queues for each from-to pair
Prevents one Processor's message from halting the entire pipeline.
2023-09-27 12:20:57 -04:00
Luke Parker
269db1c4be Remove the "expected" next ID
It's an unnecessary extra layer better handled locally.
2023-09-27 11:13:55 -04:00
Luke Parker
90318d7214 Remove unnecessary TODO 2023-09-27 00:50:57 -04:00
Luke Parker
64d370ac11 Make publish_signed_transaction safe for out of order publications
This is a possibility under the new deterministic nonce scheme.

While there is a concern of us never creating a transaction with a nonce,
blocking everything, we should always create transactions. We'll always publish
preprocesses, and while we'll only publish shares if everyone else does, we
only allocate for shares once everyone else does.
2023-09-27 00:44:31 -04:00
Luke Parker
db8dc1e864 Spawn a task for Heartbeat responses, preventing it from holding up P2P handling 2023-09-27 00:10:37 -04:00
Luke Parker
086458d041 Txn for handling a processor message
handle_processor_messages function added to remove a very large block of nested
code.

MainDb cleaned to never be instantiated.
2023-09-27 00:00:31 -04:00
Luke Parker
2e0f8138e2 Update the coordinator to not handle a processor message multiple times 2023-09-26 23:28:05 -04:00
Luke Parker
32a9a33226 Adjust sync test timeout to resolve infrequent failure
This isn't an unacceptable timeout. It matches a prior timeout. I'm unsure why
it now needs to be extended though. My best guess is the test runtime is
single threaded and there's now new overhead in the task management (or perhaps
higher latency now that messages are serialized per-tributary).
2023-09-26 17:28:41 -04:00
Luke Parker
2508633de9 Add a next block notification system to Tributary
Also adds a loop missing from the prior commit.
2023-09-25 23:20:51 -04:00
Luke Parker
7312428a44 P2P task per Tributary, not per message 2023-09-25 22:58:40 -04:00
Luke Parker
e1801b57c9 Dedicated tasks per-Processor in coordinator
This isn't meaningful yet, as we still have serialized reading messages from
Processors, yet is a step closer.
2023-09-25 22:38:29 -04:00
Luke Parker
60491a091f Improve handling of tasks in coordinator, one per Tributary scanner 2023-09-25 20:33:14 -04:00
Luke Parker
9f3840d1cf Localize Tributary HashMaps, offering flexibility and removing contention 2023-09-25 19:28:53 -04:00
Luke Parker
7120bddc6f Move where we trigger DKGs for safety reasons 2023-09-25 18:27:16 -04:00
Luke Parker
77f7794452 Remove lazy_static for proper use of channels 2023-09-25 18:23:52 -04:00
Luke Parker
62a1a45135 Move where we create the readers to prevent only creating readers for present-at-boot tributaries
Also renames publish_transaction to publish_signed_transaction.
2023-09-25 18:07:29 -04:00
Luke Parker
0440e60645 Move heartbeat_tributaries from Tributary to TributaryReader
Reduces contention.
2023-09-25 17:15:38 -04:00
Luke Parker
4babf898d7 Implement deterministic nonces for Tributary transactions 2023-09-25 15:42:39 -04:00
Luke Parker
ca69f97fef Add support for multiple multisigs to the processor (#377)
* Design and document a multisig rotation flow

* Make Scanner::eventualities a HashMap so it's per-key

* Don't drop eventualities, always follow through on them

Technical improvements made along the way.

* Start creating an isolate object to manage multisigs, which doesn't require being a signer

Removes key from SubstrateBlock.

* Move Scanner/Scheduler under multisigs

* Move Batch construction into MultisigManager

* Clarify "should" in Multisig Rotation docs

* Add block_number to MultisigManager, as it controls the scanner

* Move sign_plans into MultisigManager

Removes ThresholdKeys from prepare_send.

* Make SubstrateMutable an alias for MultisigManager

* Rewrite Multisig Rotation

The prior scheme had an exploit possible where funds were sent to the old
multisig, then burnt on Serai to send from the new multisig, locking liquidity
for 6 hours. While a fee could be applied to stragglers, to make this attack
unprofitable, the newly described scheme avoids all this.

* Add mini

mini is a miniature version of Serai, emphasizing Serai's nature as a
collection of independent clocks. The intended use is to identify race
conditions and prove protocols are comprehensive regarding when certain clocks
tick.

This uses loom, a prior candidate for evaluating the processor/coordinator as
free of race conditions (#361).

* Use mini to prove a race condition in the current multisig rotation docs, and prove safety of alternatives

Technically, the prior commit had mini prove the race condition.

The docs currently say the activation block of the new multisig is the block
after the next Batch's. If the two next Batches had already entered the
mempool, prior to set_keys being called, the second next Batch would be
expected to contain the new key's data yet fail to as the key wasn't public
when the Batch was actually created.

The naive solution is to create a Batch, publish it, wait until it's included,
and only then scan the next block. This sets a bound of
`Batch publication time < block time`. Optimistically, we can publish a Batch
in 24s while our shortest block time is 2m. Accordingly, we should be fine with
the naive solution which doesn't take advantage of throughput. #333 may
significantly change latency however and require an algorithm whose throughput
exceeds the rate of blocks created.

In order to re-introduce parallelization, enabling throughput, we need to
define a safe range of blocks to scan without Serai ordering the first one.
mini demonstrates safety of scanning n blocks Serai hasn't acknowledged, so
long as the first is scanned before block n+1 is (shifting the n-block window).

The docs will be updated next, to reflect this. (A minimal sketch of this
scanning window follows this commit's entry.)

* Fix Multisig Rotation

I believe this is finally good enough to be final.

1) Fixes the race condition present in the prior document, as demonstrated by
mini.

`Batch`s for blocks `n` and `n+1` may have been in the mempool when a
multisig's activation block was set to `n`. This would cause a potentially
distinct `Batch` for `n+1`, despite `n+1` already having a signed `Batch`.

2) Tightens when UIs should use the new multisig, to prevent eclipse attacks,
and adds protection against `Batch` publication delays.

3) Removes liquidity fragmentation by tightening flow/handling of latency.

4) Several clarifications and documentation of reasoning.

5) Correction of "prior multisig" to "all prior multisigs" regarding historical
verification, with explanation why.

* Clarify terminology in mini

Synchronizes it from my original thoughts on potential schema to the design
actually created.

* Remove most of processor's README for a reference to docs/processor

This does drop some misc commentary, though none too beneficial. The section on
scanning, deemed sufficiently beneficial, has been moved to a document and
expanded on.

* Update scanner TODOs in line with new docs

* Correct documentation on Bitcoin::Block::time, and Block::time

* Make the scanner in MultisigManager no longer public

* Always send ConfirmKeyPair, regardless of if in-set

* Cargo.lock changes from a prior commit

* Add a policy document on defining a Canonical Chain

I accidentally committed a version of this with a few headers earlier, and this
is a proper version.

* Competent MultisigManager::new

* Update processor's comments

* Add mini to copied files

* Re-organize Scanner per multisig rotation document

* Add RUST_LOG trace targets to e2e tests

* Have the scanner wait once it gets too far ahead

Also bug fixes.

* Add activation blocks to the scanner

* Split received outputs into existing/new in MultisigManager

* Select the proper scheduler

* Schedule multisig activation as detailed in documentation

* Have the Coordinator assert if multiple `Batch`s occur within a block

While the processor used to have ack_up_to_block, enabling skips in the block
acked, support for this was removed while reworking it for multiple multisigs.
It should happen extremely infrequently.

While it would still be beneficial to have if multiple `Batch`s could occur
within a block, multiple `Batch`s were blocked for DoS reasons (with the
complexity here not being worth adding that ban as an explicit policy).

* Schedule payments to the proper multisig

* Correct >= to <

* Use the new multisig's key for change on schedule

* Don't report External TXs to prior multisig once deprecated

* Forward from the old multisig to the new one at all opportunities

* Move unfulfilled payments in queue from prior to new multisig

* Create MultisigsDb, splitting it out of MainDb

Drops the call to finish_signing from the Signer. While this will cause endless
re-attempts, the Signer will still consider them completed and drop them,
making this an O(n) cost at boot even if we did nothing from here.

The MultisigManager should call finish_signing once the Scanner completes the
Eventuality.

* Don't check Scanner-emitted completions, trust they are completions

Prevents needing to use async code to mark the completion and creates a
fault-free model. The current model, on fault, would cause a lack of marked
completion in the signer.

* Fix a possible panic in the processor

A shorter-chain reorg could cause this assert to trip. It's fixed by
de-duplicating the data, as the assertion checked consistency. Without the
potential for inconsistency, it's unnecessary.

* Document why an existing TODO isn't valid

* Change when we drop payments for being to the change address

The earlier timing prevents creating Plans solely to the branch address, which
would cause the payments to be dropped and the TX to become an effective
aggregation TX.

* Extensively document solutions to Eventualities being potentially created after having already scanned their resolutions

* When closing, drop External/Branch outputs which don't cause progress

* Properly decide if Change outputs should be forward or not when closing

This completes all code needed to make the old multisig have a finite lifetime.

* Commentary on forwarding schemes

* Provide a 1 block window, with liquidity fragmentation risks, due to latency

On Bitcoin, this will be 10 minutes for the relevant Batch to be confirmed. On
Monero, 2 minutes. On Ethereum, ~6 minutes.

Also updates the Multisig Rotation document with the new forwarding plan.

* Implement transaction forwarding from old multisig to new multisig

Identifies a fault where Branch outputs which shouldn't be dropped may be, if
another output fulfills their next step. Locking Branch fulfillment down to
only Branch outputs is not done in this commit, but will be in the next.

* Only let Branch outputs fulfill branches

* Update TODOs

* Move the location of handling signer events to avoid a race condition

* Avoid a deadlock by using a RwLock on a single txn instead of two txns

* Move Batch ID out of the Scanner

* Increase from one block of latency on new keys activation to two

For Monero, this offered just two minutes when our latency to publish a Batch
is around a minute already. This does increase the time our liquidity can be
fragmented by up to 20 minutes (Bitcoin), yet it's a stupid attack only
possible once a week (when we rotate). Prioritizing normal users' transactions
not being subject to forwarding is more important here.

Ideally, we'd not do +2 blocks, yet rather +`time` (such as +10 minutes),
making this agnostic of the underlying network's block scheduling. This is a
complexity not worth it.

* Split MultisigManager::substrate_block into multiple functions

* Further tweaks to substrate_block

* Acquire a lock on all Scanner operations after calling ack_block

Gives time to call register_eventuality and initiate signing.

* Merge sign_plans into substrate_block

Also ensure the Scanner's lock isn't prematurely released.

* Use a HashMap to pass to-be-forwarded instructions, not the DB

* Successfully determine in ClosingExisting

* Move from 2 blocks of latency when rotating to 10 minutes

Superior as noted in 6d07af92ce10cfd74c17eb3400368b0150eb36d7, now trivial to
implement thanks to the prior commit.

* Add note justifying measuring time in blocks when rotating

* Implement delaying of outputs received early to the new multisig per specification

* Documentation on why Branch outputs don't have the race condition concerns Change do

Also ensures 6 hours is at least N::CONFIRMATIONS, for sanity purposes.

* Remove TODO re: sanity checking Eventualities

We sanity check the Plan the Eventuality is derived from, and the Eventuality
is handled moments later (in the same file, with a clear call path). There's no
reason to add such APIs to Eventualities for a sanity check given that.

* Add TODO(now) for TODOs which must be done in this branch

Also deprecates a pair of TODOs to TODO2, and accepts the flow of the Signer
having the Eventuality.

* Correct errors in potential/future flow descriptions

* Accept having a single Plan Vec

Per the following code consuming it, there's no benefit to bifurcating it by
key.

* Only issue sign_transaction on boot for the proper signer

* Only set keys when participating in their construction

* Misc progress

Only send SubstrateBlockAck when we have a signer, as it's only used to tell
the Tributary of what Plans are being signed in response to this block.

Only immediately sets substrate_signer if session is 0.

On boot, doesn't panic if we don't have an active key (as we wouldn't if only
joining the next multisig). Continues.

* Correctly detect and set retirement block

Modifies the retirement block from the first block meeting requirements to the
block CONFIRMATIONS after.

Adds an ack flow to the Scanner's Confirmed event and Block event to accomplish
this, which may deadlock at this time (will be fixed shortly).

Removes an invalid await (after a point declared unsafe to use await) from
MultisigsManager::next_event.

* Remove deadlock in multisig_completed and document alternative

The alternative is simpler, albeit less efficient. There's no reason to adopt
it now, yet perhaps if it benefits modeling?

* Handle the final step of retirement, dropping the old key and setting new to existing

* Remove TODO about emitting a Block on every step

If we emit on NewAsChange, we lose the purpose of the NewAsChange period.

The only concern is if we reach ClosingExisting, and nothing has happened, then
all coins will still be in the old multisig until something finally does. This
isn't a problem worth solving, as it's latency under exceptional dead time.

* Add TODO about potentially not emitting a Block event for the retirement block

* Restore accidentally deleted CI file

* Pair of slight tweaks

* Add missing if statement

* Disable an assertion when testing

One of the test flows currently abuses the Scanner in a way that triggers it.
2023-09-25 09:48:15 -04:00
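
The most reusable idea in #377 above is the bounded scanning window ("Use mini
to prove a race condition..."): the scanner may run ahead of Serai's
acknowledgement by a fixed number of blocks, and must pause otherwise. A
minimal sketch, with `WINDOW`, `acknowledged_block`, and `next_to_scan` as
illustrative names rather than the processor's actual API:

```rust
// Illustrative only: the scanner may process up to WINDOW blocks past the last
// block Serai has acknowledged, and waits otherwise.
const WINDOW: u64 = 10;

struct Scanner {
  // Last external block number Serai has acknowledged (ordered on-chain).
  acknowledged_block: u64,
  // Next external block number the scanner wants to process.
  next_to_scan: u64,
}

impl Scanner {
  // Scanning next_to_scan is safe while it stays within the window.
  fn may_scan(&self) -> bool {
    self.next_to_scan <= self.acknowledged_block + WINDOW
  }

  fn scan_if_allowed(&mut self) {
    if self.may_scan() {
      // ... scan the block, create a Batch, etc.
      self.next_to_scan += 1;
    }
    // Otherwise, wait for Serai to acknowledge further blocks.
  }
}

fn main() {
  let mut scanner = Scanner { acknowledged_block: 100, next_to_scan: 101 };
  while scanner.may_scan() {
    scanner.scan_if_allowed();
  }
  // Parallelism up to the window, then a pause until acknowledgement advances.
  assert_eq!(scanner.next_to_scan, 111);
}
```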
Luke Parker
fe19e8246e cargo update
Updates past the yanked rustls-webpki, reduces tree size by one.
2023-09-24 08:31:13 -04:00
Luke Parker
c62d9b448f Use a Vec for the Monero generators, preventing its massive stack usage
The amount of stack usage did cause issues on m1 computers.
2023-09-20 04:31:16 -04:00
Luke Parker
98ab6acbd5 cargo update, removing 5 items from tree 2023-09-20 04:30:46 -04:00
Luke Parker
092f17932a Document requirement on rootless Docker 2023-09-19 12:59:04 -04:00
Luke Parker
e455332e01 Resolve #369 2023-09-19 12:55:30 -04:00
Luke Parker
a9468bf355 Pin CI from stable to 1.72.1
Enables better detection of regressions in Rust, a few of which 1.72.1 fixes.
2023-09-19 11:43:21 -04:00
Luke Parker
3d464c4736 apt update before install in workflow 2023-09-15 14:54:55 -04:00
Luke Parker
142552f024 Correct workflows with missing toolchain annotations 2023-09-15 14:36:47 -04:00
Luke Parker
e3a7ee4927 Pin to exact GH actions, preventing ACE in CI
Also updates actions.
2023-09-15 14:30:18 -04:00
Luke Parker
9eaaa7d2e8 Document H1's mismatch between the FROST preprint and IETF draft 2023-09-15 14:16:15 -04:00
Luke Parker
8adef705c3 Update wasmtime due to CVE-2023-41880 2023-09-15 14:06:39 -04:00
Luke Parker
3fd6d45b3e Use base58-monero 2, removing a git dependency 2023-09-15 13:59:29 -04:00
Luke Parker
d263413e36 Fixes for schnorrkel/dalek updates 2023-09-12 10:02:20 -04:00
Luke Parker
6f8a5d0ede Sane char_le_bits 2023-09-12 09:37:48 -04:00
Luke Parker
24bdd7ed9b Bump dalek-ff-group version
Prior commit fixed random, which could generate points outside of the prime
subgroup.
2023-09-12 09:00:42 -04:00
Luke Parker
aa724c06bc Start relying on curve25519-dalek's group feature
Removes git dependency for schnorrkel as well, now that schnorrkel has updated.
2023-09-12 08:56:30 -04:00
Luke Parker
1e6655408e cargo update
Bites the bullet on ethers 2.0.9 (now 2.0.10).
2023-09-12 07:47:03 -04:00
Luke Parker
9ab43407e1 Ignore NewSet events for Serai in the coordinator
The coordinator has nothing to do in this case.
2023-09-08 09:55:19 -04:00
Luke Parker
7ac0de3a8d Correct binding properties of Bitcoin eventuality
Eventualities need to be binding not just to a plan, yet to the execution of
the plan (the outputs). Bitcoin's Eventuality definition short-cutted this
under an honest multisig assumption, causing the following issue:

If multisig n+1 is verifying multisig n's actions, as detailed in
multi-multisig's document on multisig rotation, it'll check no outstanding
eventualities exist. If we solely bind to the plan, a malicious multisig n
could steal outbound payments yet cause the plan to be marked as successfully
completed.

By modifying the eventuality to also include the expected outputs, this is no
longer possible. Binding to the expected input is preserved in order to remain
binding to the plan (allowing two plans with the same output-set to co-exist).
2023-09-08 05:21:18 -04:00
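
As a rough illustration of the binding change (stand-in types, not
bitcoin-serai's actual definitions), the eventuality now commits to the
expected outputs alongside the expected input:

```rust
// Stand-in types; the real Eventuality in bitcoin-serai differs.
#[derive(Clone, PartialEq, Debug)]
struct ExpectedOutput {
  script_pubkey: Vec<u8>,
  value: u64,
}

struct Eventuality {
  // Binding to the plan, letting two plans with the same output set co-exist.
  expected_input: [u8; 36],
  // Binding to the execution of the plan, so a malicious multisig can't have a
  // differing payout marked as this plan's successful completion.
  expected_outputs: Vec<ExpectedOutput>,
}

impl Eventuality {
  fn matches(&self, spent_input: &[u8; 36], outputs: &[ExpectedOutput]) -> bool {
    (&self.expected_input == spent_input) &&
      (self.expected_outputs.as_slice() == outputs)
  }
}

fn main() {
  let eventuality = Eventuality {
    expected_input: [0; 36],
    expected_outputs: vec![ExpectedOutput { script_pubkey: vec![0x51], value: 10_000 }],
  };
  // A transaction spending the expected input yet paying elsewhere no longer matches.
  assert!(!eventuality.matches(&[0; 36], &[]));
}
```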
Luke Parker
06a6cd29b0 Set nodelay on coordinator's P2P sockets 2023-09-06 22:57:33 -04:00
Luke Parker
2472ec7ba8 Don't attempt parsing truncated InInstructions 2023-09-02 17:18:04 -04:00
Luke Parker
69c3fad7ce cargo fmt 2023-09-02 16:32:42 -04:00
Luke Parker
bd9a05feef Document UTXO solvency modeling 2023-09-02 16:11:01 -04:00
Luke Parker
7d8e08d5b4 Use scale instead of bincode throughout processor-messages/processor DB
scale is canonical, bincode is not.
2023-09-02 07:54:09 -04:00
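
A minimal round-trip with the parity-scale-codec crate (the `scale` referenced
above), purely to illustrate the canonical-encoding point; the message type
here is made up and the crate's `derive` feature is assumed:

```rust
use parity_scale_codec::{Decode, Encode};

// Hypothetical message type, not the processor's actual schema.
#[derive(Debug, PartialEq, Encode, Decode)]
struct ExampleMessage {
  id: u32,
  data: Vec<u8>,
}

fn main() {
  let msg = ExampleMessage { id: 7, data: vec![1, 2, 3] };
  // SCALE maps a value to exactly one byte string, which is what we want for
  // DB keys/values which get compared or hashed.
  let bytes = msg.encode();
  let decoded = ExampleMessage::decode(&mut bytes.as_slice()).unwrap();
  assert_eq!(msg, decoded);
}
```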
Luke Parker
f7e49e1f90 Update Rust nightly
Supersedes #368.

Adds exceptions for unwrap_or_default due to preference against Default's
ambiguity.
2023-09-02 01:24:09 -04:00
Luke Parker
cd4c3a6c88 Correct publication of Completed Tributary TXs 2023-09-02 00:50:54 -04:00
Luke Parker
fddc605c65 Define a proper Topic (Zone + ID)
Removes the inefficiency of recognized_topic for attempt returning None if the
topic isn't recognized.
2023-09-01 01:21:15 -04:00
Luke Parker
2ad6b38be9 Prefix root keys in coordinator with "coordinator" to prevent conflicts with tributary 2023-09-01 01:00:24 -04:00
Luke Parker
fda90e23c9 Reduce and clarify data flow in Tributary scanner 2023-09-01 00:59:10 -04:00
Luke Parker
3f3f6b2d0c Properly route attempt around DkgConfirmer 2023-09-01 00:16:43 -04:00
Luke Parker
fa8ff62b09 Remove sender_i from DkgShares
It was a piece of duplicated data used to achieve context-less
(de)serialization. This new Vec code is a bit trickier to read at first, yet
overall clean and removes a potential fault.

Saves 2 bytes from DkgShares messages.
2023-09-01 00:03:56 -04:00
Luke Parker
5113ab9ec4 Move SignCompleted to Unsigned to cause de-duplication amongst honest validators 2023-08-31 23:39:36 -04:00
Luke Parker
9b7cb688ed Have Batch contain Block and batch ID, ensuring eclipsed validators don't publish invalid shares
See prior commit message for more info.

With the plan for the batch sign ID to be just 5 bytes (potentially 4), this
does incur a +5 bytes cost compared to the ExternalBlock system *even in the
standard case*. The simplicity remains preferred at this time.
2023-08-31 23:04:39 -04:00
Luke Parker
9a5f8fc5dd Replace ExternalBlock with Batch
The initial TODO was simply to use one ExternalBlock for all batches in the
block. This would require publishing ExternalBlock after the last batch,
requiring knowing the last batch. While we could add such a pipeline, it'd
require:

1) Initial preprocesses using a distinct message from BatchPreprocess
2) An additional message sent after all BatchPreprocess are sent

Unfortunately, both would require tweaks to the SubstrateSigner which aren't
worth the complexity compared to the solution here, at least, not at this time.

While this will cause a Tributary signing a block whose total batch data
exceeds 25 kB to use multiple transactions, which could be optimized out by
'better' local data pipelining, that's an extreme edge case. Given the temporal
nature of each Tributary, it's also an acceptable edge.

This accordingly no longer achieves synchrony over external blocks. While
signed batches have synchrony, as they embed their block hash, batches being
signed don't have cryptographic synchrony on their contents. This means
validators who are eclipsed may produce invalid shares, as they sign a
different batch. This will be introduced in a follow-up commit.
2023-08-31 23:00:25 -04:00
Luke Parker
2dc35193c9 Handle batch n+1 being signed before batch n is 2023-08-31 22:09:34 -04:00
Luke Parker
9bf24480f4 Spawn an async test per P2P message to try and resolve latency issues 2023-08-31 02:35:50 -04:00
Luke Parker
3af9dc5d6f Tweak Heartbeat configuration so LibP2P can be expected to deliver messages within latency window 2023-08-31 01:33:52 -04:00
Luke Parker
148bc380fe Add assert on edge case requiring a validator with 34% and a broken invariant 2023-08-31 01:08:40 -04:00
Luke Parker
8dad62f300 Fix panic when post-verifying Precommits in log
Notes an edge case which enables invalid commit production.
2023-08-31 01:02:57 -04:00
Luke Parker
1e79de87e8 Remove contention between LibP2p spawned task and consumers via channels 2023-08-30 23:31:09 -04:00
Luke Parker
2f57a69cb6 Define BLOCK_PROCESSING_TIME, LATENCY_TIME in ms
Updates Tributary values to allow 999ms for block processing (from 2s) and
1667ms for latency (up from 1s).

The intent is to resolve #365. I don't know if this will, but it increases the
chances of success and these values should be fine in prod since Tributary is a
post-execution chain (making block processing time minimal).

Does embed the dagger of N::block_time() panicking if the block time in ms
doesn't cleanly divide by 1000.
2023-08-30 22:58:42 -04:00
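
A sketch of the "dagger" noted above, with illustrative constants and an
assumed combination (the actual tendermint-machine formula may differ): timing
is defined in milliseconds, yet a seconds-based accessor panics unless the
total lands on a whole second.

```rust
const BLOCK_PROCESSING_TIME: u64 = 999; // ms
const LATENCY_TIME: u64 = 1667; // ms

// However the machine combines these, the millisecond total must be a whole
// number of seconds. This particular combination is an assumption for the
// sake of the example.
fn block_time_in_ms() -> u64 {
  BLOCK_PROCESSING_TIME + (3 * LATENCY_TIME) // 999 + 5001 = 6000 ms
}

// Panics if the block time in ms doesn't cleanly divide by 1000.
fn block_time() -> u64 {
  let ms = block_time_in_ms();
  assert_eq!(ms % 1000, 0, "block time in ms must divide by 1000");
  ms / 1000
}

fn main() {
  println!("block time: {}s", block_time()); // 6s with these values
}
```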
Luke Parker
493a222421 Use a timeout in case the JSON-RPC notifications have unexpected behavior 2023-08-30 17:57:33 -04:00
Luke Parker
d5a19eca8c Add a notification system for finalizations to serai-client, use in coordinator 2023-08-30 17:25:04 -04:00
Luke Parker
e9fca37181 Intent based de-duplication in MessageQueue 2023-08-29 17:05:01 -04:00
Luke Parker
83c25eff03 Remove no longer necessary async from monero SignatableTransaction::sign 2023-08-29 16:20:21 -04:00
Luke Parker
285422f71a Add a full-stack mint and burn test for Bitcoin and Monero
Fixes where ram_scanned is updated in processor. The prior version, while safe,
would redo massive amounts of work during periods of inactivity. It also hit an
undocumented invariant where get_eventuality_completions assumes new blocks,
yet redone work wouldn't have new blocks.

Modifies Monero's generate_blocks to return the hashes of the generated blocks.
2023-08-28 21:17:22 -04:00
Luke Parker
1838c37ecf Full stack test framework 2023-08-27 18:37:12 -04:00
Luke Parker
a3649b2062 Move where we trigger KeyGen to avoid a race condition
We only expect processor messages when we have the relevant Tributary. We
queued Tributary creation, yet then kicked off processor messages. We need to
wait until the Tributary is actually created to kick off processor messages.
2023-08-27 18:23:45 -04:00
Luke Parker
1bd14163a0 Log all output to console when running in CI
Attempts to debug https://github.com/serai-dex/serai/actions/runs/5992819682/job/16252622433,
which seems to be a legitimate test failure.
2023-08-27 17:51:36 -04:00
Luke Parker
89a6ee9290 Silence warning when building in release 2023-08-27 15:39:09 -04:00
Luke Parker
a66994aade Use FCMP implementation of BP+ in monero-serai (#344)
* Add in an implementation of BP+ based off the paper, intended for clarity and review

This was done as part of my work on FCMPs from Monero, and is copied from https://github.com/kayabaNerve/full-chain-membership-proofs

* Remove crate structure of BP+

* Remove arithmetic circuit code

* Remove AC/VC generators code

* Remove generator transcript

Monero uses non-transcripted static generators.

* Further trimming of generators

* Remove the single range proof

It's unused by Monero and accordingly unhelpful.

* Work on getting BP+ to compile in its new env

* Correct BP+ folder name

* Further tweaks to get closer to compiling

* Remove the ScalarMatrix file

It's only used for AC proofs

* Compiles, with tests passing

* Lock BP+ to Ed25519 instead of the generic Ciphersuite

* Resolve most warnings in BP+

* Make existing bulletproofs test easier to read

* Further strip generators

* Swap G/H as Monero did

* Replace RangeCommitment with Commitment

* Hard-code BP+ h to Ed25519's generator

* Use pub(crate) for BP+, not pub

* Replace initial_transcript with hash_plus

* Rename hash_plus to initial_transcript

* Finish integrating the FCMP BP+ impl

* Move BP+ folder

* Correct no-std support

* Rename "long_n" to eta

* Add note on non-prime order dfg points
2023-08-27 15:33:17 -04:00
Luke Parker
34ffd2fa76 Run coordinator e2e tests one at a time due to usage of mdns causing cross-talk 2023-08-27 05:40:36 -04:00
Luke Parker
775353f8cd Add testing for Completed to coordinator e2e tests
Two commits ago stubbed in support, sufficient for an honest-validator
environment.
2023-08-27 05:34:04 -04:00
Luke Parker
c9b2490ab9 Tweak tributary_test to handle a one-block variance
Prior to the previous commit, whatever async scheduling occurred caused them to
all have the same tip. Now, some are one block ahead of others. This adds
tolerance for that, as it's an acceptable variance, so long as it's solely one
block.
2023-08-27 05:28:36 -04:00
Luke Parker
2db53d5434 Use &self for handle_message and sync_block in Tributary
They used &mut self to prevent execution at the same time. This uses a lock
over the channel to achieve the same security, without requiring a lock over
the entire tributary.

This fixes post-provided Provided transactions. sync_block waited for the TX to
be provided, yet it never would be, as sync_block held a mutable reference over the
entire Tributary, preventing any other read/write operations of any scope.

A timeout increased (bc2f23f72b) due to this bug
not being identified has been decreased back, thankfully.

Also shims in basic support for Completed, which was the WIP before this bug
was identified.
2023-08-27 05:07:11 -04:00
Luke Parker
6268bbd7c8 Replace "connected" with "external" in Processor doc 2023-08-27 01:57:17 -04:00
Luke Parker
bc2f23f72b Further increase GH timeout in response to https://github.com/serai-dex/serai/actions/runs/5988202109/job/16243154650
This should be egregiously long unless the GitHub CI is so underperformant it's
breaking Tendermint consensus's synchrony expectations, which would likely
point to our own code being unviable.

This solely serves as an immediate fix to the problem, not a justification of
the unevaluated performance.
2023-08-27 01:26:42 -04:00
Luke Parker
72337b17f5 Test the Sign flow across the Coordinators 2023-08-26 21:45:53 -04:00
Luke Parker
36b193992f Correct handling of known signers in batch test
Putting a signer first doesn't work because signers can only publish once a
supermajority has synced. Now, the code uses an excluded signer (instead of an
included signer) to determine the signing set.

Further simplifications are available. Also adds accurate documentation on
latency/sleep reasoning.
2023-08-26 21:38:58 -04:00
Luke Parker
37b6de9c0c Add SRI balance/transfer functions to serai_client 2023-08-26 21:37:54 -04:00
Luke Parker
22f3c9e58f Stop trying to publish a Batch if another node does 2023-08-26 21:37:21 -04:00
Luke Parker
9adefa4c2c Add code to handle a race condition around first_preprocess 2023-08-26 21:35:43 -04:00
Luke Parker
c245bcdc9b Extend Batch test with SubstrateBlock/arb Batch signing 2023-08-25 21:37:36 -04:00
Luke Parker
f249e20028 Route KeyPair so Tributary can construct SignId as needed 2023-08-25 18:37:22 -04:00
Luke Parker
3745f8b6af Increase GitHub CI timeouts for the coordinator tests 2023-08-25 14:09:44 -04:00
Luke Parker
ba46aa76f0 Update libp2p
Adds 17 new crates, which I'm extremely unhappy about. Unfortunately, it's
needed to resolve a security issue (RUSTSEC-2023-0052) and is inevitable.

Closes #355.
2023-08-25 00:01:58 -04:00
Luke Parker
8c1d8a2658 Only emit Preprocesses/Shares when participating 2023-08-24 23:50:19 -04:00
Luke Parker
2702384c70 Coordinator Batch signing test 2023-08-24 23:48:50 -04:00
Luke Parker
d3093c92dc Uncontroversial cargo update 2023-08-24 22:06:08 -04:00
Luke Parker
32df302cc4 Move recognized_id from a channel to an async lambda
Fixes a race condition. Also fixes recognizing batch IDs.
2023-08-24 21:55:59 -04:00
Luke Parker
ea8e26eca3 Use an empty key for Batch's SignId 2023-08-24 20:39:34 -04:00
Luke Parker
bccdabb53d Use a single Substrate signer, per intentions in #227
Removes key from Update as well, since it's no longer variable.
2023-08-24 20:30:50 -04:00
Luke Parker
b91bd44476 Support multiple batches per block by the coordinator
Also corrects an assumption that block hash == batch ID.
2023-08-24 19:13:18 -04:00
Luke Parker
dc2656a538 Don't bind to the entire batch, solely the network and ID
This avoids needing to know the Batch in advance, avoiding a race condition.
2023-08-24 18:52:33 -04:00
Luke Parker
67109c648c Use an actual, cryptographically-binding ID for batches in SignId
The intent system expected one.
2023-08-24 18:44:09 -04:00
Luke Parker
61418b4e9f Update Update and substrate_signers to [u8; 32] from Vec<u8>
A commit made while testing moved them from network-key-indexed to
Substrate-key-indexed. Since Substrate keys have a fixed length, fitting within
the Copy boundary, there's no reason for it to not use an array.
2023-08-24 13:24:56 -04:00
Luke Parker
506ded205a Misc cargo update
Removes 5 crates (presumably duplicated versions).
2023-08-23 13:24:21 -04:00
Luke Parker
9801f9f753 Update to latest Serai substrate (removes statement store) 2023-08-23 13:21:17 -04:00
Luke Parker
8a4ef3ddae jsonrpsee 0.16.3
Removes 3 crates from tree. Now RUSTSEC-2023-0053 is only held up by a lack of
libp2p-websocket release (https://github.com/libp2p/rust-libp2p/pull/4378).
2023-08-23 13:18:51 -04:00
Luke Parker
bfb5401336 Handle v1 Monero TXs spending v2 outputs
Also tightens read/fixes a few potential panics.
2023-08-22 23:26:59 -04:00
Luke Parker
f2872a2e07 Silence RUSTSEC tracked by https://github.com/serai-dex/serai/355
Since it isn't resolvable immediately, it's better to track it than effectively
neuter the deny job (as it would always error).
2023-08-22 22:15:13 -04:00
Luke Parker
c739f00d0b cargo update
1) Updates to time 0.3.27, which allows modern serde_derive's (post binary
fiasco)
2) Updates rustls-webpki to a version not affected by RUSTSEC-2023-0053
3) Updates wasmtime to 12
(d5923df083)
2023-08-22 22:06:06 -04:00
Luke Parker
310a09b5a4 clippy fix 2023-08-22 01:00:18 -04:00
Luke Parker
c65bb70741 Remove SlashVote per #349 2023-08-21 23:48:33 -04:00
Luke Parker
718c44d50e cargo update
One update replaces one package with another.

The Substrate pulls get ed25519-zebra (still in tree) to dalek 4.0. While noise
still pulls in 3.2, for now, this does let us drop one crate.
2023-08-21 22:58:45 -04:00
Luke Parker
1f45c2c6b5 cargo fmt 2023-08-21 10:40:10 -04:00
Luke Parker
76a30fd572 Support no-std builds of bitcoin-serai
Arguably not meaningful, as it adds the scanner yet not the RPC, and no signing
code since modular-frost doesn't support no-std yet. It's a step in the right
direction though.
2023-08-21 08:56:37 -04:00
Luke Parker
a52c86ad81 Don't label Monero nodes invalid for returning invalid keys in outputs
Only recently (I believe the most recent HF) were output keys checked to be
valid. This means returned keys may be invalid points, despite being the
legitimate keys for the specified outputs.

Does still label the node as invalid if it doesn't return 32 bytes,
hex-encoded.
2023-08-21 03:07:20 -04:00
Luke Parker
108504d6e2 Increase MAX_ADDRESS_LEN
In response to
https://gist.github.com/tevador/50160d160d24cfc6c52ae02eb3d17024?permalink_comment_id=4665372#gistcomment-4665372
2023-08-21 02:56:10 -04:00
Luke Parker
27cd2ee2bb cargo fmt 2023-08-21 02:38:27 -04:00
Luke Parker
dc88b29b92 Add keep-alive timeout to coordinator
The Heartbeat was meant to serve for this, yet no Heartbeats are fired when we
don't have active tributaries.

libp2p does offer an explicit KeepAlive protocol, yet it's not recommended in
prod. While this likely has the same pitfalls as LibP2p's KeepAlive protocol,
it's at least tailored to our timing.
2023-08-21 02:36:03 -04:00
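
A rough sketch (assuming tokio) of a keep-alive tailored to our own timing; the
timeout value and `send_ping` are placeholders, not the coordinator's actual
P2P code:

```rust
use std::time::{Duration, Instant};
use tokio::sync::Mutex;

const KEEP_ALIVE: Duration = Duration::from_secs(60);

// If nothing has been sent on a connection within the timeout, send a trivial
// message so the connection isn't dropped as idle.
async fn keep_alive_task(last_sent: &Mutex<Instant>, send_ping: impl Fn()) {
  loop {
    tokio::time::sleep(KEEP_ALIVE).await;
    if last_sent.lock().await.elapsed() >= KEEP_ALIVE {
      send_ping();
      *last_sent.lock().await = Instant::now();
    }
  }
}
```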
Luke Parker
45ea805620 Use an optimistic (and twice as long in the worst case) sleep for KeyGen
Possible due to Substrate having an RPC.
2023-08-21 02:31:22 -04:00
Luke Parker
1e68cff6dc Bump bitcoin-serai to 0.3.0 for publication 2023-08-21 01:26:54 -04:00
Luke Parker
906d3b9a7c Merge pull request #348 from serai-dex/current-crypto-crates
Current crypto crates
2023-08-21 01:24:16 -04:00
akildemir
e319762c69 use half-aggregation for tm messages (#346)
* dalek 4.0

* cargo update

Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.

* use half-aggregation for tm messages

* fmt

* fix pr comments

* cargo update

Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given their political agenda
they seem to try to be accomplishing with force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation

* add Rs() api to SchnorrAggregate

* Correct serai-processor-tests to dalek 4

* fmt + deny

* Slash malevolent validators (#294)

* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap an unsigned TX with a signed TX that has a nonce of 0
(leading to a faulty block).

The extra protection added here sorts signed, then concats.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant further robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Add DSTs to Tributary TX sig_hash functions

Prevents conflicts with other systems/other parts of the Tributary.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 01:22:00 -04:00
Luke Parker
f161135119 Add coins/bitcoin audit by Cypher Stack 2023-08-21 01:20:09 -04:00
Luke Parker
d2a0ff13f2 Merge branch 'bitcoin-audit' into develop 2023-08-21 01:16:50 -04:00
Luke Parker
498aa45619 Ban only versions of serde with binary blobs
Does downgrade time from 0.3.26 to 0.3.25 due to time banning > 1.0.171.
Hopefully that's also relaxed soon...
2023-08-21 00:57:22 -04:00
akildemir
39ce819876 Slash malevolent validators (#294)
* add slash tx

* ignore unsigned tx replays

* verify that provided evidence is valid

* fix clippy + fmt

* move application tx handling to another module

* partially handle the tendermint txs

* fix pr comments

* support unsigned app txs

* add slash target to the votes

* enforce provided, unsigned, signed tx ordering within a block

* bug fixes

* add unit test for tendermint txs

* bug fixes

* update tests for tendermint txs

* add tx ordering test

* tidy up tx ordering test

* cargo +nightly fmt

* Misc fixes from rebasing

* Finish resolving clippy

* Remove sha3 from tendermint-machine

* Resolve a DoS in SlashEvidence's read

Also moves Evidence from Vec<Message> to (Message, Option<Message>). That
should meet all requirements while being a bit safer.

* Make lazy_static a dev-depend for tributary

* Various small tweaks

One use of sort was inefficient, sorting unsigned || signed when unsigned was
already properly sorted. Given how the unsigned TXs were given a nonce of 0, an
unstable sort may swap an unsigned TX with a signed TX that has a nonce of 0
(leading to a faulty block).

The extra protection added here sorts signed, then concats.

* Fix Tributary tests I broke, start review on tendermint/tx.rs

* Finish reviewing everything outside tests and empty_signature

* Remove empty_signature

empty_signature led to corrupted local state histories. Unfortunately, the API
is only sane with a signature.

We now use the actual signature, which risks creating a signature over a
malicious message if we ever have an invariant producing malicious
messages. Prior, we only signed the message after the local machine confirmed
it was okay per the local view of consensus.

This is tolerated/preferred over a corrupt state history since production of
such messages is already an invariant. TODOs are added to make handling of this
theoretical invariant further robust.

* Remove async_sequential for tokio::test

There was no competition for resources forcing them to be run sequentially.

* Modify block order test to be statistically significant without multiple runs

* Clean tests

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-21 00:28:23 -04:00
Luke Parker
8973eb8ac4 fmt + deny 2023-08-20 00:14:53 -04:00
Luke Parker
34397b31b1 Correct serai-processor-tests to dalek 4 2023-08-19 16:34:27 -04:00
Luke Parker
96583da3b9 cargo update
Achieves three notable updates.

1) Resolves RUSTSEC-2022-0093 by updating libp2p-identity.
2) Removes 3 old rand crates via updating ed25519-dalek (a dependency of
libp2p-identity).
3) Sets serde_derive to 1.0.171 via updating to time 0.3.26 which pins at up to
1.0.171.

The last one is the most important. The former two are niceties.

serde_derive, since 1.0.171, ships a non-reproducible binary blob in what's a
complete compromise of supply chain security. This is done in order to reduce
compile times, yet also for the maintainer of serde (dtolnay) to leverage
serde's position as the 8th most downloaded crate to attempt to force changes
to the Rust build pipeline.

While dtolnay's contributions to Rust are respectable, being behind syn, quote,
and proc-macro2 (the top three crates by downloads), along with thiserror,
anyhow, async-trait, and more (I believe also being part of the Rust project),
they have unfortunately decided to refuse to listen to the community on this
issue (or even engage with counter-commentary). Given their political agenda
they seem to try to be accomplishing with force, I'd go as far as to call their
actions terroristic (as they're using the threat of the binary blob as
justification for cargo to ship 'proper' support for binary blobs).

This is arguably representative of dtolnay's past work on watt. watt was a wasm
interpreter to execute a pre-compiled proc macro. This would save the compile
time of proc macros, yet sandbox it so a full binary did not have to be run.

Unfortunately, watt (while decreasing compile times) fails to be a valid
solution to supply chain security (without massive ecosystem changes). It never
implemented reproducible builds for its wasm blobs, and a malicious wasm blob
could still fundamentally compromise a project. The only solution for an end
user to achieve a secure pipeline would be to locally build the project,
verifying the blob aligns, yet doing so would negate all advantages of the
blob.

dtolnay also seems to be giving up their role as a FOSS maintainer given that
serde no longer works in several environments. While FOSS maintainers are not
required to never implement breaking changes, the version number is still 1.0.
While FOSS maintainers are not required to follow semver, releasing a very
notable breaking change *without a new version number* in an ecosystem which
*does follow semver*, then refusing to acknowledge bugs as bugs with their work
does meet my personal definition of "not actively maintaining their existing
work". Maintenance would be to fix bugs, not introduce and ignore.

For now, serde > 1.0.171 has been banned. In the future, we may host a fork
without the blobs (yet with the patches). It may be necessary to ban all of
dtolnay's maintained crates, if they continue to force their agenda as such,
yet I hope this may be resolved within the next week or so.

Sources:

https://github.com/serde-rs/serde/issues/2538 - Binary blob discussion

This includes several reports of various workflows being broken.

https://github.com/serde-rs/serde/issues/2538#issuecomment-1682519944

dtolnay commenting that security should be resolved via Rust toolchain edits,
not via their own work being secure. This is why I say they're trying to
leverage serde in a political game.

https://github.com/serde-rs/serde/issues/2526 - Usage via git broken

dtolnay explicitly asks the submitting user if they'd be willing to advocate
for changes to Rust rather than actually fix the issue they created. This is
further political arm wrestling.

https://github.com/serde-rs/serde/issues/2530 - Usage via Bazel broken

https://github.com/serde-rs/serde/issues/2575 - Unverifiable binary blob

https://github.com/dtolnay/watt - dtolnay's prior work on precompilation
2023-08-19 01:50:38 -04:00
Luke Parker
34c6974311 Merge branch 'dalek-4.0' into develop 2023-08-17 02:00:36 -04:00
Luke Parker
5a4db3efad cargo update
Moves to a version of Substrate which uses curve25519-dalek 4.0 (not a rc).
Doesn't yet update the repo to curve25519-dalek 4.0 (as a branch does) due
to the official schnorrkel using a conflicting curve25519-dalek. This would
prevent installation of frost-schnorrkel without a patch.
2023-08-17 01:25:50 -04:00
Luke Parker
28399b0310 cargo update 2023-08-15 05:39:23 -04:00
Luke Parker
426e89d6fb Extend sleeps of coordinator CI tests
The CI does now get past the first check to the second one, hence the addition
of similar sleeps to the second and so on checks.
2023-08-15 05:31:06 -04:00
Luke Parker
4850376664 Try to fix coordinator CI by sleeping longer when in CI 2023-08-14 16:02:14 -04:00
Luke Parker
f366d65d4b Add RUST_BACKTRACE=1 to .github
Should help resolve the coordinator tests, which are failing in the current CI.
Presumably a timeout which is too low for the CI's slower runners.
2023-08-14 11:59:26 -04:00
akildemir
e680eabb62 Improve batch handling (#316)
* restrict batch size to ~25kb

* add batch size check to node

* rate limit batches to 1 per serai block

* add support for multiple batches for block

* fix review comments

* Misc fixes

Doesn't yet update tests/processor until data flow is inspected.

* Move the block from SignId to ProcessorMessage::BatchPreprocesses

* Misc clean up

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-08-14 11:57:38 -04:00
Luke Parker
a3441a6871 Don't have publish return the 'hash' 2023-08-14 08:18:19 -04:00
Luke Parker
9895ec0f41 Add note to processor 2023-08-14 06:54:34 -04:00
Luke Parker
666bb3e96b E2E test coordinator KeyGen 2023-08-14 06:54:17 -04:00
Luke Parker
acc19e2817 Stop attempting to call set_keys if another validator does
This prevents this function from hanging ad-infinitum.
2023-08-14 06:53:23 -04:00
Luke Parker
5e02f936e4 Perform MuSig signing of generated keys 2023-08-14 06:08:55 -04:00
Luke Parker
6b41c91dc2 Correct TendermintMachine sleep on construction
For logging purposes, I added code to handle negative time till start. I forgot
to only sleep on positive time till start.

Should fix the recent CI failure.
2023-08-13 05:57:14 -04:00
Luke Parker
8992504e69 Correct Dockerfile with typo 2023-08-13 05:11:49 -04:00
Luke Parker
e2901cab06 Revert round-advance on TendermintMachine::new if local clock is ahead of block start
It was improperly implemented, as it assumed rounds had a constant time
interval, which they do not. It also is against the spec and was meant to
absolve us of issues with poor performance when post-starting blockchains. The
new, and much more proper, workaround for the latter is a 120-second delay
between the Substrate time and the Tributary start time.
2023-08-13 04:35:46 -04:00
Luke Parker
13a8b0afc1 Add panic-handlers which exit on any panic
By default, tokio-spawned worker panics will only kill the task, not the
program. Due to our extensive use of panicking on invariants, we should ensure
the program exits.
2023-08-13 04:30:49 -04:00
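
A minimal sketch of the pattern described above: install a panic hook which,
after the default hook prints the message/backtrace, exits the whole process,
so a panic inside a tokio-spawned task can't leave the rest of the program
running past a broken invariant.

```rust
use std::panic;

fn set_exit_on_panic() {
  let default_hook = panic::take_hook();
  panic::set_hook(Box::new(move |info| {
    // Print the usual panic output first.
    default_hook(info);
    // Then take the entire process down, not just the current task.
    std::process::exit(1);
  }));
}

fn main() {
  set_exit_on_panic();
  // ... build the tokio runtime and spawn tasks; any panic now exits.
}
```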
Luke Parker
7e71450dc4 Bug fixes and log statements
Also shims next nonce code with a fine-for-now piece of code which is unviable
in production, yet should survive testnet.
2023-08-13 04:03:59 -04:00
Luke Parker
049fefb5fd Move apt calls in Docker to enable caching 2023-08-12 22:51:12 -04:00
Luke Parker
6a89cfcd08 Update to further-pruned substrate 2023-08-09 05:00:44 -04:00
Luke Parker
4571a8ad85 cargo update 2023-08-08 20:24:23 -04:00
Luke Parker
96db784b10 Increase the reproducible-runtime timeout as it was getting hit in CI 2023-08-08 18:36:31 -04:00
Luke Parker
dd523b22c2 Correct transcript minimum version requirements 2023-08-08 18:32:13 -04:00
Luke Parker
fa406c507f Update crypto/ package versions
On a branch while bitcoin-serai wraps up its audit.
2023-08-08 18:19:01 -04:00
Luke Parker
ad51c123e3 Fix Tendermint bug from prior commit 2023-08-08 16:33:09 -04:00
Luke Parker
f6f945e747 Add a LibP2P instantiation to coordinator
It's largely unoptimized, and not yet exclusive to validators, yet has basic
sanity (using message content for ID instead of sender + index).

Fixes bugs as found. Notably, we used a time in milliseconds where the
Tributary expected seconds.

Also has Tributary::new jump to the presumed round number. This reduces slashes
when starting new chains (whose times will be before the current time) and was
the only way I was able to observe successful confirmations given current
surrounding infrastructure.
2023-08-08 15:12:47 -04:00
Luke Parker
0dd8aed134 Expand cluster-sm/local testnet to 4 validators for BFT where f=1 2023-08-06 13:42:16 -04:00
Luke Parker
cee788eac3 Test the Coordinator emits KeyGen
Mainly just a test that the full stack is properly set up and we've hit basic
functioning for further testing.
2023-08-06 12:38:44 -04:00
Luke Parker
bebe2fae0e cargo update due to substrate changes, again 2023-08-06 10:11:44 -04:00
Luke Parker
adb48cd737 Use 1.71.1 for reproducible runtime
1.71.1 fixes a CVE which wasn't at risk here yet still should be resolved.
2023-08-06 10:11:01 -04:00
Luke Parker
363a88c4ec Again update after latest series of substrate prunes 2023-08-06 08:04:56 -04:00
Luke Parker
5ba1dd2524 cargo update 2023-08-06 02:36:16 -04:00
Luke Parker
784c7b47a4 Add Immunefi to README 2023-08-04 12:28:15 -04:00
Luke Parker
376b36974f Stub binaries' code when features binaries is not set
Allows running `cargo build` in monero-serai and message-queue without
erroring, since it'd automatically try to build the binaries which require
additional features.

While we could make those features not optional, it'd increase time to build
and disk space required, which is why the features exist for monero-serai and
message-queue in the first place (since both are frequently used as libs).
2023-08-02 14:43:49 -04:00
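
The pattern, sketched with an assumed feature/module layout (the repo's exact
names may differ): the real entry point only exists behind the `binaries`
feature, and a stub `main` takes its place otherwise, so a plain `cargo build`
of the library doesn't fail on feature-gated dependencies.

```rust
#[cfg(feature = "binaries")]
mod binaries {
  // Feature-gated dependencies are only imported here.
  pub(crate) fn main() {
    // Real entry point.
  }
}

#[cfg(feature = "binaries")]
fn main() {
  binaries::main()
}

// Stub used when the feature isn't enabled, keeping `cargo build` working.
#[cfg(not(feature = "binaries"))]
fn main() {}
```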
Luke Parker
38ad1d4bc4 Add msrv definitions to common and crypto
This will effectively add msrv protections to the entire project as almost
everything grabs from these.

Doesn't add msrv to coins as coins/bitcoin is still frozen.

Doesn't add msrv to services since cargo msrv doesn't play nice with anything
importing the runtime.
2023-08-02 14:17:57 -04:00
Luke Parker
aab8a417db Have the Coordinator scan the Substrate genesis block
Also adds a workflow for running tests/coordinator.
2023-08-02 12:18:50 -04:00
Luke Parker
d5c787fea2 Add initial coordinator e2e tests 2023-08-01 19:00:48 -04:00
Luke Parker
e3a70ef0dc tests/processor clippy 2023-08-01 05:33:08 -04:00
Luke Parker
044b299cda cargo +nightly fmt (again) 2023-08-01 02:51:58 -04:00
Luke Parker
53d86e2a29 Latest clippy 2023-08-01 02:49:31 -04:00
Luke Parker
c338b92067 Update to latest nightly Rust 2023-08-01 00:48:11 -04:00
Luke Parker
3c38a0ec11 cargo +nightly fmt 2023-08-01 00:47:36 -04:00
Luke Parker
88f88b574c Sleep for up to a minute when creating a Coordinator if the network RPC has yet to boot
We've had a pair of CI failures due to calling add_block and being unable to
form an RPC connection. This attempts to fix that.
2023-07-31 00:13:58 -04:00
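
A sketch of the retry (assuming tokio); `rpc_is_up` stands in for whatever
health check the test harness actually performs:

```rust
use core::time::Duration;

// Poll the network RPC for up to a minute before giving up, instead of failing
// the first add_block call because the node hasn't finished booting.
async fn wait_for_rpc(rpc_is_up: impl Fn() -> bool) {
  for _ in 0 .. 60 {
    if rpc_is_up() {
      return;
    }
    tokio::time::sleep(Duration::from_secs(1)).await;
  }
  panic!("network RPC didn't come up within a minute");
}
```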
Luke Parker
9f143a9742 Replace "coin" with "network"
The Processor's coins folder referred to the networks it could process, as did
its Coin trait. This, and other similar cases throughout the codebase, have now
been corrected.

Also corrects dated documentation for how a key pair is confirmed under the
validator-sets pallet.
2023-07-30 16:11:30 -04:00
Luke Parker
2815046b21 Save the scheduler to disk
This is a horrible impl which does a full ser of everything on every change.
It's just the minimal changes to resolve this TODO and enable testnet deployment.
2023-07-30 14:16:33 -04:00
Luke Parker
d8033504c8 Restore ports over expose so docker compose can be used for tests
Reduces security when considering a production deployment.
2023-07-30 13:37:10 -04:00
Luke Parker
48ad5fe20b Add icon as a svg 2023-07-30 12:51:17 -04:00
Luke Parker
4c801df4f2 Don't run apps in Docker as root 2023-07-30 08:04:36 -04:00
Luke Parker
9b79c4dc0c Test Eventuality completion via CoordinatorMessage::Completed 2023-07-30 07:00:54 -04:00
Luke Parker
e010d66c5d Only emit SignTransaction once
Due to the ordered message-queue, there's no benefit to multiple emissions as
there's no risk a completion will be missed. If it has yet to be read, sending
another which would only be read after isn't helpful.

Simplifies code a decent bit.
2023-07-30 06:44:55 -04:00
Luke Parker
3d91fd88a3 Drastically increase placeholder fee for Monero
Needed for Docker tests.
2023-07-29 20:29:45 -04:00
Luke Parker
f988c43f8d Extend send_test with TX signing
Monero fails with fee_too_low, which this commit is meant to document.
2023-07-29 08:34:08 -04:00
Luke Parker
857e3ea72b cargo update 2023-07-29 07:11:53 -04:00
Luke Parker
8b14bb54bb Correct the fee amortization algorithm for an edge case
This is technically over-aggressive, as a dropped output will reduce the fee,
yet this edge case is so minor that making the flow not over-aggressive (over
a few fractions of a cent) is by no means worth it.

Fixes the crash causable by the WIP send_test.
2023-07-29 07:05:55 -04:00
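
A hypothetical sketch of fee amortization with the accepted over-aggressive
edge case: the fee is split across the outputs up front, and any output falling
below dust is dropped without recalculating the now slightly lower fee for the
rest. Constants and shapes are illustrative, not the processor's actual code.

```rust
const DUST: u64 = 10_000;

fn amortize_fee(mut outputs: Vec<u64>, fee: u64) -> Vec<u64> {
  let per_output = fee / (outputs.len() as u64);
  let mut remainder = fee % (outputs.len() as u64);
  for output in outputs.iter_mut() {
    let mut share = per_output;
    if remainder != 0 {
      share += 1;
      remainder -= 1;
    }
    *output = output.saturating_sub(share);
  }
  // Dropping an output here slightly reduces the fee actually needed, so the
  // remaining outputs overpay by fractions of a cent. Accepted as above.
  outputs.retain(|output| *output >= DUST);
  outputs
}

fn main() {
  // The 9_000 output drops below dust and is removed after paying its share.
  assert_eq!(amortize_fee(vec![50_000, 9_000, 120_000], 3_000), vec![49_000, 119_000]);
}
```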
Luke Parker
f78332453b Start work on a send_test
Stops work where it does due to the processor panicking for Monero, yet not
Bitcoin, under what's present.

Cleans up processor tests to consolidate shared code.
2023-07-29 04:26:24 -04:00
Luke Parker
22da7aedde Implement a pretty Debug for various objects 2023-07-29 04:12:10 -04:00
Luke Parker
2641b83b3e Use resolver 2 2023-07-27 23:42:24 -04:00
Luke Parker
4980e6b704 cargo update 2023-07-27 23:38:04 -04:00
Luke Parker
c091b86919 Correct reproducible-runtime deny definition 2023-07-27 22:25:05 -04:00
Luke Parker
a8c7bb96c8 Add a crate to test the runtime can be reproducibly built 2023-07-27 21:42:26 -04:00
Luke Parker
09a95c9bd2 Rename deploy to orchestration
Also updates README to note prior unnoted folders.
2023-07-27 03:19:35 -04:00
Luke Parker
101da0a641 Use a BatchVerifier in reserialize_chain 2023-07-27 03:05:39 -04:00
Luke Parker
6d5851a9ee Use lz4 instead of zstd for the DB
zstd was recommended for the base layer only, due to its CPU requirements. That
was a misreading on my behalf.

lz4 gets ~5% better compression than snappy with ~30% faster performance. zstd
does ~25% better than lz4 yet at ~30% of the performance.
2023-07-26 14:05:10 -04:00
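
A minimal sketch assuming the rust-rocksdb crate (with its lz4 feature
enabled); the path and remaining options are illustrative:

```rust
use rocksdb::{DB, DBCompressionType, Options};

fn open_db(path: &str) -> DB {
  let mut opts = Options::default();
  opts.create_if_missing(true);
  // lz4 over snappy for better compression at better speed; zstd compresses
  // further yet at a substantial CPU cost, per the numbers above.
  opts.set_compression_type(DBCompressionType::Lz4);
  DB::open(&opts, path).unwrap()
}

fn main() {
  let _db = open_db("./serai-db-example");
}
```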
Luke Parker
64c309f8db Test Batches with Instructions 2023-07-26 14:02:17 -04:00
Luke Parker
e00aa3031c Have Shorthand::Raw contain RefundableInInstruction, not an encoded RII 2023-07-26 14:00:30 -04:00
Luke Parker
f8afb040dc Remove ApplicationCall
We can simply inline `Dex` into the InInstruction enum.
2023-07-26 12:45:51 -04:00
Luke Parker
7823ece4fe Test multiple batches, re-attempts, randomized selected signers 2023-07-26 05:55:47 -04:00
Luke Parker
b205391b28 Add libssl-dev to message-queue build packages 2023-07-26 04:33:36 -04:00
Luke Parker
32a937ddb9 Move the Monero Dockerfile to alpine 2023-07-26 04:22:41 -04:00
Luke Parker
bdbeedc723 Add pkg-config to Dockerfiles 2023-07-26 03:57:13 -04:00
Steven Chang
f306618e84 docs/protocol/Staking.md: delete 2023-07-26 03:46:55 -04:00
Luke Parker
89865b549c Update Serai node Dockerfile 2023-07-26 03:45:30 -04:00
Luke Parker
39eae2795f Update Dockerfiles to bookworm, successfully
Removes use of the Parity CI image. Always uses slim variants.
2023-07-26 03:06:01 -04:00
Luke Parker
0eb56406a4 Further dependency minimization for build times 2023-07-26 03:03:44 -04:00
Luke Parker
afb385fba4 Use spin's Once for OnceLock 2023-07-26 02:59:24 -04:00
Luke Parker
821f5d8de4 Restore create_if_missing to RocksDB code 2023-07-25 23:00:10 -04:00
Luke Parker
3862731a12 Minimize features pulled in to try and reduce build times 2023-07-25 22:29:39 -04:00
Luke Parker
42eb674d1a Print docker build 2023-07-25 22:18:20 -04:00
Luke Parker
1af5f1bcdc Revert from bookworm to bullseye
Parity's CI uses bullseye and bullseye has an incompatible, dynamically linked
openssl.
2023-07-25 22:15:33 -04:00
Luke Parker
32435d8a4c Consolidate RocksDB code
Moves explicitly to zstd. RocksDB recommends zstd, or at least lz4 over snappy,
and this minimizes which dependencies we pull in.
2023-07-25 21:43:27 -04:00
Luke Parker
49ce792b91 Move from bullseye-slim to bookworm-slim
Also moves from bullseye in the processor to bullseye-slim. This requires
adding back the apt intall, yet the tests/docker cache should handle it.

Minimizes processor image surface, hopefully also shrinks the CI down a bit.
2023-07-25 21:10:28 -04:00
Luke Parker
4949793c3f Clear docker cache after building in CI
We're at the CI storage limits, so hopefully this helps.
2023-07-25 21:09:40 -04:00
Luke Parker
61d46dccd4 Rename scan_test to batch_test 2023-07-25 18:10:05 -04:00
Luke Parker
88a1fce15c Test the processor's batch signing
Updates message-queue to try recv every second, not every 5.
2023-07-25 18:09:23 -04:00
Luke Parker
a2493cfafc Sub-CoordinatorMessage -> CoordinatorMessage via From/Into 2023-07-25 17:33:05 -04:00
Luke Parker
e3de64d5ff Check the processors picked up the received input 2023-07-24 22:11:58 -04:00
Luke Parker
ecd0457d5b clippy fixes 2023-07-24 21:49:51 -04:00
Luke Parker
7990ee689a Send to a processor from a test
Mainly here to build out the infra. Does not automate checking
recipience/batch creation yet.
2023-07-24 20:06:05 -04:00
Luke Parker
f05e909d0e Fix which key is used to index substrate_signers on ScannerEvent::Block
First notably bug found by docker tests.
2023-07-24 19:38:31 -04:00
Luke Parker
5e565fa3ef Correct when the Processor starts using the first key
It waited for CONFIRMATIONS + 1 confirmations, instead of CONFIRMATIONS
confirmations.

Also adds a lib interface to access the coin traits and its constants.
2023-07-24 15:36:35 -04:00
Luke Parker
6df1b46313 Don't use dbg for printing stdout/stderr
They are byte buffers, not strings. A pretty print has been added accordingly.
2023-07-24 15:35:43 -04:00
Luke Parker
24dba66bad cargo update 2023-07-24 04:54:13 -04:00
Luke Parker
fd585d496c Resolve #321 2023-07-24 04:53:59 -04:00
Luke Parker
5703591eb2 Extend criteria to run Docker tests
The unit tests should be sufficient for these cases, making this extraneous, yet
better to be complete than at risk.
2023-07-24 02:56:58 -04:00
Luke Parker
9ac3b203c8 Fix panic causable by remote node 2023-07-24 02:53:54 -04:00
Luke Parker
23e1c9769c dalek 4.0 2023-07-23 14:32:14 -04:00
Luke Parker
8e6e05ae2d Set better logging defaults for Docker tests 2023-07-23 10:13:11 -04:00
Luke Parker
713660c79c Make key_gen a gadget, add ConfirmKeyPair 2023-07-22 05:10:40 -04:00
Luke Parker
cb8c8031b0 Correct retrieval of LastTagTime when the Docker image doesn't already exist 2023-07-22 04:37:45 -04:00
Luke Parker
d07447fe97 Implement an (almost) full Key Gen test for processor's Docker tests
It doesn't confirm the key pair yet.

Adds the infra needed to test processors against each other.
2023-07-22 04:06:44 -04:00
Luke Parker
c26beae0f9 Only rebuild Docker images when their source has been modified 2023-07-22 02:40:14 -04:00
Luke Parker
818215b570 Correct Dockerfile caching 2023-07-22 01:12:39 -04:00
Luke Parker
79943c3a6c MessageQueue::new 2023-07-22 01:12:15 -04:00
Luke Parker
076a8e4d62 Add the requirement for a debug Serai node to running tests documentation 2023-07-21 15:11:22 -04:00
Luke Parker
ffd1457927 Correct message-queue Dockerfile
It worked with my cache yet not without cache.
2023-07-21 15:08:37 -04:00
Luke Parker
523a055b74 Add processor Docker tests
Adds tests/docker for code common to Docker-based tests.
2023-07-21 14:08:42 -04:00
Luke Parker
624fb2781d Update how RPCs are handled
The processor now takes three vars and joins them itself. message-queue uses a
single argument, with defaults, as it's a service we control.
2023-07-21 14:01:42 -04:00
Luke Parker
641077a089 Use non-slim variants to remove needing to apt additional packages
Prevents needing to rebuild all the time via allowing cache hits.
2023-07-21 13:58:54 -04:00
Luke Parker
298d1fd3ba Slim coins and processor Dockerfiles 2023-07-21 04:27:14 -04:00
Luke Parker
92c3403698 Slim down the message-queue Dockerfile
While I tried Alpine, RocksDB won't build unless it's statically linked. Then
async-trait (and proc-macros in general) won't compile when statically linked.
2023-07-21 04:07:32 -04:00
Luke Parker
37af8b51b3 Fallback to pgrep if pidof is unavailable 2023-07-21 03:37:48 -04:00
Luke Parker
900298b94b CI tweaks 2023-07-20 19:34:10 -04:00
Luke Parker
9effd5ccdc Add a Docker-based test for the message-queue service 2023-07-20 18:53:11 -04:00
Luke Parker
ceeb57470f Print when ConnectionErrors occur in reserialize_chain 2023-07-20 18:53:11 -04:00
Steven Chang
69454fa9bb .rustfmt.toml: add edition 2023-07-20 15:28:03 -04:00
Luke Parker
5121ca7519 Handle the minimum relay fee 2023-07-20 01:20:28 -04:00
Luke Parker
1eb3b364f4 Correct dust constant 2023-07-20 00:29:31 -04:00
Luke Parker
f66fe3c1cb 3.10 Remove use of Network::Bitcoin
All uses were safe due to addresses being converted to script_pubkeys which
don't embed their network. The only risk of there being an issue is if a
future address spec did embed the net ID into the script_pubkey and that spec
was moved to.

This resolves the audit note and does offer that tightening.
2023-07-20 00:27:56 -04:00
Luke Parker
6f9d02fdf8 3.11 Better document API expectations 2023-07-19 23:51:21 -04:00
Luke Parker
28613400b8 message-queue and processor Dockerfiles 2023-07-19 20:13:16 -04:00
Luke Parker
b6579d5a2a Update Dockerfiles
Cleans files, updates Bitcoin and Monero versions, fixes Monero immediately
exiting.
2023-07-19 19:22:49 -04:00
Luke Parker
c2f32e7882 Update to FROST v14 2023-07-19 15:48:34 -04:00
Justin Berman
228e36a12d monero-serai: fee calculation parity with Monero's wallet2 (#301)
* monero-serai: fee calculation parity with Monero's wallet2

* Minor lint

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-19 15:06:05 -04:00
Luke Parker
38bcedbf11 Cargo.lock for prior commit 2023-07-19 00:58:46 -04:00
Luke Parker
98f9fc2c2f Pin to serde 1.0.167 due to https://github.com/monero-rs/monero-rs/issues/162 2023-07-19 00:53:03 -04:00
Luke Parker
d5cfb3fb25 Corrections from renaming Index to Nonce 2023-07-18 23:52:37 -04:00
Luke Parker
b1dbe1f50d hashbrown 0.14 in validator-sets 2023-07-18 23:25:15 -04:00
Luke Parker
1b57d655ed Update to subxt 0.29 2023-07-18 23:01:51 -04:00
Luke Parker
64402914ba Update substrate 2023-07-18 22:30:55 -04:00
Luke Parker
a7c9c1ef55 Integrate coordinator with MessageQueue and RocksDB
Also resolves a couple TODOs.
2023-07-18 01:53:51 -04:00
Luke Parker
a05961974a Lint a couple TODOs in the processor 2023-07-17 18:11:21 -04:00
Luke Parker
acc9495429 Use MessageQueue instead of MemCoordinator in processor
Also has it use RocksDB.
2023-07-17 18:02:29 -04:00
Luke Parker
344ac9cbfc Add ack signatures
Also modifies message signatures to be binding to from, not just from's key.
2023-07-17 17:40:34 -04:00
Luke Parker
6ccac2d0ab Add a message-queue connection to processor
Still needs love, yet should get us closer to starting testing.
2023-07-17 15:49:17 -04:00
Luke Parker
56f7037084 Correct get_o_indexes to work for 0-output TXs 2023-07-17 12:57:17 -04:00
Luke Parker
a0f8214d48 Resolve clippy 2023-07-17 10:53:47 -04:00
Luke Parker
5f93140ba5 Have reserialize_chain automatically retry on ConnectionError
Fixes expectations of formatting by expect as well.
2023-07-17 03:14:49 -04:00
Luke Parker
712f11d879 Correct the message-queue's handling of var 2023-07-17 02:01:31 -04:00
Luke Parker
808a633e4d Split CI to reduce tests run
common/ is now only run when common is edited. crypto/ when common/ or crypto/.
coins/ when common/ or crypto/ or coins/. The rest of the tests are run
whenever any package is edited (as they're all inter-connected).
2023-07-17 01:06:56 -04:00
Luke Parker
0a367bfbda Add common crate to access env variables
In the future, we should use a proper secret store (not just env variables).
This lets us update one block of code and not n in the future.
2023-07-17 00:53:05 -04:00
Luke Parker
845c2842b5 message-queue RocksDB + fix listening 2023-07-17 00:22:55 -04:00
Luke Parker
8543487db2 Document message-queue RPC methods 2023-07-16 20:53:58 -04:00
Luke Parker
a9072e6b1b Remove signature from get_next_message
Due to signature replaying, it's very annoying to provide meaningful data
access privacy. None of these messages should be private/have sensitive data
anyways though.
2023-07-16 20:38:13 -04:00
Luke Parker
b0c28a1cf0 Move message_queue over to deduplication via intents
Due to each service having multiple distinct clocks, we can't expect a stable
ordering except the ordering an intact message-queue provides. The messages
emitted should be consistent however, solely with unknown order, which is why
we can craft intents based on their contents (already implemented by
processor-messages).
2023-07-16 20:34:35 -04:00
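A minimal sketch of the intent scheme described above, assuming a Blake2s-256 intent and an in-memory set standing in for the real database (the names here are illustrative, not the message-queue's actual types): the deduplication key is derived solely from the message's contents, so every service computes the same ID regardless of its local clock.

  use std::collections::HashSet;
  use blake2::{Blake2s256, Digest};

  struct Queue {
    seen: HashSet<[u8; 32]>,
    messages: Vec<Vec<u8>>,
  }

  impl Queue {
    // The intent is derived solely from the message's contents, so it's stable
    // across services with differing clocks/orderings.
    fn intent(msg: &[u8]) -> [u8; 32] {
      Blake2s256::digest(msg).into()
    }

    // Only queue the message if this intent hasn't been seen before.
    fn queue(&mut self, msg: Vec<u8>) {
      if self.seen.insert(Self::intent(&msg)) {
        self.messages.push(msg);
      }
    }
  }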
akildemir
23b9d57305 add polyseed support (#257)
* add polyseed support

* fix pr comments

* fix tests

* Embed the mempool into the Blockchain

* Plan scheduled payments whenever outputs are received

The scheduler prior waited for the next series of payments to be added.

* Replace Tendermint step with sync_block

Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.

* Finish binding Tendermint into Tributary and define a Tributary master object

* Add correction the last commit missed

* Add DoS limits to tributary and require provided transactions be ordered

* Fix the scheduler from dropping UTXOs when there weren't any payments

* Documentation and cargo update

* Add a dedicated db crate with a basic DB trait

It's needed by the processor and tributary (coordinator).

* Add a DB to Tributary

Adds support for reloading most of the blockchain.

* Reloaded provided transactions from the disk

Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.

* must_use annotations on DbTxn

* Support reloading the mempool from disk

* Add a NewSet event to validator-sets

Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.

* Add basic getters to tributary

* cargo update

* Update to the latest subxt

Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.

* Make unsigned private due to unsafe calling potential

* Start defining the coordinator

* Merge AckBlock with Burns

Offers greater efficiency while reducing concerns re: atomicity.

* Correct processor flow to have the coordinator decide signing set/re-attempts

The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.

* cargo +nightly fmt

* cargo update

Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.

* Support extracting timestamps from blocks

* Make progress on handling NewSet events

Further bones out the coordinator.

* Resolve #245

* Have InInstructions track the latest block for a network in storage

* Fill out code for the rest of the Substrate events

* Clean up the Substrate block processing code

* Rename transaction file to tributary, add function for genesis

* Add a processor API to the coordinator

* Add extensive commentary on mutable to the processor's main file

Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.

* Move ConfirmKeyPair from key_gen to substrate

Clarifies the emitter and accordingly why its mutations are justified.

* Remove BatchSigned

SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.

* Add note to processor_messages

* Use a single txn for an entire coordinator message

Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.

* cargo update to remove usage of yanked crate

* Clarify safety of Scanner::block_number and KeyGen::keys

* Tweak ConfirmKeyPair to alleviate database requirements of coordinator

* Use an enum for Coin/NetworkId

It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.

* Code a method to determine the activation block before any block has consensus

[0; 32] is a magic for no block has been set yet due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.

* automate whitespace & trimming test cases

* Save keys by their tweaked group_key

Keys are referred to by their tweaked versions. If a tweak was needed, keys
would fail to confirm.

* Use crypto-bigint's reduction in ed448

Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.

* Move the entirety of ed448 to Residue, offering a further 2-4x speedup

* Resolve #68

Notably speeds up monero-serai's build and CLSAG performance.

* Make MainDB into SubstrateDB

* Initial Tributary handling

* Add additional checks to key_gen/sign

There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.

* Apply DKG TX handling code to all sign TXs

The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.

* Split FinalizedBlock into ExternalBlock and SeraiBlock

Also re-arranges their orders.

* Add support for multiple orderings in Provided

Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.

* Slash on unrecognized ID

* ExternalBlock handler

* Add a SubstrateBlockAck message to the processor

When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.

* Route the SubstrateBlock message, which is the last Tributary transaction type

* Add recent bloat checks added to signer to substrate_signer as well

* Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators

transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.

* Add a test to the coordinator for running a Tributary

Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.

* Add a test for Tributary

Further fleshes out the Tributary testing code.

* Test handling of DKG commitments transactions

* Add Transaction::sign.

While I don't love the introduction of empty_signed, it's practically fine.

* Tributary test wait_for_tx_inclusion function

* Additionally test DKGShares

* Handle adding new Tributaries

Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.

* Reload Tributaries

add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.

* Start handling P2P messages

This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.

* Clarify Arc RwLocks and sleeps in coordinator

* Send a heartbeat message when a Tributary falls behind

* cargo fmt

* cargo update

* Move json word lists to rs

Allows building the seed code without serde_json.

* Break coordinator main into multiple functions

Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.

* Remove reliance on a blockchain read lock from block/commit

* Implement Tributary syncing

Also adds a forwards-lookup to the Tributary blockchain.

* Don't return from sync_block until the Tendermint machine returns if it's valid or not

We had a race condition where we'd be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.

* Test handle_p2p and Tributary syncing

Includes bug fixes.

* Tweak tests workflow

* Add a TributaryReader which doesn't require a borrow to operate

Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary on the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.

* Document panic in FROST

* Document a pair of panics requiring 256 GB of RAM/4 GB of a context

* Add a UID function to messages

When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.

* Initial code to handle messages from processors

* Document the processor/tributary/coordinator/serai flow

* Have Coordinator MainDb take a mutable borrow

* Update to substrate polkadot-v0.9.42

* Correct error message in ff-group-tests

* Update to May's nightly

Doesn't use the PR due to the needed changes.

* Support arbitrary RPC providers in monero-serai

Sets a clean path for no-std premised RPCs (buffers to an external RPC impl)/
Tor-based RPCs/client-side load balancing/...

* Correct processor's handling of the new Monero RPC code

* Correct Serai Dockerfile

* Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged

Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.

* Rename uid to intent

* Use U448 for Ed448 instead of U512

* Spawn a new async task for each block message

This probably should be done with n-long lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.

* Move the coordinator to a n-processor design

* Ensure Tributary commits are minimal

* Properly get genesis for a Processor message

* Create a vote transaction upon GeneratedKeyPair

* Remove TODO about code de-duplication

It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.

* Have coordinator publish batches to Substrate

* Implement MuSig key aggregation into DKG

Isn't spec compliant due to the lack of a spec to be compliant to.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.

* Correct 2/3rds definitions throughout the codebase

The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67.

* cargo update

Resolves a yanked crate and removes some duplicated dependencies.

* Add a dedicated function to get a MuSig key

* Do the minimal amount of work for dkg to compile under no-std

The Substrate runtime requires access to the MuSig key aggregation function.

\#279 related.

* Use a MuSig signature to publish validator set key pairs to Serai

The processor/coordinator flow still has to be rewritten.

* Correct various no_std definitions

* Add a context to MuSig key aggregation

* Use proper messages for ValidatorSets/InInstructions pallet

Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.

* Remove signed Substrate TXs from Coordinator

* Only scan v2 Monero TXs

* Fix for prior commit

* Ensure canonical points in the cross-group DLEq proof

* Fix incorrect sig_hash generation

sig_hash was used as a challenge. challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.

* cargo update

Resolves an openssl advisory and nets ~-8 crates.

* Build no-std tests with RISC-V 32 IMAC

Turns out wasm still has std, making it suboptimal to use here.

* Pin setup-protoc to v2.0.0

* Update to substrate polkadot-v0.9.43

* fix tributary sync test

* Slight terminology correction in sync test

Also correct a mistake from merging the most recent polkadot version.

* Update nightly

* Replace lazy_static with OnceLock inside monero-serai

lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.

* no-std support for monero-serai (#311)

* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt

* Represent RCT amounts with None, not 0.

Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.

* Add a message queue

This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written to never fail.

Primarily needs testing and a proper ID.

* cargo update

Resolves https://github.com/serai-dex/serai/security/dependabot/29

* Correct deny.toml with inclusion of message-queue

* Update nightly

* std-shims: fix `Read` for &[u8]

* Use serai- prefixes on Serai-specific packages

Fixes deny.toml, also runs a minor cargo update shrinking the tree.

* Update monero-tests workflow to new name for the processor

* Correct depends for processor-messages

* Disable Rust caching

We hit the cache limit after just one or two builds, making it infeasible.

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

* Monero: support for legacy transactions (#308)

* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>

* Fix the known issue with the DSA

I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).

* Add a bin to download a chain, over RPC, reserializing and hashing every item

Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.

* Correct how Monero integration tests are run

* Support multiple RPCs in the reserialize_chain bin

* Don't call get_height every block

* Modify get_transactions to split requests as to not hit the restricted RPC limits

* Meaningful changes from aggressive-clippy

I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.

* Extend reserialize_chain with CLSAG/BP(+) verification

* Remove spammy println from reserialize_chain

* Update reserialize_chain for v1 and migration TXs

Also always marks 0-amount inputs as RCT due to impossibility of non-RCT
0-amount outputs.

* Only deserialize RctSignatures where there's at least one input

This is only enforced by the Monero protocol due to a single check the mixRing
isn't empty in get_pre_mlsag_hash. The value in ensuring there's at least one
input is to ensure the safety of our rct_type functions, which determines the
RctType based off structural analysis (specifically, input data if
MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.

* docs/Getting Started.md: cargo build --release --all-features

* Fix the known instance of #295

* Bind RocksDB into serai-db

* Split up tests in CI to avoid node storage limits

* Corrections to prior commit

* Again

I called git commit --amend without calling git add . again :(

* Update the flow for completed signing processes

Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.

* Clean Polyseed code

* Final tweaks

* Correct no-std builds for Polyseed

* Again correct no-std

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
Co-authored-by: GitHub Actions <unknown>
Co-authored-by: Boog900 <54e72d8a-345f-4599-bd90-c6b9bc7d0ec5@aleeas.com>
Co-authored-by: Boog900 <108027008+Boog900@users.noreply.github.com>
Co-authored-by: Steven Chang <stevenchang5000@gmail.com>
2023-07-16 07:25:17 -04:00
Luke Parker
807ec30762 Update the flow for completed signing processes
Now, an on-chain transaction exists. This resolves some ambiguities and
provides greater coordination.
2023-07-14 14:05:12 -04:00
Luke Parker
5424886d63 Again
I called git commit --amend without calling git add . again :(
2023-07-14 13:13:38 -04:00
Luke Parker
2bebe0755d Corrections to prior commit 2023-07-14 13:11:01 -04:00
Luke Parker
f0ce6e6388 Split up tests in CI to avoid node storage limits 2023-07-14 13:02:58 -04:00
Luke Parker
62504b2622 Bind RocksDB into serai-db 2023-07-13 19:09:11 -04:00
Luke Parker
c9bb284570 Fix the known instance of #295 2023-07-13 14:02:57 -04:00
Steven Chang
6a4bccba74 docs/Getting Started.md: cargo build --release --all-features 2023-07-13 13:04:51 -04:00
Luke Parker
df67b7d94c 3.13 Better document the offset mapping 2023-07-10 15:02:34 -04:00
Luke Parker
677b9b681f 3.9/3.10. 3.9: Remove cast which fails on a several GB malicious TX
3.10 has its impossibility documented. A malicious RPC cannot affect this code.
2023-07-10 14:44:18 -04:00
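The general pattern behind 3.9, as a hedged sketch rather than the repository's actual code: replace the lossy `as` cast with a checked conversion so an attacker-controlled size surfaces an error instead of wrapping or truncating.

  // Illustrative only: a multi-GB, attacker-supplied length should error out
  // rather than silently truncating on smaller-pointer targets.
  fn checked_len(len: u64) -> Option<usize> {
    usize::try_from(len).ok() // instead of `len as usize`
  }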
Luke Parker
fa1b569b78 3.8 Document termination of unbounded loop 2023-07-10 14:34:32 -04:00
Luke Parker
d75115ce13 3.7 Replace unwraps with expects
Doesn't replace unwraps on integer conversions.
2023-07-10 14:02:59 -04:00
Luke Parker
3d00d405a3 3.5 (handled with 3.1) 2023-07-10 13:35:47 -04:00
Luke Parker
3480fc5e16 3.4 2023-07-10 13:33:08 -04:00
Luke Parker
7fa5d291b8 Implement a more robust validity check on connection creation 2023-07-09 15:49:35 -04:00
Luke Parker
b54548b13a Only deserialize RctSignatures where there's at least one input
This is only enforced by the Monero protocol due to a single check the mixRing
isn't empty in get_pre_mlsag_hash. The value in ensuring there's at least one
input is to ensure the safety of our rct_type functions, which determines the
RctType based off structural analysis (specifically, input data if
MlsagBorromean).

rct_type was technically safe without this. A 0-input transaction would be
mis-classified as RctFull/MlsagAggregate, which would then make the
RctSignatures invalid for being RctFull (requiring exactly one input) yet not
having inputs, meaning an invalid RctSignatures would be mis-classified yet
still invalid.

This just removes the risk of mis-classification in the first place, tightening
the library's safety.
2023-07-09 00:44:23 -04:00
Luke Parker
c878d38c60 3.1 2023-07-08 21:48:18 -04:00
Luke Parker
5d9067b84d Update reserialize_chain for v1 and migration TXs
Also always marks 0-amount inputs as RCT due to impossibility of non-RCT
0-amount outputs.
2023-07-08 21:09:22 -04:00
Luke Parker
13f48a406e Remove spammy println from reserialize_chain 2023-07-08 20:30:45 -04:00
Luke Parker
35fcd11096 Extend reserialize_chain with CLSAG/BP(+) verification 2023-07-08 20:29:55 -04:00
Luke Parker
93b1656f86 Meaningful changes from aggressive-clippy
I do want to enable a few specific lints, yet aggressive-clippy as a whole
isn't worthwhile.
2023-07-08 11:29:07 -04:00
Luke Parker
3c6cc42c23 Modify get_transactions to split requests as to not hit the restricted RPC limits 2023-07-05 22:12:34 -04:00
Luke Parker
93fe8a52dd Don't call get_height every block 2023-07-05 20:11:31 -04:00
Luke Parker
249f7b904f Support multiple RPCs in the reserialize_chain bin 2023-07-05 19:50:35 -04:00
Luke Parker
8ce8657d34 Correct how Monero integration tests are run 2023-07-05 19:11:52 -04:00
Luke Parker
e5a196504c Add a bin to download a chain, over RPC, reserializing and hashing every item
Parallelized. Doesn't check the deserialization is correct. Does use distinct,
persistent HTTP clients.
2023-07-05 18:59:22 -04:00
Luke Parker
b195db0929 Fix the known issue with the DSA
I wrote it to only select TXs with a timelock, not only TXs which are unlocked.
This most likely explains why it so heavily selected coinbases.

Also moves an InternalError which would've never been hit on mainnet, yet
technically isn't an invariant, to only exist when cfg(test).
2023-07-04 18:11:57 -04:00
Boog900
89eef95fb3 Monero: support for legacy transactions (#308)
* add mlsag

* fix last commit

* fix miner v1 txs

* fix non-miner v1 txs

* add borromean + fix mlsag

* add block hash calculations

* fix for the jokester that added unreduced scalars

to the borromean signature of
2368d846e671bf79a1f84c6d3af9f0bfe296f043f50cf17ae5e485384a53707b

* Add Borromean range proof verifying functionality

* Add MLSAG verifying functionality

* fmt & clippy :)

* update MLSAG, ss2_elements will always be 2

* Add MgSig proving

* Tidy block.rs

* Tidy Borromean, fix bugs in last commit, replace todo! with unreachable!

* Mark legacy EcdhInfo amount decryption as experimental

* Correct comments

* Write a new impl of the merkle algorithm

This one tries to be understandable.

* Only pull in things only needed for experimental when experimental

* Stop caching the Monero block hash now in processor that we have Block::hash

* Corrections for recent processor commit

* Use a clearer algorithm for the merkle

Should also be more efficient due to not shifting as often.

* Tidy Mlsag

* Remove verify_rct_* from Mlsag

Both methods were ports from Monero, overtly specific without clear
documentation. They need to be added back in, with documentation, or included
in a node which provides the necessary further context for them to be naturally
understandable.

* Move mlsag/mod.rs to mlsag.rs

This should only be a folder if it has multiple files.

* Replace EcdhInfo terminology

The ECDH encrypted the amount, yet this struct contained the encrypted amount,
not some ECDH.

Also corrects the types on the original EcdhInfo struct.

* Correct handling of commitment masks when scanning

* Route read_array through read_raw_vec

* Misc lint

* Make a proper RctType enum

No longer caches RctType in the RctSignatures as well.

* Replace Vec<Bulletproofs> with Bulletproofs

Monero uses aggregated range proofs, so there's only ever one Bulletproof. This
is enforced with a consensus rule as well, making this safe.

As for why Monero uses a vec, it's probably due to the lack of variadic typing
used. It's effectively an Option for them, yet we don't need an Option since we
do have variadic typing (enums).

* Add necessary checks to Eventuality re: supported protocols

* Fix for block 202612 and fix merkle root calculations

* MLSAG (de)serialisation fix

ss_2_elements will not always be 2 as rct type 1 transactions are not enforced to have one input

* Revert "MLSAG (de)serialisation fix"

This reverts commit 5e710e0c96.

here it checks number of MGs == number of inputs:
0a1eaf26f9/src/cryptonote_core/tx_verification_utils.cpp (L60-59)

and here it checks for RctTypeFull number of MGs == 1:
0a1eaf26f9/src/ringct/rctSigs.cpp (L1325)

so number of inputs == 1
so ss_2_elements == 2

* update `MlsagAggregate` comment

* cargo update

Resolves a yanked crate

* Move location of serai-client in Cargo.toml

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-07-04 17:18:05 -04:00
Luke Parker
0f80f6ec7d Move location of serai-client in Cargo.toml 2023-07-04 13:21:52 -04:00
Luke Parker
740274210b cargo update
Resolves a yanked crate
2023-07-04 13:20:59 -04:00
Luke Parker
6ac57be4e3 Disable Rust caching
We hit the cache limit after just one or two builds, making it infeasible.
2023-07-03 18:03:09 -04:00
Luke Parker
08e7ca955b Correct depends for processor-messages 2023-07-03 12:40:56 -04:00
Luke Parker
239800cfcf Update monero-tests workflow to new name for the processor 2023-07-03 09:12:29 -04:00
Luke Parker
d49c636f0f Use serai- prefixes on Serai-specific packages
Fixes deny.toml, also runs a minor cargo update shrinking the tree.
2023-07-03 08:50:23 -04:00
Boog900
30834fe4d2 std-shims: fix Read for &[u8] 2023-07-03 07:13:06 -04:00
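The behavior in question mirrors std's Read for &[u8]: copy as many bytes as fit, then advance the slice past them. A rough sketch against a local trait (not the std-shims source):

  trait Read {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()>;
  }

  impl<'a> Read for &'a [u8] {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize, ()> {
      let len = self.len().min(buf.len());
      buf[.. len].copy_from_slice(&self[.. len]);
      // Consume the bytes just read by advancing the slice.
      *self = &self[len ..];
      Ok(len)
    }
  }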
GitHub Actions
d928b787f7 Update nightly 2023-07-03 07:10:53 -04:00
Luke Parker
c7b232949a Correct deny.toml with inclusion of message-queue 2023-07-03 07:09:35 -04:00
Luke Parker
acf2469dd8 cargo update
Resolves https://github.com/serai-dex/serai/security/dependabot/29
2023-07-01 20:27:03 -04:00
Luke Parker
6267acf3df Add a message queue
This is intended to be a reliable transport between the processors and
coordinator. Since it'll be intranet only, it's written to never fail.

Primarily needs testing and a proper ID.
2023-07-01 08:53:46 -04:00
Luke Parker
a95ecc2512 Represent RCT amounts with None, not 0.
Fixes #282.

Does allow any v1 TXs which exist, and v2 miner-TXs, to specify Some(0). As far
as I can tell, both were/are theoretically possible.
2023-06-29 13:16:51 -04:00
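The shape of the change, as an illustrative sketch (the field and type names are assumptions): a transparent amount is Some, while an RCT-masked amount is None, instead of overloading 0.

  struct Output {
    // None means the amount is hidden behind a commitment, not that it's zero.
    amount: Option<u64>,
  }

  fn is_masked(output: &Output) -> bool {
    output.amount.is_none()
  }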
Luke Parker
ac708b3b2a no-std support for monero-serai (#311)
* Move monero-serai from std to std-shims, where possible

* no-std fixes

* Make the HttpRpc its own feature, thiserror only on std

* Drop monero-rs's epee for a homegrown one

We only need it for a single function. While I tried jeffro's, it didn't work
out of the box, had three unimplemented!s, and is nowhere near viable for
no_std.

Fixes #182, though should be further tested.

* no-std monero-serai

* Allow base58-monero via git

* cargo fmt
2023-06-29 04:14:29 -04:00
Luke Parker
d25c668ee4 Replace lazy_static with OnceLock inside monero-serai
lazy_static, if no_std environments were used, effectively required always
using spin locks. This resolves the ergonomics of that while adopting Rust std
code.

no_std does still use a spin based solution. Theoretically, we could use
atomics, yet writing our own Mutex wasn't a priority.
2023-06-28 21:45:57 -04:00
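A minimal std sketch of the OnceLock pattern replacing lazy_static (the static's name and contents are placeholders; per the commit, no_std still uses a spin-based fallback):

  use std::sync::OnceLock;

  static PRECOMPUTED_TABLE: OnceLock<Vec<u64>> = OnceLock::new();

  fn precomputed_table() -> &'static Vec<u64> {
    // The closure runs at most once; later callers get the cached value.
    PRECOMPUTED_TABLE.get_or_init(|| (0 .. 1024).collect())
  }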
GitHub Actions
8ced63eaac Update nightly 2023-06-28 18:42:19 -04:00
Luke Parker
f6a497f3ac Slight terminology correction in sync test
Also correct a mistake from merging the most recent polkadot version.
2023-06-28 15:20:50 -04:00
akildemir
790fe7ee23 fix tributary sync test 2023-06-28 15:01:55 -04:00
Luke Parker
8c020abb86 Update to substrate polkadot-v0.9.43 2023-06-28 14:57:58 -04:00
Luke Parker
21f0bb2721 Pin setup-protoc to v2.0.0 2023-06-28 12:28:14 -04:00
Luke Parker
385ed2e97a Build no-std tests with RISC-V 32 IMAC
Turns out wasm still has std, making it suboptimal to use here.
2023-06-28 12:26:53 -04:00
Luke Parker
fca567f61d cargo update
Resolves an openssl advisory and nets ~-8 crates.
2023-06-22 06:25:33 -04:00
Luke Parker
dfa3106a38 Fix incorrect sig_hash generation
sig_hash was used as a challenge. challenges should be of the form H(R, A, m).
These sig hashes were solely H(A, m), allowing trivial forgeries.
2023-06-08 06:38:25 -04:00
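The shape of the fix: the challenge must bind the nonce commitment R, i.e. c = H(R, A, m), not just H(A, m). An illustrative sketch only; the hash and encodings here are assumptions, not the repository's actual construction.

  use sha2::{Digest, Sha256};

  fn challenge(nonce_commitment: &[u8; 32], public_key: &[u8; 32], msg: &[u8]) -> [u8; 32] {
    let mut hasher = Sha256::new();
    hasher.update(nonce_commitment); // omitting R is what allowed trivial forgeries
    hasher.update(public_key);
    hasher.update(msg);
    hasher.finalize().into()
  }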
Luke Parker
c6982b5dfc Ensure canonical points in the cross-group DLEq proof 2023-05-30 22:05:52 -04:00
Luke Parker
1aa293cc4a Fix for prior commit 2023-05-27 04:15:57 -04:00
Luke Parker
8a24fc39a6 Only scan v2 Monero TXs 2023-05-27 04:13:40 -04:00
Luke Parker
40b2920412 Remove signed Substrate TXs from Coordinator 2023-05-13 22:43:13 -04:00
Luke Parker
47f8766da6 Use proper messages for ValidatorSets/InInstructions pallet
Provides a DST, and associated metadata as beneficial.

Also utilizes MuSig's context to session-bind. Since set_keys_messages also
binds to set, this is semi-redundant, yet that's appreciated.
2023-05-13 04:40:16 -04:00
Luke Parker
663b5f4b50 Add a context to MuSig key aggregation 2023-05-13 04:04:14 -04:00
Luke Parker
227176e4b8 Correct various no_std definitions 2023-05-13 04:03:56 -04:00
Luke Parker
f069567f12 Use a MuSig signature to publish validator set key pairs to Serai
The processor/coordinator flow still has to be rewritten.
2023-05-13 02:15:41 -04:00
Luke Parker
84c2d73093 Do the minimal amount of work for dkg to compile under no-std
The Substrate runtime requires access to the MuSig key aggregation function.

\#279 related.
2023-05-12 23:25:17 -04:00
Luke Parker
4d50b6892c Add a dedicated function to get a MuSig key 2023-05-11 03:21:54 -04:00
Luke Parker
3eade48a6f cargo update
Resolves a yanked crate and removes some duplicated dependencies.
2023-05-10 07:34:07 -04:00
Luke Parker
89974c529a Correct 2/3rds definitions throughout the codebase
The prior formula failed for some values, such as 20.
20 / 3 = 6, * 2 = 12, + 1 = 13. 13 is 65%, not >= 67.
2023-05-10 06:29:21 -04:00
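Worked out: the supermajority bound should multiply before dividing, i.e. floor(2n / 3) + 1 (the exact in-tree expression may differ).

  fn supermajority(n: u64) -> u64 {
    (n * 2 / 3) + 1
  }

  fn main() {
    assert_eq!(20 / 3 * 2 + 1, 13); // the prior divide-first form: 13 of 20 is only 65%
    assert_eq!(supermajority(20), 14); // 14 of 20 is 70%, a true > 2/3 supermajority
  }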
Luke Parker
ffea02dfbf Implement MuSig key aggregation into DKG
Isn't spec compliant due to the lack of a spec to be compliant to.

Slight deviation from the paper by using a unique list instead of a multiset.

Closes #186, progresses #277.
2023-05-10 06:25:40 -04:00
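The aggregation rule at play: each key receives a binding coefficient a_i = H(L, X_i) over the unique key list L, and the group key is X = sum(a_i * X_i). A rough sketch over Ed25519 with SHA-512; the actual implementation is generic over its ciphersuite, so treat the curve, hashes, and names here as assumptions.

  use curve25519_dalek::{edwards::EdwardsPoint, scalar::Scalar, traits::Identity};
  use sha2::{Digest, Sha512};

  fn musig_aggregate(keys: &[EdwardsPoint]) -> EdwardsPoint {
    // Commit to the full (unique, ordered) list of keys.
    let mut list_hasher = Sha512::new();
    for key in keys {
      list_hasher.update(key.compress().as_bytes());
    }
    let mut list_hash = [0; 64];
    list_hash.copy_from_slice(&list_hasher.finalize());

    let mut aggregate = EdwardsPoint::identity();
    for key in keys {
      // a_i = H(L, X_i)
      let mut coeff_hasher = Sha512::new();
      coeff_hasher.update(list_hash);
      coeff_hasher.update(key.compress().as_bytes());
      let mut wide = [0; 64];
      wide.copy_from_slice(&coeff_hasher.finalize());
      aggregate += Scalar::from_bytes_mod_order_wide(&wide) * key;
    }
    aggregate
  }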
Luke Parker
f55e9b40e6 Have coordinator publish batches to Substrate 2023-05-10 01:46:20 -04:00
Luke Parker
a70df6a449 Remove TODO about code de-duplication
It's infeasible to write a macro/function there. Does add a type alias which
makes things cleaner.
2023-05-10 01:19:01 -04:00
Luke Parker
168f2899f0 Create a vote transaction upon GeneratedKeyPair 2023-05-10 00:46:51 -04:00
Luke Parker
c95bdb6752 Properly get genesis for a Processor message 2023-05-09 23:51:05 -04:00
Luke Parker
88f0e89350 Ensure Tributary commits are minimal 2023-05-09 23:45:05 -04:00
Luke Parker
7b7ddbdd97 Move the coordinator to a n-processor design 2023-05-09 23:44:41 -04:00
Luke Parker
9175383e89 Spawn a new async task for each block message
This probably should be done with n-long lived tasks, one per Tributary. While
this may not be suitably performant long-term (potential DoS vector), this at
least resolves the halting concerns.
2023-05-09 17:05:33 -04:00
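The described shape, as a minimal sketch with placeholder types: each incoming block message is handed to its own task so one slow message can't halt the others.

  use tokio::sync::mpsc;

  async fn handle_messages(mut messages: mpsc::UnboundedReceiver<Vec<u8>>) {
    while let Some(msg) = messages.recv().await {
      // Each message gets its own task instead of being handled inline.
      tokio::spawn(async move {
        // placeholder for the real block-message handler
        let _ = msg;
      });
    }
  }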
Luke Parker
029b6c53a1 Use U448 for Ed448 instead of U512 2023-05-09 04:12:13 -04:00
Luke Parker
219adc7657 Rename uid to intent 2023-05-08 22:21:41 -04:00
Luke Parker
964fdee175 Publish ExternalBlock/SubstrateBlock, delay *Preprocess until ID acknowledged
Adds a channel for the Tributary scanner to communicate when an ID has been
acknowledged.
2023-05-08 22:20:51 -04:00
Luke Parker
a7f2740dfb Correct Serai Dockerfile 2023-05-08 02:08:31 -04:00
Luke Parker
0c9c1aeff1 Correct processor's handling of the new Monero RPC code 2023-05-02 03:40:49 -04:00
Luke Parker
adfbde6e24 Support arbitrary RPC providers in monero-serai
Sets a clean path for no-std premised RPCs (buffers to an external RPC impl)/
Tor-based RPCs/client-side load balancing/...
2023-05-02 02:39:08 -04:00
Luke Parker
5765d1d278 Update to May's nightly
Doesn't use the PR due to the needed changes.
2023-05-01 04:58:50 -04:00
Luke Parker
78c00bde3d Correct error message in ff-group-tests 2023-05-01 03:18:11 -04:00
Luke Parker
c0001f5ff2 Update to substrate polkadot-v0.9.42 2023-05-01 03:17:37 -04:00
Luke Parker
6032af6692 Have Coordinator MainDb take a mutable borrow 2023-04-26 00:10:06 -04:00
Luke Parker
7824b6cb8b Document the processor/tributary/coordinator/serai flow 2023-04-25 15:05:58 -04:00
Luke Parker
78d5372fb7 Initial code to handle messages from processors 2023-04-25 03:14:42 -04:00
Luke Parker
cc531d630e Add a UID function to messages
When we receive messages, we're provided with a message ID we can use to
prevent handling an item multiple times. That doesn't prevent us from *sending*
an item multiple times though. Thanks to the UID system, we can now not send if
already present.

Alternatively, we can remove the ordered message ID for just the UID, allowing
duplicates to be sent without issue, and handled on the receiving end.
2023-04-25 02:46:18 -04:00
Luke Parker
09d96822ca Document a pair of panics requiring 256 GB of RAM/4 GB of a context 2023-04-24 23:49:06 -04:00
Luke Parker
7a8f8c2d3d Document panic in FROST 2023-04-24 23:19:23 -04:00
Luke Parker
e74b4ab94f Add a TributaryReader which doesn't require a borrow to operate
Reduces lock contention.

Additionally changes block_key to include the genesis. While not technically
needed, the lack of genesis introduced a side effect where any Tributary on the
database could return the block of any other Tributary. While that wasn't a
security issue, returning it suggested it was on-chain when it wasn't. This may
have been usable to create issues.
2023-04-24 07:02:00 -04:00
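The keying change, sketched (the literal prefix and layout are assumptions): including the genesis namespaces each Tributary's blocks within the shared database, so one chain can't return another's block.

  fn block_key(genesis: &[u8; 32], hash: &[u8; 32]) -> Vec<u8> {
    let mut key = Vec::with_capacity(16 + 32 + 32);
    key.extend(b"tributary_block_");
    key.extend(genesis); // without this, distinct Tributaries shared one key space
    key.extend(hash);
    key
  }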
Luke Parker
e0820759c0 Tweak tests workflow 2023-04-24 06:16:43 -04:00
Luke Parker
2feebe536e Test handle_p2p and Tributary syncing
Includes bug fixes.
2023-04-24 03:30:19 -04:00
Luke Parker
cc491ee1e1 Don't return from sync_block until the Tendermint machine returns if it's valid or not
We had a race condition where we'd be informed of blocks 1 .. 3, and
immediately add 1 .. 3. Because we immediately tried to add 2 after 1, it'd
fail since the tip was still the genesis, yet 2 needs the tip to be 1.

Adding a channel, while ugly, was the simplest way to accomplish this.

Also has any added block be broadcasted. Else there's a race condition where a
node which syncs up to the most recent block does so, yet fails to add the next
block when it's committed to.
2023-04-24 02:46:13 -04:00
Luke Parker
14388e746c Implement Tributary syncing
Also adds a forwards-lookup to the Tributary blockchain.
2023-04-24 00:53:18 -04:00
Luke Parker
215155f84b Remove reliance on a blockchain read lock from block/commit 2023-04-23 23:51:10 -04:00
Luke Parker
c476f9b640 Break coordinator main into multiple functions
Also moves from std::sync::RwLock to tokio::sync::RwLock to prevent wasting
cycles on spinning.
2023-04-23 23:15:15 -04:00
Luke Parker
be8c25aef0 Move json word lists to rs
Allows building the seed code without serde_json.
2023-04-23 22:26:05 -04:00
Luke Parker
fb296a9c2e cargo update 2023-04-23 19:21:24 -04:00
Luke Parker
aa0ec4ac41 cargo fmt 2023-04-23 18:56:48 -04:00
Luke Parker
05b1fc5f05 Send a heartbeat message when a Tributary falls behind 2023-04-23 18:55:43 -04:00
Luke Parker
72633d6421 Clarify Arc RwLocks and sleeps in coordinator 2023-04-23 18:29:50 -04:00
Luke Parker
ad5522d854 Start handling P2P messages
This defines the start of a very complex series of locks I'm really unhappy
with. At the same time, there's not immediately a better solution. This also
should work without issue.
2023-04-23 17:01:30 -04:00
Luke Parker
f2d9d70068 Reload Tributaries
add_active_tributary writes the spec to disk before it returns, so even if the
VecDeque it pushes to isn't popped, the tributary will still be loaded on boot.
2023-04-23 04:31:00 -04:00
Luke Parker
2b09309adc Handle adding new Tributaries
Removes last_block as an argument from Tendermint. It now loads from the DB as
needed. While slightly less performant, it's easiest and should be fine.
2023-04-23 03:51:26 -04:00
Luke Parker
bf9ec410db Additionally test DKGShares 2023-04-23 02:18:46 -04:00
Luke Parker
e0dc5d29ad Tributary test wait_for_tx_inclusion function 2023-04-23 01:52:19 -04:00
Luke Parker
710e6e5217 Add Transaction::sign.
While I don't love the introduction of empty_signed, it's practically fine.
2023-04-23 01:25:45 -04:00
Luke Parker
3f6565588f Test handling of DKG commitments transactions 2023-04-23 01:00:46 -04:00
Luke Parker
af84b7f707 Add a test for Tributary
Further fleshes out the Tributary testing code.
2023-04-22 22:28:20 -04:00
Luke Parker
8c74576cf0 Add a test to the coordinator for running a Tributary
Impls a LocalP2p for testing.

Moves rebroadcasting into Tendermint, since it's what knows if a message is
fully valid + original.

Removes TributarySpec::validators() HashMap, as its non-determinism caused
different instances to have different round robin schedules. It was already
prior moved to a Vec for this issue, so I'm unsure why this remnant existed.

Also renames the GH no-std workflow from the prior commit.
2023-04-22 10:49:52 -04:00
Luke Parker
1e448dec21 Add no_std support to transcript, dalek-ff-group, ed448, ciphersuite, multiexp, schnorr, and monero-generators
transcript, dalek-ff-group, ed448, and ciphersuite are all usable with no_std
alone. The rest additionally require alloc.

Part of #279.
2023-04-22 04:38:47 -04:00
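The usual shape of such support, as a lib.rs-level sketch (the feature names are illustrative): the crate is no_std unless the std feature is enabled, with alloc pulled in where collections are needed.

  #![cfg_attr(not(feature = "std"), no_std)]

  // alloc provides Vec and friends when building without std.
  #[cfg(not(feature = "std"))]
  extern crate alloc;
  #[cfg(not(feature = "std"))]
  use alloc::vec::Vec;

  pub fn to_le_bytes_all(values: &[u16]) -> Vec<u8> {
    values.iter().flat_map(|v| v.to_le_bytes()).collect()
  }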
Luke Parker
ef0c901455 Add recent bloat checks added to signer to substrate_signer as well 2023-04-20 15:45:32 -04:00
Luke Parker
09c3c9cc9e Route the SubstrateBlock message, which is the last Tributary transaction type 2023-04-20 15:37:22 -04:00
Luke Parker
a404944b90 Add a SubstrateBlockAck message to the processor
When a Substrate block occurs, the coordinator is expected to emit
SubstrateBlock. This causes the processor to begin a variety of plans. The
processor now emits SubstrateBlockAck, explicitly listing all plan IDs, before
starting signing.

This lets the coordinator provide a SubstrateBlock transaction, and with it,
recognize all plan IDs as valid.

Prior, we would've had to have a spotty algorithm based upon the upcoming
Preprocess messages, or if we immediately provided the SubstrateBlock
transaction, then wait for the processor to inform us of the contained plans.

This creates an explicitly proper async flow not reliant on waiting for data
availability.

Alternatively, we could've replaced Preprocess with (Block, Vec<Preprocess>).
This would've been more efficient, yet also clunky due to the multiple usages
of the Preprocess message.
2023-04-20 15:26:22 -04:00
Luke Parker
70d866af6a ExternalBlock handler 2023-04-20 14:51:33 -04:00
Luke Parker
f99a91b34d Slash on unrecognized ID 2023-04-20 14:33:19 -04:00
Luke Parker
294ad08e00 Add support for multiple orderings in Provided
Necessary as our Tributary chains needed to agree when a Serai block has
occurred, and when a Monero block has occurred. Since those could happen at the
same time, some validators may put SeraiBlock before ExternalBlock and vice
versa, causing a chain halt. Now they can have distinct ordering queues.
2023-04-20 07:32:40 -04:00
Luke Parker
a26ca1a92f Split FinalizedBlock into ExternalBlock and SeraiBlock
Also re-arranges their orders.
2023-04-20 06:59:42 -04:00
Luke Parker
9c2a44f9df Apply DKG TX handling code to all sign TXs
The existing code was almost entirely applicable. It just needed to be scoped
with an ID. While the handle function is now a bit convoluted, I don't see a
better option.
2023-04-20 06:27:05 -04:00
Luke Parker
8b5eaa8092 Add additional checks to key_gen/sign
There is the ability to cause state bloat by flooding Tributary.
KeyGen/Sign specifically shouldn't allow bloat since we check the
commitments/preprocesses/shares for validity. Accordingly, any invalid data
(such as bloat) should be detected.

It was possible to place bloat after the valid data. Doing so would be
considered a valid KeyGen/Sign message, yet could add up to 50k kB per sign.
2023-04-20 05:36:48 -04:00
Luke Parker
8041a0d845 Initial Tributary handling 2023-04-20 05:05:17 -04:00
Luke Parker
9e1f3fc85c Make MainDB into SubstrateDB 2023-04-20 05:04:08 -04:00
Luke Parker
ee65e4df8f Resolve #68
Notably speeds up monero-serai's build and CLSAG performance.
2023-04-20 01:18:16 -04:00
Luke Parker
ff2febe5aa Move the entirety of ed448 to Residue, offering a further 2-4x speedup 2023-04-19 04:02:59 -04:00
Luke Parker
334873b6a5 Use crypto-bigint's reduction in ed448
Achieves feasible performance in the ed448 which makes it potentially viable
for real world usage.

Accordingly prepares a new release, updating the README.
2023-04-19 03:35:57 -04:00
Luke Parker
21026136bd Save keys by their tweaked group_key
Keys are referred to by their tweaked versions. If a tweak was needed, keys
would fail to confirm.
2023-04-18 14:55:15 -04:00
Luke Parker
396e5322b4 Code a method to determine the activation block before any block has consensus
[0; 32] is a magic for no block has been set yet due to this being the first
key pair. If [0; 32] is the latest finalized block, the processor determines
an activation block based on timestamps.

This doesn't use an Option for ergonomic reasons.
2023-04-18 03:04:52 -04:00
Luke Parker
9da0eb69c7 Use an enum for Coin/NetworkId
It originally wasn't an enum so software which had yet to update before an
integration wouldn't error (as now enums are strictly typed). The strict typing
is preferable though.
2023-04-18 02:04:47 -04:00
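An illustrative sketch of the strict typing preferred here (the variant names are examples, not the canonical list): an unrecognized network becomes a decode-time error rather than a silently accepted integer.

  #[derive(Clone, Copy, PartialEq, Eq, Debug)]
  pub enum NetworkId {
    Serai,
    Bitcoin,
    Ethereum,
    Monero,
  }

  impl TryFrom<u8> for NetworkId {
    type Error = ();
    fn try_from(id: u8) -> Result<NetworkId, ()> {
      Ok(match id {
        0 => NetworkId::Serai,
        1 => NetworkId::Bitcoin,
        2 => NetworkId::Ethereum,
        3 => NetworkId::Monero,
        _ => return Err(()), // strictly typed: unknown IDs error instead of passing through
      })
    }
  }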
Luke Parker
6f3b5f4535 Tweak ConfirmKeyPair to alleviate database requirements of coordinator 2023-04-18 01:09:22 -04:00
Luke Parker
e880ebb5a9 Clarify safety of Scanner::block_number and KeyGen::keys 2023-04-18 00:26:19 -04:00
Luke Parker
1036e673ce cargo update to remove usage of yanked crate 2023-04-17 23:59:04 -04:00
Luke Parker
fd1bbec134 Use a single txn for an entire coordinator message
Removes direct DB accesses where possible. Documents the safety of the rest.
Does uncover one case of unsafety not previously noted.
2023-04-17 23:55:12 -04:00
Luke Parker
7579c71765 Add note to processor_messages 2023-04-17 23:11:44 -04:00
Luke Parker
5a499de4ca Remove BatchSigned
SubstrateBlock's provision of the most recently acknowledged block has
equivalent information with the same latency. Accordingly, there's no need for
it.
2023-04-17 20:19:15 -04:00
Luke Parker
e26b861d25 Move ConfirmKeyPair from key_gen to substrate
Clarifies the emitter and accordingly why its mutations are justified.
2023-04-17 19:40:17 -04:00
Luke Parker
059e79c98a Add extensive commentary on mutable to the processor's main file
Clearly establishes why consistency is guaranteed from a Rust borrow-checker
mindset. While there are plenty of... 'violations', they're clearly explained.

Hopefully, this method of thinking helps promote/ensure consistency in the
future.
2023-04-17 19:24:02 -04:00
Luke Parker
92a868e574 Add a processor API to the coordinator 2023-04-17 02:10:33 -04:00
Luke Parker
595cd6d404 Rename transaction file to tributary, add function for genesis 2023-04-17 02:09:29 -04:00
Luke Parker
4d43c04916 Clean up the Substrate block processing code 2023-04-17 00:50:56 -04:00
Luke Parker
2604746586 Fill out code for the rest of the Substrate events 2023-04-16 03:18:52 -04:00
Luke Parker
36cdf6d4bf Have InInstructions track the latest block for a network in storage 2023-04-16 02:57:19 -04:00
Luke Parker
9676584ffe Resolve #245 2023-04-16 01:03:32 -04:00
Luke Parker
79655672ef Make progress on handling NewSet events
Further bones out the coordinator.
2023-04-16 00:51:56 -04:00
Luke Parker
fa2cf03e61 Support extracting timestamps from blocks 2023-04-16 00:31:54 -04:00
Luke Parker
92ad689c7e cargo update
Since p256 now pulls in an extra crate with this update, the {k,p}256 imports
disable default-features to prevent growing the tree.
2023-04-15 23:21:18 -04:00
Luke Parker
b2169a7316 cargo +nightly fmt 2023-04-15 23:09:39 -04:00
Luke Parker
e2571a43aa Correct processor flow to have the coordinator decide signing set/re-attempts
The signing set should be the first group to submit preprocesses to Tributary.
Re-attempts shouldn't be once every 30s, yet n blocks since the last relevant
message.

Removes the use of an async task/channel in the signer (and Substrate signer).
Also removes the need to be able to get the time from a coin's block, which was
a fragile system marked with a TODO already.
2023-04-15 23:01:07 -04:00
Luke Parker
e21fc5ff3c Merge AckBlock with Burns
Offers greater efficiency while reducing concerns re: atomicity.
2023-04-15 18:38:40 -04:00
Luke Parker
eafd054296 Start defining the coordinator 2023-04-15 17:38:47 -04:00
Luke Parker
51bf51ae1e Make unsigned private due to unsafe calling potential 2023-04-15 16:37:59 -04:00
Luke Parker
28b6bc99ac Update to the latest subxt
Writes a custom unsigned extrinsic creator due to subxt having an internal error
with the scale metadata. While the code in our scope increased, it's much more
ergonomic to our usage. We may end up rewriting most of subxt, eventually.
2023-04-15 05:23:57 -04:00
Luke Parker
ce883104b7 cargo update 2023-04-15 03:27:45 -04:00
Luke Parker
f48022c6eb Add basic getters to tributary 2023-04-15 00:41:48 -04:00
Luke Parker
124b994c23 Add a NewSet event to validator-sets
Updates to the latest serai-dex/substrate due to depending on
10ccaca0eb498a2316bbf627d419b29b1a75933a.
2023-04-15 00:40:33 -04:00
Luke Parker
2e2bc59703 Support reloading the mempool from disk 2023-04-14 15:51:56 -04:00
Luke Parker
c032f66f8a must_use annotations on DbTxn 2023-04-14 15:04:26 -04:00
Luke Parker
695d923593 Reloaded provided transactions from the disk
Also resolves a race condition by asserting provided transactions must be
unique, allowing them to be safely provided multiple times.
2023-04-14 15:03:01 -04:00
Luke Parker
63318cb728 Add a DB to Tributary
Adds support for reloading most of the blockchain.
2023-04-14 14:11:40 -04:00
Luke Parker
6f6c9f7cdf Add a dedicated db crate with a basic DB trait
It's needed by the processor and tributary (coordinator).
2023-04-14 11:47:43 -04:00
Luke Parker
04e7863dbd Documentation and cargo update 2023-04-13 21:06:11 -04:00
Luke Parker
a5002c50ec Fix the scheduler from dropping UTXOs when there weren't any payments 2023-04-13 20:59:36 -04:00
Luke Parker
72dd665ebf Add DoS limits to tributary and require provided transactions be ordered 2023-04-13 20:35:55 -04:00
Luke Parker
8b1bce6abd Add correction the last commit missed 2023-04-13 18:47:34 -04:00
Luke Parker
e73a51bfa5 Finish binding Tendermint into Tributary and define a Tributary master object 2023-04-13 18:43:27 -04:00
Luke Parker
5858b6c03e Replace Tendermint step with sync_block
Step moved a step forward after an externally synced/added block. This created
a race condition to add the block between the sync process and the Tendermint
machine. Now that the block routes through Tendermint, there is no such race
condition.
2023-04-13 18:18:29 -04:00
Luke Parker
9bea368d36 Plan scheduled payments whenever outputs are received
The scheduler prior waited for the next series of payments to be added.
2023-04-13 15:41:56 -04:00
Luke Parker
a509dbfad6 Embed the mempool into the Blockchain 2023-04-13 09:47:14 -04:00
Luke Parker
03a6470a5b Finish binding Tendermint, bar the P2P layer 2023-04-12 18:04:28 -04:00
Luke Parker
997dd611d5 Don't add blocks which aren't valid
Previously, Tendermint needed to be live more than it needed to be correct.
Under the original intention for it, correctness would fail if any coin
desynced, which would cause the node to fail entirely. By accepting a
supermajority's view of state, despite its own, a single coin's failure would
only lead to inability to participate with that single coin.

Now that Tendermint is solely for Tributary, nodes should halt a coin-specific
chain if their view of the chain differs. They are unable to meaningfully
participate regardless.

This also means a supermajority of validators can no longer fake messages from
other validators, allowing the Tributary chain to use uniform weights with much
less impact. There is still enough impact they can't be used (ability to cause
a fork), yet they should allow uniform block production (as that's solely a DoS
concern).

While we prior could've simply additionally checked signatures, add_block's
lack of a failure case would've meant it had to panic. This would've been a DoS
possible by a minority of the weight, *which affected the entire coordinator* and
therefore *the entire validator for all coins*.
2023-04-12 16:18:42 -04:00
Luke Parker
86cbf6e02e Bind the signature scheme for tendermint-machine 2023-04-12 16:06:14 -04:00
Luke Parker
8c8232516d Only allow designated participants to send transactions 2023-04-12 12:42:23 -04:00
Luke Parker
be947ce152 Add a mempool 2023-04-12 12:15:38 -04:00
Luke Parker
7c7f17aac6 Test the blockchain 2023-04-12 11:13:48 -04:00
Luke Parker
ff5c240fcc Fix a bug in the merkle algorithm 2023-04-12 10:52:28 -04:00
Luke Parker
d5a12a9b97 Make TransactionKind have a reference to Signed
Broken commit due to partial staging of one file.
2023-04-12 09:38:20 -04:00
Luke Parker
354ac856a5 Extensively test transactions 2023-04-12 08:51:40 -04:00
Luke Parker
402a7be966 Block constructor and tests 2023-04-11 20:24:27 -04:00
Luke Parker
119d25be49 Clarify transaction length sizing 2023-04-11 19:18:26 -04:00
Luke Parker
2cfee536f6 Define all coordinator transaction types 2023-04-11 19:04:53 -04:00
Luke Parker
90f67b5e54 Slight merkle improvements 2023-04-11 19:04:27 -04:00
Luke Parker
4d17b922fe Sign the genesis when signing transactions
Prevents replaying across tributaries, which is a risk for BTC/ETH (regarding key gen).
2023-04-11 19:03:52 -04:00
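A minimal sketch of the replay protection this commit describes: bind the tributary's genesis into the signed message so a signature for one tributary can't be replayed on another. The function and message layout are hypothetical, not Tributary's actual encoding:

```rust
// Hypothetical illustration: prepend the genesis hash to whatever gets signed.
fn signed_message(genesis: &[u8; 32], tx_hash: &[u8; 32]) -> Vec<u8> {
  let mut msg = Vec::with_capacity(64);
  msg.extend_from_slice(genesis); // domain-separates by tributary
  msg.extend_from_slice(tx_hash);
  msg
}

fn main() {
  let (genesis_a, genesis_b) = ([0u8; 32], [1u8; 32]);
  let tx_hash = [2u8; 32];
  // The same transaction yields different messages under different tributaries,
  // so a signature made for one won't verify on the other.
  assert_ne!(signed_message(&genesis_a, &tx_hash), signed_message(&genesis_b, &tx_hash));
}
```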
Luke Parker
7488d23e0d Add basic transaction/block code to Tributary 2023-04-11 13:42:18 -04:00
Luke Parker
a290b74805 Tweak processor's slice handling due to a CI failure
The prior code worked without issue for me locally, but apparently it didn't
always.
2023-04-11 10:37:50 -04:00
Luke Parker
61757d5e19 Remove the substrate feature from tendermint 2023-04-11 10:34:41 -04:00
Luke Parker
09f8ac37c4 Create a folder for tributary, the micro-blockchain
Moves tendermint again, this time under tributary.
2023-04-11 10:18:31 -04:00
Luke Parker
c46cf47736 Move tendermint under the coordinator
We're planning to use it in the micro-blockchain the coordinator will run.
2023-04-11 09:28:32 -04:00
Luke Parker
defce32ff1 Remove k256/p256 git revision patch
New releases of k256 and p256 make it no longer necessary.
2023-04-11 09:23:57 -04:00
Luke Parker
de52c4db7f Add empty coordinator 2023-04-11 09:21:35 -04:00
Luke Parker
d74cbe2cce Have the Scanner assign batch IDs 2023-04-11 08:47:15 -04:00
Luke Parker
caa695511b Improve log statements in processor 2023-04-11 06:06:17 -04:00
Luke Parker
7538c10159 Update processor README 2023-04-11 05:53:19 -04:00
Luke Parker
90f2b03595 Finish routing eventualities
Also corrects some misc TODOs and tidies up some log statements.
2023-04-11 05:49:27 -04:00
Luke Parker
9e78c8fc9e Test the processor's Substrate signer 2023-04-10 12:48:48 -04:00
Luke Parker
d323fc8b7b Handle signing batches in the processor
Duplicates the existing signer for one tailored to batch signing.
2023-04-10 11:11:46 -04:00
Luke Parker
82c34dcc76 Implement a FROST variant of Schnorrkel (#274)
* Minor lint

* Update frost-schnorrkel to the latest modular-frost

* Tidy up the schnorrkel library
2023-04-10 06:05:17 -04:00
Luke Parker
bc19975a8a Update Bitcoin confirmations from 3 to 6
While Bitcoin practically doesn't have long re-orgs, it is possible for a
single miner to build a long chain. Recently, a miner found 5 blocks in a row,
which would be enough to re-org a transaction Serai considered finalized.
2023-04-10 02:51:44 -04:00
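A minimal sketch of the finality rule the confirmation count feeds into; the constant and function names here are assumptions for illustration, not the processor's actual code:

```rust
// Hypothetical illustration of a confirmation-depth check.
const CONFIRMATIONS: u64 = 6; // raised from 3 per this commit

fn is_finalized(block_number: u64, latest_block_number: u64) -> bool {
  // A block has k confirmations once (k - 1) blocks have been built on top of it.
  latest_block_number >= block_number + (CONFIRMATIONS - 1)
}

fn main() {
  assert!(!is_finalized(100, 104)); // 5 confirmations: not yet final
  assert!(is_finalized(100, 105)); // 6 confirmations: final
}
```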
Luke Parker
b9f38fb354 Update processor message flow around the new SignedBatch flow 2023-04-10 02:51:36 -04:00
Luke Parker
ccec529cee cargo update
Removes yanked crate.
2023-04-10 00:27:17 -04:00
Luke Parker
1c31ca7187 Correct deny.toml 2023-04-09 02:34:31 -04:00
Luke Parker
f6206b60ec Update to bitcoin 0.30
Also performs a general update with a variety of upgraded Substrate depends.
2023-04-09 02:31:13 -04:00
Luke Parker
96525330c2 cargo update 2023-04-08 04:44:28 -04:00
Luke Parker
7abc8f19cd Move substrate/serai/* to substrate/* 2023-04-08 03:01:14 -04:00
GitHub Actions
bd06b95c05 Update nightly 2023-04-01 05:44:42 -04:00
Luke Parker
648d237df5 Finish updating to the latest Rust/handle broken cargo update 2023-04-01 05:44:18 -04:00
Luke Parker
3f4bab7f7b cargo update
Resolves yanked crate.
2023-03-31 23:15:32 -04:00
Luke Parker
426346dd5a Have the processor DKG output a Ristretto key
This will be used to sign InInstructions.
2023-03-31 10:15:07 -04:00
Luke Parker
a4f64e2651 Expand work done in provide_batch 2023-03-31 08:13:45 -04:00
Luke Parker
6fa405a728 Update Monero README 2023-03-31 07:02:57 -04:00
Luke Parker
ae4e98c052 Verify Batch signatures
Starts further fleshing out the Serai client tests with common utils.
2023-03-31 06:34:09 -04:00
Luke Parker
30b8636641 Update to the latest Substrate commit
Enables building with only the stable toolchain. The nightly toolchain is still
used for clippy in order to access additional checks.
2023-03-31 02:34:52 -04:00
Luke Parker
1610383649 Test validator set's voting on a key
Needed for the in-instructions pallet to verify in-instructions are
appropriately signed and continue developing that.

Fixes a bug in the validator-sets pallet, moves several items from the pallet
to primitives.
2023-03-30 20:32:05 -04:00
Luke Parker
9615caf3bb Move validator-sets from Key to (RistrettoPublic, Key)
Part of #241.
2023-03-30 20:32:05 -04:00
Luke Parker
8a70416fd0 Attempt to fix the Bitcoin CI's cache statement 2023-03-28 07:33:11 -04:00
Luke Parker
4f28a38ce1 Implement #212 2023-03-28 05:34:21 -04:00
Luke Parker
47be373eb0 Resolve #268 by adding a Zeroize to DigestTranscript which writes a full block
This is a 'better-than-nothing' attempt to invalidate its state.

Also replaces black_box features with usage of the rustversion crate.
2023-03-28 04:43:10 -04:00
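A best-effort sketch of the 'write a full block' idea described here, assuming a digest-backed transcript; the type is hypothetical and sha2 (an external crate) is used purely for illustration:

```rust
// Hypothetical illustration; not the actual transcript crate's code.
use sha2::{Digest, Sha256};

struct DigestTranscript(Sha256);

impl DigestTranscript {
  // 'Better-than-nothing' state invalidation: absorb a full block (64 bytes for
  // SHA-256) of zeroes so buffered input is pushed through the compression
  // function and the retained state no longer directly holds the prior input.
  fn zeroize_state(&mut self) {
    self.0.update([0u8; 64]);
  }
}

fn main() {
  let mut transcript = DigestTranscript(Sha256::new());
  transcript.0.update(b"secret input");
  transcript.zeroize_state();
}
```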
Luke Parker
79aff5d4c8 ff 0.13 (#269)
* Partial move to ff 0.13

It turns out the newly released k256 0.12 isn't on ff 0.13, preventing further
work at this time.

* Update all crates to work on ff 0.13

The provided curves still need to be expanded to fit the new API.

* Finish adding dalek-ff-group ff 0.13 constants

* Correct FieldElement::product definition

Also stops exporting macros.

* Test most new parts of ff 0.13

* Additionally test ff-group-tests with BLS12-381 and the pasta curves

We only tested curves from RustCrypto. Now we test a curve offered by zk-crypto,
the group behind ff/group, and the pasta curves, which are by Zcash (though
Zcash developers are also behind zk-crypto).

* Finish Ed448

Fully specifies all constants, passes all tests in ff-group-tests, and finishes moving to ff-0.13.

* Add RustCrypto/elliptic-curves to allowed git repos

Needed due to k256/p256 incorrectly defining product.

* Finish writing ff 0.13 tests

* Add additional comments to dalek

* Further comments

* Update ethereum-serai to ff 0.13
2023-03-28 04:38:01 -04:00
Luke Parker
a9f6300e86 Remove unused dependencies from runtime/node 2023-03-26 23:10:16 -04:00
Luke Parker
ff70cbb223 Remove the genesis volume added for Tendermint 2023-03-26 20:20:32 -04:00
Luke Parker
17818c2a02 Default to the wasm executor
https://github.com/paritytech/substrate/issues/10579 has the rationale for this.
2023-03-26 18:57:49 -04:00
Luke Parker
aea6ac104f Remove Tendermint for GRANDPA
Updates to polkadot-v0.9.40, with a variety of dependency updates accordingly.
Substrate thankfully now uses k256 0.13, paving the way for #256. We couldn't
upgrade to polkadot-v0.9.40 without this due to polkadot-v0.9.40 having
fundamental changes to syncing. While we could've updated tendermint, it's not
worth the continued development effort given its inability to work with
multiple validator sets.

Purges sc-tendermint. Keeps tendermint-machine for #163.

Closes #137, #148, #157, #171. #96 and #99 should be re-scoped/clarified. #134
and #159 also should be clarified. #169 is also no longer a priority since
we're only considering temporal deployments of tendermint. #170 also isn't
since we're looking at effectively sharded validator sets, so there should
be no singular large set needing high performance.
2023-03-26 16:49:18 -04:00
Luke Parker
534e1bb11d Fix Monero's Extra::fee_weight and handling of data limits 2023-03-26 03:43:51 -04:00
Luke Parker
c182b804bc Move in instructions from inherent transactions to unsigned transactions
The original intent was to use inherent transactions to prevent needing to vote
on-chain, which would spam the chain with worthless votes. Inherent
transactions, and our Tendermint library, would use the BFT process's voting
to also vote on all included transactions. This perfectly collapses integrity
voting, creating *no additional on-chain costs*.

Unfortunately, this led to issues such as #6, along with questions of validator
scalability when all validators are expected to participate in consensus (in
order to vote on if the included instructions are valid). This has been
summarized in #241.

With this change, we can remove Tendermint from Substrate. This greatly
decreases our complexity. While I'm unhappy with the amount of time spent on
it, just to reach this conclusion, thankfully tendermint-machine itself is
still usable for #163. This also has reached a tipping point recently as the
polkadot-v0.9.40 branch of substrate changed how syncing works, requiring
further changes to sc-tendermint. These have no value if we're just going to
get rid of it later, due to fundamental design issues, yet I would like to
keep Substrate updated.

This should be followed by moving back to GRANDPA, enabling closing most open
Tendermint issues.

Please note the current in-instructions-pallet does not actually verify the
included signature yet. It's marked TODO, despite this being critical.
2023-03-26 02:58:04 -04:00
Luke Parker
9157f8d0a0 Update processor/correct prior commit 2023-03-25 04:06:25 -04:00
Luke Parker
839734354a Update Getting Started 2023-03-25 01:44:07 -04:00
Luke Parker
d954e67238 Ensure InInstruction data is properly limited
Bitcoin didn't check, assuming data was <= 80 bytes thanks to being in
OP_RETURN. An additional global check has been added.
2023-03-25 01:36:28 -04:00
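A minimal sketch of a global data-length check of the kind this commit adds. The constant value and names are assumptions (80 bytes is the Bitcoin OP_RETURN figure mentioned in the commit, not necessarily the actual global limit):

```rust
// Hypothetical illustration of a global InInstruction data-size bound.
const MAX_DATA_LEN: usize = 80; // assumed value for illustration

fn validate_in_instruction_data(data: &[u8]) -> Result<(), &'static str> {
  if data.len() > MAX_DATA_LEN {
    return Err("InInstruction data exceeds the global limit");
  }
  Ok(())
}

fn main() {
  assert!(validate_in_instruction_data(&[0u8; 80]).is_ok());
  assert!(validate_in_instruction_data(&[0u8; 81]).is_err());
}
```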
Luke Parker
6a981dae6e Make Validator Set Network a first-class property
There already should only be one validator set operating per network. This
formalizes that. Then, validator sets used to be able to operate over multiple
networks. That is no longer possible.

This formalization increases validator set flexibility while also allowing the
ability to formalize the definition of tokens (which is necessary to define a
gas asset).
2023-03-25 01:30:53 -04:00
Luke Parker
397d79040c Update monero-serai to limit the size of TX extra 2023-03-25 01:26:42 -04:00
Luke Parker
293731f739 cargo update 2023-03-25 00:45:33 -04:00
Luke Parker
8447021ba1 Add a way to check if blocks completed eventualities 2023-03-22 22:45:41 -04:00
Luke Parker
11a0803ea5 Make the bitcoin Algorithm test a unit test 2023-03-21 18:50:23 -04:00
Luke Parker
d58a7b0ebf cargo fmt 2023-03-20 20:43:52 -04:00
Luke Parker
952cf280c2 Bump crate versions 2023-03-20 20:34:41 -04:00
Luke Parker
8d4d630e0f Fully document crypto/ 2023-03-20 20:10:00 -04:00
Luke Parker
e1bb2c191b Correct audit file upload
Git had replaced the 'line endings'.
2023-03-20 17:35:45 -04:00
Luke Parker
df2bb79a53 Clarify further changes have not been audited 2023-03-20 16:24:04 -04:00
Luke Parker
515587406f Finish testing bitcoin-serai 2023-03-20 05:47:07 -04:00
Luke Parker
7fc8630d39 Test bitcoin-serai
Also resolves a few rough edges.
2023-03-20 04:46:27 -04:00
Luke Parker
6a2a353b91 cargo fmt 2023-03-20 01:12:09 -04:00
Luke Parker
66eaf6ab61 Remove the code for the CI to spawn a Serai node
The serai-client test runner controls the node on its end.

Also bumps the Monero version.
2023-03-20 01:07:43 -04:00
Luke Parker
597122b2e0 Add a Scanner to bitcoin-serai
Moves the processor to it. This ends up as a net-neutral LoC change to the
processor, unfortunately, yet this makes bitcoin-serai safer/easier to use, and
increases the processor's usage of bitcoin-serai.

Also re-organizes bitcoin-serai a bit.
2023-03-20 01:03:39 -04:00
Luke Parker
0aa6b561b7 Bitcoin SpendableOutput::new 2023-03-19 23:22:56 -04:00
Luke Parker
60ca3a9599 Have SeraiError::RpcError include the subxt Error
Part of debugging #262.
2023-03-19 22:02:30 -04:00
Luke Parker
59891594aa Fix processor's determination of protocol to support integration tests
I'm really unhappy with a cfg(test) within the codebase. The double checking of
it makes it tolerable though, especially when compared to dropping these tests.
2023-03-19 21:05:13 -04:00
Luke Parker
2fdf8f8285 Remove unused import 2023-03-17 23:59:46 -04:00
Luke Parker
55e0253225 Again tweak timeouts for #260 2023-03-17 23:45:46 -04:00
Luke Parker
918cce3494 Add a proper error to Bitcoin's SignableTransaction::new
Also adds documentation to various parts of bitcoin.
2023-03-17 23:43:32 -04:00
Luke Parker
6ac570365f Fix #260
The issue was the 10s timeouts were too fast for the CI runner, since the
Scanner only polls every five seconds (already cutting into the window).
2023-03-17 21:38:09 -04:00
Luke Parker
0525ba2f62 Document Bitcoin RPC and make it more robust 2023-03-17 21:25:38 -04:00
Luke Parker
9b47ad56bb Create a dedicated Algorithm for Bitcoin Schnorr 2023-03-17 20:47:42 -04:00
Luke Parker
5e771b1bea Run bitcoin as daemon 2023-03-17 15:32:05 -04:00
Luke Parker
9952c67d98 Update crypto-bigint to 0.5 2023-03-17 15:31:04 -04:00
Luke Parker
f2218b4d4e Attempt to fix Bitcoin node CI 2023-03-17 13:37:18 -04:00
Luke Parker
780b79c3d8 Properly run processor Monero tests
Since it wasn't being compiled with the Monero feature, it wasn't running the
Monero tests.
2023-03-17 13:33:50 -04:00
Luke Parker
ba82dac18c Processor (#259)
* Initial work on a message box

* Finish message-box (untested)

* Expand documentation

* Embed the recipient in the signature challenge

Prevents a message from A -> B from being read as from A -> C.

* Update documentation by bifurcating sender/receiver

* Panic on receiving an invalid signature

If we've received an invalid signature in an authenticated system, a 
service is malicious, critically faulty (equivalent to malicious), or 
the message layer has been compromised (or is otherwise critically 
faulty).

Please note a receiver who handles a message they shouldn't will trigger 
this. That falls under being critically faulty.

* Documentation and helper methods

SecureMessage::new and SecureMessage::serialize.

Secure Debug for MessageBox.

* Have SecureMessage not be serialized by default

Allows passing around in-memory, if desired, and moves the error from 
decrypt to new (which performs deserialization).

Decrypt no longer has an error since it panics if given an invalid 
signature, due to this being intranet code.

* Explain and improve nonce handling

Includes a missing zeroize call.

* Rebase to latest develop

Updates to transcript 0.2.0.

* Add a test for the MessageBox

* Export PrivateKey and PublicKey

* Also test serialization

* Add a key_gen binary to message_box

* Have SecureMessage support Serde

* Add encrypt_to_bytes and decrypt_from_bytes

* Support String ser via base64

* Rename encrypt/decrypt to encrypt_bytes/decrypt_to_bytes

* Directly operate with values supporting Borsh

* Use bincode instead of Borsh

By staying inside of serde, we'll support many more structs. While 
bincode isn't canonical, we don't need canonicity on an authenticated, 
internal system.

* Turn PrivateKey, PublicKey into structs

Uses Zeroizing for the PrivateKey per #150.

* from_string functions intended for loading from an env

* Use &str for PublicKey from_string (now from_str)

The PrivateKey takes the String to take ownership of its memory and 
zeroize it. That isn't needed with PublicKeys.

* Finish updating from develop

* Resolve warning

* Use ZeroizingAlloc on the key_gen binary

* Move message-box from crypto/ to common/

* Move key serialization functions to ser

* add/remove functions in MessageBox

* Implement Hash on dalek_ff_group Points

* Make MessageBox generic to its key

Exposes a &'static str variant for internal use and a RistrettoPoint 
variant for external use.

* Add Private to_string as deprecated

Stub before more competent tooling is deployed.

* Private to_public

* Test both Internal and External MessageBox, only use PublicKey in the pub API

* Remove panics on invalid signatures

Leftover from when this was solely internal which is now unsafe.

* Chicken scratch a Scanner task

* Add a write function to the DKG library

Enables writing directly to a file.

Also modifies serialize to return Zeroizing<Vec<u8>> instead of just Vec<u8>.

* Make dkg::encryption pub

* Remove encryption from MessageBox

* Use a 64-bit block number in Substrate

We use a 64-bit block number in general since u32 only works for 120 years
(with a 1 second block time). As some chains even push the 1 second threshold,
especially ones based on DAG consensus, this becomes potentially as low as 60
years.

While that should still be plenty, it's not worth wondering/debating. Since
Serai uses 64-bit block numbers elsewhere, this ensures consistency.

* Misc crypto lints

* Get the scanner scratch to compile

* Initial scanner test

* First few lines of scheduler

* Further work on scheduler, solidify API

* Define Scheduler TX format

* Branch creation algorithm

* Document when the branch algorithm isn't perfect

* Only scanned confirmed blocks

* Document Coin

* Remove Canonical/ChainNumber from processor

The processor should be abstracted from canonical numbers thanks to the
coordinator, making this unnecessary.

* Add README documenting processor flow

* Use Zeroize on substrate primitives

* Define messages from/to the processor

* Correct over-specified versioning

* Correct build re: in_instructions::primitives

* Debug/some serde in crypto/

* Use a struct for ValidatorSetInstance

* Add a processor key_gen task

Redos DB handling code.

* Replace trait + impl with wrapper struct

* Add a key confirmation flow to the key gen task

* Document concerns on key_gen

* Start on a signer task

* Add Send to FROST traits

* Move processor lib.rs to main.rs

Adds a dummy main to reduce clippy dead_code warnings.

* Further flesh out main.rs

* Move the DB trait to AsRef<[u8]>

* Signer task

* Remove a panic in bitcoin when there's insufficient funds

Unchecked underflow.

* Have Monero's mine_block mine one block, not 10

It was initially a nicety to deal with the 10 block lock. C::CONFIRMATIONS
should be used for that instead.

* Test signer

* Replace channel expects with log statements

The expects weren't problematic and had nicer code. They just clutter test
output.

* Remove the old wallet file

It predates the coordinator design and shouldn't be used.

* Rename tests/scan.rs to tests/scanner.rs

* Add a wallet test

Complements the recently removed wallet file by adding a test for the scanner,
scheduler, and signer together.

* Work on a run function

Triggers a clippy ICE.

* Resolve clippy ICE

The issue was the non-fully specified lambda in signer.

* Add KeyGenEvent and KeyGenOrder

Needed so we get KeyConfirmed messages from the key gen task.

While we could've read the CoordinatorMessage to see that, routing through the
key gen tasks ensures we only handle it once it's been successfully saved to
disk.

* Expand scanner test

* Clarify processor documentation

* Have the Scanner load keys on boot/save outputs to disk

* Use Vec<u8> for Block ID

Much more flexible.

* Panic if we see the same output multiple times

* Have the Scanner DB mark itself as corrupt when doing a multi-put

This REALLY should be a TX. Since we don't have a TX API right now, this at
least offers detection.

* Have DST'd DB keys accept AsRef<[u8]>

* Restore polling all signers

Writes a custom future to do so.

Also loads signers on boot using what the scanner claims are active keys.

* Schedule OutInstructions

Adds a data field to Payment.

Also cleans some dead code.

* Panic if we create an invalid transaction

Saves the TX once it's successfully signed so if we do panic, we have a copy.

* Route coordinator messages to their respective signer

Requires adding key to the SignId.

* Send SignTransaction orders for all plans

* Add a timer to retry sign_plans when prepare_send fails

* Minor fmt'ing

* Basic Fee API

* Move the change key into Plan

* Properly route activation_number

* Remove ScannerEvent::Block

It's not used under current designs

* Nicen logs

* Add utilities to get a block's number

* Have main issue AckBlock

Also has a few misc lints.

* Parse instructions out of outputs

* Tweak TODOs and remove an unwrap

* Update Bitcoin max input/output quantity

* Only read one piece of data from Monero

Due to output randomization, it's infeasible.

* Embed plan IDs into the TXs they create

We need to stop attempting signing if we've already signed a protocol. Ideally,
any one of the participating signers should be able to provide a proof the TX
was successfully signed. We can't just run a second signing protocol though as
a single malicious signer could complete the TX signature, and publish it,
yet not complete the secondary signature.

The TX itself has to be sufficient to show that the TX matches the plan. This
is done by embedding the ID, so matching addresses/amounts plans are
distinguished, and by allowing verification a TX actually matches a set of
addresses/amounts.

For Monero, this will need augmenting with the ephemeral keys (or usage of a
static seed for them).

* Don't use OP_RETURN to encode the plan ID on Bitcoin

We can use the inputs to distinguish identical-output plans without issue.

* Update OP_RETURN data access

It's not required to be the last output.

* Add Eventualities to Monero

An Eventuality is an effective equivalent to a SignableTransaction. That is
declared not by the inputs it spends, yet the outputs it creates.
Eventualities are also bound to a 32-byte RNG seed, enabling usage of a
hash-based identifier in a SignableTransaction, allowing multiple
SignableTransactions with the same output set to have different Eventualities.

In order to prevent triggering the burning bug, the RNG seed is hashed with
the planned-to-be-used inputs' output keys. While this does bind to them, it's
only loosely bound. The TX actually created may use different inputs entirely
if a forgery is crafted (which requires no brute forcing).

Binding to the key images would provide a strong binding, yet would require
knowing the key images, which requires active communication with the spend
key.

The purpose of this is so a multisig can identify if a Transaction the entire
group planned has been executed by a subset of the group or not. Once a plan
is created, it can have an Eventuality made. The Eventuality's extra can be
inserted into a HashMap, so all new on-chain transactions can be trivially
checked as potential candidates (a simplified sketch of this matching follows
this entry). Once a potential candidate is found, a check involving ECC ops can
be performed.

While this is arguably a DoS vector, the underlying Monero blockchain would
need to be spammed with transactions to trigger it. Accordingly, it becomes
a Monero blockchain DoS vector, when this code is written on the premise
of the Monero blockchain functioning. Accordingly, it is considered handled.

If a forgery does match, it must have created the exact same outputs the
multisig would've. Accordingly, it's argued the multisig shouldn't mind.

This entire suite of code is only necessary due to the lack of outgoing
view keys, yet it's able to avoid an interactive protocol to communicate
key images on every single received output.

While this could be locked to the multisig feature, there's no practical
benefit to doing so.

* Add support for encoding Monero address to instructions

* Move Serai's Monero address encoding into serai-client

serai-client is meant to be a single library enabling using Serai. While it was
originally written as an RPC client for Serai, apps actually using Serai will
primarily be sending transactions on connected networks. Sending those
transactions require proper {In, Out}Instructions, including proper address
encoding.

Not only has address encoding been moved, yet the subxt client is now behind
a feature. coin integrations have their own features, which are on by default.
primitives are always exposed.

* Reorganize file layout a bit, add feature flags to processor

* Tidy up ETH Dockerfile

* Add Bitcoin address encoding

* Move Bitcoin::Address to serai-client's

* Comment where tweaking needs to happen

* Add an API to check if a plan was completed in a specific TX

This allows any participating signer to submit the TX ID to prevent further
signing attempts.

Also performs some API cleanup.

* Minimize FROST dependencies

* Use a seeded RNG for key gen

* Tweak keys from Key gen

* Test proper usage of Branch/Change addresses

Adds a more descriptive error to an error case in decoys, and pads Monero
payments as needed.

* Also test spending the change output

* Add queued_plans to the Scheduler

queued_plans is for payments to be issued when an amount appears, yet the
amount is currently pre-fee. Once the output is actually created, the
Scheduler should be notified of the amount it was created with, moving from
queued_plans to plans under the actual amount.

Also tightens debug_asserts to asserts for invariants which are at risk of
being exclusive to prod.

* Add missing tweak_keys call

* Correct decoy selection height handling

* Add a few log statements to the scheduler

* Simplify test's get_block_number

* Simplify, while making more robust, branch address handling in Scheduler

* Have fees deducted from payments

Corrects Monero's handling of fees when there's no change address.

Adds a DUST variable, as needed due to 1_00_000_000 not being enough to pay
its fee on Monero.

* Add comment to Monero

* Consolidate BTC/XMR prepare_send code

These aren't fully consolidated. We'd need a SignableTransaction trait for
that. This is a lot cleaner though.

* Ban integrated addresses

The reasoning why is accordingly documented.

* Tidy TODOs/dust handling

* Update README TODO

* Use a deterministic protocol version in Monero

* Test rebuilt KeyGen machines function as expected

* Use a more robust KeyGen entropy system

* Add DB TXNs

Also load entropy from env

* Add a loop for processing messages from substrate

Allows detecting if we're behind, and if so, waiting to handle the message

* Set Monero MAX_INPUTS properly

The previous number was based on an old hard fork. With the ring size having
increased, transactions have since got larger.

* Distinguish TODOs into TODO and TODO2s

TODO2s are for after protonet

* Zeroize secret share repr in ThresholdCore write

* Work on Eventualities

Adds serialization and stops signing when an eventuality is proven.

* Use a more robust DB key schema

* Update to {k, p}256 0.12

* cargo +nightly clippy

* cargo update

* Slight message-box tweaks

* Update to recent Monero merge

* Add a Coordinator trait for communication with coordinator

* Remove KeyGenHandle for just KeyGen

While KeyGen previously accepted instructions over a channel, this breaks the
ack flow needed for coordinator communication. Now, KeyGen is the direct object
with a handle() function for messages.

Thankfully, this ended up being rather trivial for KeyGen as it has no
background tasks.

* Add a handle function to Signer

Enables determining when it's finished handling a CoordinatorMessage and
therefore creating an acknowledgement.

* Save transactions used to complete eventualities

* Use a more intelligent sleep in the signer

* Emit SignedTransaction with the first ID *we can still get from our node*

* Move Substrate message handling into the new coordinator recv loop

* Add handle function to Scanner

* Remove the plans timer

Enables ensuring the ordering of the handling of plans.

* Remove the outputs function which panicked if a precondition wasn't met

The new API only returns outputs upon satisfaction of the precondition.

* Convert SignerOrder::SignTransaction to a function

* Remove the key_gen object from sign_plans

* Refactor out get_fee/prepare_send into dedicated functions

* Save plans being signed to the DB

* Reload transactions being signed on boot

* Stop reloading TXs being signed (and report it to peers)

* Remove message-box from the processor branch

We don't use it here yet.

* cargo +nightly fmt

* Move back common/zalloc

* Update subxt to 0.27

* Zeroize ^1.5, not 1

* Update GitHub workflow

* Remove usage of SignId in completed
2023-03-16 22:59:40 -04:00
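Referenced from the 'Add Eventualities to Monero' item above: a heavily simplified sketch of the extra-keyed candidate matching it describes. The types and fields are hypothetical stand-ins, and the 'full check' is a plain output comparison here rather than the real ECC-based verification:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for what a planned transaction is expected to create.
struct Eventuality {
  expected_outputs: Vec<(Vec<u8>, u64)>, // (output key bytes, amount)
}

struct Eventualities {
  // Keyed by the transaction extra, enabling a cheap lookup for every new
  // on-chain transaction before any expensive verification is attempted.
  by_extra: HashMap<Vec<u8>, Eventuality>,
}

impl Eventualities {
  fn completed_by(&self, tx_extra: &[u8], tx_outputs: &[(Vec<u8>, u64)]) -> bool {
    match self.by_extra.get(tx_extra) {
      // Only candidates whose extra matches get the full check (ECC ops in the
      // real code; a direct output comparison in this sketch).
      Some(eventuality) => eventuality.expected_outputs.as_slice() == tx_outputs,
      None => false,
    }
  }
}

fn main() {
  let mut by_extra = HashMap::new();
  by_extra.insert(
    vec![1, 2, 3],
    Eventuality { expected_outputs: vec![(vec![9; 32], 1000)] },
  );
  let eventualities = Eventualities { by_extra };
  assert!(eventualities.completed_by(&[1, 2, 3], &[(vec![9; 32], 1000)]));
  assert!(!eventualities.completed_by(&[4, 5, 6], &[(vec![9; 32], 1000)]));
}
```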
Luke Parker
f374cd7398 Update to ethers 2 2023-03-16 20:16:57 -04:00
Luke Parker
ab1e5c372e Don't use a relative link to link to the audit 2023-03-16 19:49:36 -04:00
Luke Parker
67da08705e cargo update 2023-03-16 19:37:32 -04:00
Luke Parker
0d4b66dc2a Bump package versions 2023-03-16 19:29:22 -04:00
Luke Parker
4ed819fc7d Document crypto crates with audit notices 2023-03-16 19:25:01 -04:00
Luke Parker
74924095e1 Add Cypher Stack's audit of /crypto
The current /crypto folder, as of this commit, is identical, except for the
years in the copyright statements. GitHub's CI also passed for the previous
commit, ensuring the repo's integrity during that commit. This now establishes
trust for /crypto, which will be used to update documentation and create
releases.
2023-03-16 18:38:31 -04:00
Luke Parker
d2c1592c61 Resolve merging crypto-{audit, tweaks} and use the proper transcript in Bitcoin 2023-03-16 16:59:20 -04:00
Luke Parker
64924835ad Merge pull request #254 from serai-dex/nightly-2023-03
March 2023 - Rust Nightly Update
2023-03-16 16:45:19 -04:00
Luke Parker
37e4f2cc50 Merge pull request #255 from serai-dex/crypto-tweaks
Crypto audit/tweaks
2023-03-16 16:43:27 -04:00
Luke Parker
caf37527eb Merge branch 'develop' into crypto-tweaks 2023-03-16 16:43:04 -04:00
Luke Parker
669d2dbffc 3.10.2 Explicitly test RecommendedTranscript 2023-03-15 19:55:07 -04:00
Luke Parker
f4e2da2767 Move where we check the Monero node's protocol
The genesis block has a version of 1, so immediately checking (before new
blocks are added) will cause failures.
2023-03-14 01:54:08 -04:00
Luke Parker
48078d0b4b Remove Protocol::Unsupported 2023-03-13 08:03:13 -04:00
Luke Parker
0e0243639e Resolve clippy error
This was resolved on the processor branch yet not on develop.
2023-03-13 07:57:45 -04:00
Luke Parker
14203bbb46 Use an async Mutex for the Monero distribution
Enables safe async/thread-safe usage.
2023-03-12 04:13:43 -04:00
Luke Parker
f5fa6f020d Remove note about adding in a DB handle
It'd arguably be safer yet it isn't worth the API complexity.
2023-03-12 03:55:17 -04:00
Luke Parker
41a285ddfa Add a TX size check to Monero
This isn't perfect yet should ensure the eventual TX is less than 100k bytes.
2023-03-12 03:54:30 -04:00
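A minimal sketch of the bound this commit targets (keeping the eventual TX under 100k bytes); the names are hypothetical, and the real check presumably operates on an estimated size before the transaction is built:

```rust
// Hypothetical illustration of a transaction-size bound.
const MAX_TX_SIZE: usize = 100_000;

fn check_tx_size(estimated_size: usize) -> Result<(), &'static str> {
  if estimated_size >= MAX_TX_SIZE {
    return Err("transaction would exceed 100k bytes");
  }
  Ok(())
}

fn main() {
  assert!(check_tx_size(50_000).is_ok());
  assert!(check_tx_size(100_000).is_err());
}
```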
Luke Parker
36034c2f72 Move ecdh derivation up to prevent Scalar::one() * ecdh 2023-03-11 10:51:40 -05:00
Luke Parker
5e62072a0f Fix #237 2023-03-11 10:31:58 -05:00
Luke Parker
e56495d624 Prefix arbitrary data with 127
Since we cannot expect/guarantee a payment ID will be included, the previous
position-based code for determining arbitrary data wasn't sufficient.
2023-03-11 05:47:25 -05:00
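A minimal sketch of the tagging described here: arbitrary data embedded in the TX extra is prefixed with the byte 127 so it can be identified without relying on its position relative to a payment ID. The encode/decode helpers are hypothetical:

```rust
// Hypothetical illustration of the 127-byte marker scheme.
const ARBITRARY_DATA_MARKER: u8 = 127;

fn encode_arbitrary_data(data: &[u8]) -> Vec<u8> {
  let mut encoded = Vec::with_capacity(1 + data.len());
  encoded.push(ARBITRARY_DATA_MARKER);
  encoded.extend_from_slice(data);
  encoded
}

fn decode_arbitrary_data(field: &[u8]) -> Option<&[u8]> {
  match field.split_first() {
    // Only fields starting with the marker are treated as arbitrary data.
    Some((&ARBITRARY_DATA_MARKER, rest)) => Some(rest),
    _ => None,
  }
}

fn main() {
  let encoded = encode_arbitrary_data(b"hello");
  assert_eq!(decode_arbitrary_data(&encoded), Some(&b"hello"[..]));
  assert_eq!(decode_arbitrary_data(&[0, 1, 2]), None);
}
```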
Luke Parker
71dbc798b5 Fix #251 2023-03-11 05:23:38 -05:00
Luke Parker
4335baa43f cargo update 2023-03-11 04:49:05 -05:00
akildemir
77de28f77a add monero seed support (#252)
* add monero seed support

* fix some of the pr comments

* remove languages module and unnecessary error returns

* Clean classic seed impl

Fixes a few issues regarding Zeroize usage/API safety. Mainly a cleanup.

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-03-10 14:16:00 -05:00
Luke Parker
ad470bc969 #242 Expand usage of black_box/zeroize
This commit greatly expands the usage of black_box/zeroize on bits, as it
originally should have. It is likely overkill, leading to less efficient
code generation, yet does its best to be comprehensive where comprehensiveness
is extremely annoying to achieve.

In the future, this usage of black_box may be desirable to move to its own
crate.

Credit to @AaronFeickert for identifying the original commit was incomplete.
2023-03-10 06:27:44 -05:00
Luke Parker
62dfc63532 Fix Ethereum, again 2023-03-07 06:25:21 -05:00
Luke Parker
1e201562df Correct doc comments re: HTML tags 2023-03-07 05:34:29 -05:00
Luke Parker
11114dcb74 Further fix the clippy lint controls for Hash on dalek_ff_group::*Point 2023-03-07 05:31:02 -05:00
Luke Parker
837c776297 Make Schnorr modular to its transcript 2023-03-07 05:30:21 -05:00
Luke Parker
6bff3866ea Correct Ethereum 2023-03-07 05:25:25 -05:00
Luke Parker
b0730e3fdf Fix last commit again 2023-03-07 04:47:06 -05:00
Luke Parker
2e78d61752 Fix last commit 2023-03-07 04:39:15 -05:00
Luke Parker
0b8a4ab3d0 Use a backwards compatible clippy lint for impl Hash 2023-03-07 04:26:19 -05:00
Luke Parker
c358090f16 Use black_box to help obscure the dalek-ff-group bool -> Choice conversion
I have no idea if this will actually help, yet it can't hurt.

Feature gated due to MSRV requirements.

Fixes #242.
2023-03-07 04:23:41 -05:00
Luke Parker
adb5f34fda Merge branch 'crypto-audit' into crypto-tweaks 2023-03-07 04:08:34 -05:00
Luke Parker
ed056cceaf 3.5.2 Test non-canonical from_repr
Unfortunately, G::from_bytes doesn't require canonicity so that still can't
be properly tested for. While we could try to detect SEC1, and write tests
on that, there's not a suitably stable/wide enough solution to be worth it.
2023-03-07 04:05:56 -05:00
Luke Parker
2bad06e5d9 Fix #200 2023-03-07 03:55:58 -05:00
Luke Parker
5a9a42f025 Use variable time for verifying PoKs in the DKG 2023-03-07 03:48:16 -05:00
Luke Parker
7d12c785b7 Correct error comment in ff-group-tests 2023-03-07 03:46:55 -05:00
Luke Parker
e08adcc1ac Have Ciphersuite re-export Group 2023-03-07 03:46:16 -05:00
Luke Parker
af5702fccd Make encryption public
It's necessary in order to read encryption messages over the network.
2023-03-07 03:37:30 -05:00
Luke Parker
5037962d3c Rename dkg serialize/deserialize to write/read 2023-03-07 03:37:25 -05:00
Luke Parker
5b26115f81 Add Debug implementations to dkg 2023-03-07 03:26:39 -05:00
Luke Parker
1a99629a4a Add feature-gated serde support for Participant/ThresholdParams
These don't have secret data yet sometimes have value to be communicated.
2023-03-07 03:13:55 -05:00
Luke Parker
b1ea2dfba6 Add support for hashing (as in HashMap) dalek points 2023-03-07 03:10:55 -05:00
Luke Parker
0e8c55e050 Update and remove unused dependencies 2023-03-07 03:06:46 -05:00
Luke Parker
d36fc026dd Remove unused generic in frost 2023-03-07 02:40:09 -05:00
Luke Parker
0bbf511062 Add 'static/Send/Sync to specific traits in crypto
These were proven necessary by our real world usage.
2023-03-07 02:38:47 -05:00
Luke Parker
2729882d65 Update to {k, p}256 0.12 2023-03-07 02:34:10 -05:00
Luke Parker
c37cc0b4e2 Update Zeroize pin to ^1.5 from 1.5 2023-03-07 02:29:59 -05:00
Luke Parker
a053454ae4 3.9.4 Add tests to the transcript crate 2023-03-07 02:25:10 -05:00
Luke Parker
20a33079f8 3.9.3 Document Merlin domain_separate conflict potential and add an assert 2023-03-06 20:16:57 -05:00
Luke Parker
8307d4f6c8 cargo fmt 2023-03-06 08:23:14 -05:00
Luke Parker
db1fefe7c1 Update tendermint/node to latest substrate 2023-03-06 08:20:01 -05:00
Luke Parker
4a81640ab8 Update runtime to latest substrate 2023-03-06 08:14:22 -05:00
Luke Parker
943438628d cargo update 2023-03-06 07:39:43 -05:00
Luke Parker
7efedb9a91 3.9.1 Also correct invalid doc comment 2023-03-06 07:16:04 -05:00
Luke Parker
79124b9a33 3.9.2 Better document rng_seed is allowed to conflict with challenge 2023-03-02 11:19:26 -05:00
Luke Parker
6fec95b1a7 3.7.2 Remove code randomizing which side odd elements end up on
This could still be gamed. For [1, 2, 3], the options were ([1], [2, 3]) or
([1, 2], [3]). This means 2 would always have the maximum round count, and
thus this is still game-able. Accordingly, there's no point in keeping its
complexity when the algorithm is as efficient as it is.

While a proper random could be used to satisfy 3.7.2, it'd break the
expected determinism.
2023-03-02 11:16:00 -05:00
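A minimal sketch of one deterministic split of the kind discussed here (always cutting at the midpoint, with no randomization of which side the odd element lands on); an illustration only, not the multiexp crate's actual blame code:

```rust
// Hypothetical illustration of a fixed midpoint split used when assigning blame.
fn split_for_blame<T>(items: &[T]) -> (&[T], &[T]) {
  items.split_at(items.len() / 2)
}

fn main() {
  let items = [1, 2, 3];
  let (left, right) = split_for_blame(&items);
  // With three items, the split is always ([1], [2, 3]), never ([1, 2], [3]).
  assert_eq!(left, &[1]);
  assert_eq!(right, &[2, 3]);
}
```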
Luke Parker
2f4f1de488 3.9.1 Fix SecureDigest trait bound 2023-03-02 10:57:22 -05:00
Luke Parker
97374a3e24 3.8.6 Correct transcript to scalar derivation
Replaces the externally passed in Digest with C::H since C is available.
2023-03-02 10:04:18 -05:00
Luke Parker
530671795a 3.8.5 Let the caller pass in a DST for the aggregation hash function
Also moves the aggregator over to Digest. While a bit verbose for this context,
as all appended items were fixed length, its length prefixing is solid and
the API is pleasant. The downside is the additional dependency which is
in tree and quite compact.
2023-03-02 09:29:37 -05:00
Luke Parker
8b7e7b1a1c 3.8.4 Don't additionally transcript keys with challenges 2023-03-02 09:14:36 -05:00
Luke Parker
053f07a281 3.8.3 Document challenge requirements 2023-03-02 09:08:53 -05:00
Luke Parker
08f9287107 3.8.2 Add Ed25519 RFC 8032 test vectors 2023-03-02 09:06:03 -05:00
Luke Parker
35043d2889 3.8.1 Document RFC 8032 compatibility 2023-03-02 08:45:09 -05:00
Luke Parker
1d2ebdca62 3.7.6, 3.7.7 Optimize multiexp implementations 2023-03-02 06:12:02 -05:00
Luke Parker
e5329b42e6 3.7.5 Further document multiexp functions 2023-03-02 05:49:45 -05:00
Luke Parker
8144956f8a Further document get_unlocked_outputs 2023-03-02 05:31:22 -05:00
Luke Parker
408494f8de 3.7.4 Always zeroize multiexp data
This was handled as part of handling 3.7.1
2023-03-02 03:59:49 -05:00
Luke Parker
8661111fc6 3.7.3 Add multiexp tests 2023-03-02 03:58:48 -05:00
Luke Parker
93d5f41917 3.7.2 Randomize which side odd elements end up on during blame 2023-03-02 01:55:08 -05:00
Luke Parker
15d6be1678 3.7.1 Deduplicate flattening/zeroize code
While the prior intent was to avoid zeroizing for vartime verification, which
is assumed to not have any private data, this simplifies the code and promotes
safety.
2023-03-02 01:13:07 -05:00
Luke Parker
2fd5cd8161 3.6.9 Add several tests to the FROST library
Offset signing is now tested. Multi-nonce algorithms are now tested.
Multi-generator nonce algorithms are now tested. More fault cases are now tested
as well.
2023-03-01 08:02:45 -05:00
Luke Parker
c6284b85a4 3.6.8 Simplify offset splitting
This wasn't done prior to be 'leaderless', as now the participant with the
lowest ID has an extra step, yet this is still trivial. There's also notable
performance benefits to not taking the previous dividing approach, which
performed an exp.
2023-03-01 01:06:13 -05:00
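A minimal sketch of the 'lowest included ID carries the offset' approach described here, using plain integers in place of scalars; the helper is hypothetical and not modular-frost's actual code:

```rust
// Hypothetical illustration: only the lowest included participant applies the offset,
// avoiding the prior division of the offset across all participants.
fn offset_share(offset: u64, my_id: u16, included: &[u16]) -> u64 {
  if Some(&my_id) == included.iter().min() {
    offset
  } else {
    0
  }
}

fn main() {
  let included = [2, 3, 5];
  let offset = 42;
  let total: u64 = included.iter().map(|id| offset_share(offset, *id, &included)).sum();
  // The shares still sum to the full offset, so the group relation is preserved.
  assert_eq!(total, offset);
}
```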
Luke Parker
a42a84e1e8 3.6.7 Seal IetfTranscript 2023-03-01 00:42:01 -05:00
Luke Parker
5a3406bb5f 3.6.6 Further document nonces
This was already a largely documented file. While the terminology is
potentially ambiguous, there's not a clearer path perceived at this time.
2023-03-01 00:35:37 -05:00
Luke Parker
62b3036cbd 3.6.5 Document origin of vectors 2023-02-28 23:23:22 -05:00
Luke Parker
6a15b21949 3.6.4 Document inclusion of Ed448 HRAM vector 2023-02-28 23:14:59 -05:00
Luke Parker
39b3452da1 3.6.3 Check commitment encoding
Also extends tests of 3.6.2 and so on.
2023-02-28 21:57:18 -05:00
Luke Parker
7a05466049 3.6.2 Test nonce generation
There's two ways which this could be tested.

1) Preprocess not taking in an arbitrary RNG item, yet the relevant bytes

This would be an unsafe level of refactoring, in my opinion.

2) Test random_nonce and test the passed in RNG eventually ends up at
random_nonce.

This takes the latter route, both verifying random_nonce meets the vectors
and that the FROST machine calls random_nonce properly.
2023-02-28 21:02:12 -05:00
GitHub Actions
6b5097f0c3 Update nightly 2023-03-01 01:50:10 +00:00
Luke Parker
c1435a2045 3.4.a Panic if generators.len() != scalars.len() for MultiDLEqProof 2023-02-28 00:00:29 -05:00
Luke Parker
969a5d94f2 3.6.1 Document rejection of zero nonces 2023-02-24 06:16:22 -05:00
Luke Parker
93f7afec8b 3.5.2 Add more tests to ff-group-tests
The audit recommends checking failure cases for from_bytes,
from_bytes_unchecked, and from_repr. This isn't feasible.

from_bytes is allowed to have non-canonical values. [0xff; 32] may accordingly
be a valid point for non-SEC1-encoded curves.

from_bytes_unchecked doesn't have a defined failure mode, and by name,
unchecked, shouldn't necessarily fail. The audit acknowledges the tests should
test for whatever result is 'appropriate', yet any result which isn't a failure
on a valid element is appropriate.

from_repr must be canonical, yet for a binary field of 2^n where n % 8 == 0, a
[0xff; n / 8] repr would be valid.
2023-02-24 06:03:56 -05:00
Luke Parker
32c18cac84 3.5.1 Document presence of k256/p256 2023-02-24 05:28:31 -05:00
Luke Parker
65376e93e5 3.4.3 Merge the nonce calculation from DLEqProof and MultiDLEqProof into a
single function

3.4.3 actually describes getting rid of DLEqProof for a thin wrapper around
MultiDLEqProof. That can't be done due to DLEqProof not requiring the std
features, enabling Vecs, which MultiDLEqProof relies on.

Merging the verification statement does simplify the code a bit. While merging
the proof could also be, it has much less value due to the simplicity of
proving (nonce * G, scalar * G).
2023-02-24 05:11:01 -05:00
Luke Parker
6104d606be 3.4.2 Document the DLEq lib 2023-02-24 04:37:20 -05:00
Luke Parker
1a6497f37a 3.3.5 Clarify GeneratorPromotion is only for generators, not curves 2023-02-23 07:21:47 -05:00
Luke Parker
4d6a0bbd7d 3.3.4 Use FROST context throughout Encryption 2023-02-23 07:19:55 -05:00
Luke Parker
2d56d24d9c 3.3.3 (cont) Add a dedicated Participant type 2023-02-23 06:50:45 -05:00
Luke Parker
87dea5e455 3.3.3 Add an assert if polynomial is called with 0
This will only be called with 0 if the code fails to do proper screening of its
arguments. If such a flaw is present, the DKG lib is critically broken (as this
function isn't public). If it was allowed to continue executing, it'd reveal
the secret share.
2023-02-23 04:56:05 -05:00
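A minimal sketch of the guard described here: evaluating the secret-sharing polynomial at 0 would yield the secret itself (its constant term), so the non-public evaluation function asserts it's never called with 0. Plain u64 arithmetic stands in for field elements:

```rust
// Hypothetical illustration; not the DKG crate's actual field arithmetic.
fn polynomial(coefficients: &[u64], l: u64) -> u64 {
  assert!(l != 0, "polynomial evaluated at 0, which would reveal the secret");
  // Horner's method: coefficients[0] is the secret/constant term.
  coefficients.iter().rev().fold(0u64, |acc, coeff| acc.wrapping_mul(l).wrapping_add(*coeff))
}

fn main() {
  // Shares for participants 1 and 2 of the polynomial 7 + 3x.
  assert_eq!(polynomial(&[7, 3], 1), 10);
  assert_eq!(polynomial(&[7, 3], 2), 13);
}
```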
Luke Parker
8bee62609c 3.3.2 Use a static IV and clarify cipher documentation 2023-02-23 04:44:20 -05:00
Luke Parker
d72c4ca4f7 3.3.1 replace try_from with from 2023-02-23 04:29:38 -05:00
Luke Parker
d929a8d96e 3.2.2 Use a hash to point for random points in dfg 2023-02-23 04:29:17 -05:00
Luke Parker
74647b1b52 3.2.3 Don't yield identity in Group::random 2023-02-23 04:14:07 -05:00
Luke Parker
40a6672547 3.2.1, 3.2.4, 3.2.5. Documentation and tests 2023-02-23 04:05:47 -05:00
Luke Parker
686a5ee364 3.1.4 Further document hash_to_F which may collide 2023-02-23 01:09:22 -05:00
Luke Parker
cb4ce5e354 3.1.3 Use a checked_add for the modulus in secp256k1/P-256 2023-02-23 00:57:41 -05:00
Luke Parker
ac0f5e9b2d 3.1.2 Remove oversize DST handling for code present in elliptic-curve already
Adds a test to ensure that elliptic-curve does in fact handle this properly.
2023-02-23 00:52:13 -05:00
Luke Parker
18ac80671f 3.1.1 Document secp256k1/P-256 hash_to_F 2023-02-23 00:37:19 -05:00
Luke Parker
8260ec1a9e Add another TODO 2023-02-15 20:44:53 -05:00
Luke Parker
07f424b484 cargo fmt 2023-02-15 20:39:22 -05:00
Luke Parker
5de8bf3295 Add additional checks/documentation to monero 2023-02-15 01:56:36 -05:00
Luke Parker
82a096e90e Scanner assert on is_torsion_free 2023-02-14 15:49:16 -05:00
github-actions[bot]
c540f52dda Update nightly (#248)
Co-authored-by: GitHub Actions <>
2023-02-01 22:46:53 -05:00
Luke Parker
264174644f Further workaround #247 2023-01-31 10:48:19 -05:00
Luke Parker
df75782e54 Re-license bitcoin-serai to MIT
It's pretty basic code, yet would still be quite pleasant to the larger
community.

Also adds documentation.
2023-01-31 09:45:25 -05:00
Luke Parker
86ad947261 Add missing semicolon 2023-01-31 08:15:00 -05:00
Luke Parker
a3267034b6 Bitcoin External/Branch/Change addresses
Adds support for offset inputs to the Bitcoin lib
2023-01-31 08:10:28 -05:00
VRx
c6bd00e778 Bitcoin processor (#232)
* serai Dockerfile & Makefile fixed

* added new bitcoin mod & bitcoinhram

* couple changes

* added odd&even check for bitcoin signing

* sign message updated

* print_keys commented out

* fixed signing process

* Added new bitcoin library & added most of bitcoin processor logic

* added new crate and refactored the bitcoin coin library

* added signing test function

* moved signature.rs

* publish set to false

* tests moved back to the root

* added new functions to rpc

* added utxo test

* added new rpc methods and refactored bitcoin processor

* added spendable output & fixed errors & added new logic for sighash & opened port 18443 for bitcoin docker

* changed tweak keys

* added tweak_keys & publish transaction and refactored bitcoin processor

* added new structs and fixed problems for testing purposes

* reverted dockerfile back its original

* reverted block generation of bitcoin to 5 seconds

* deleted unnecessary test function

* added new sighash & added new dbg messages & fixed couple errors

* fixed couple issue & removed unused functions

* fix for signing process

* crypto file for bitcoin refactored

* disabled test_send & removed some of the debug logs

* signing implemented & transaction weight calculation added & change address logic added

* refactored tweak_keys

* refactored mine_block & fixed change_address logic

* implemented new traits to bitcoin processor& refactored bitcoin processor

* added new line to tests file

* added new line to bitcoin's wallet.rs

* deleted Cargo.toml from coins folder

* edited bitcoin's Cargo.toml and added LICENSE

* added new line to bitcoin's Cargo.toml

* added spaces

* added spaces

* deleted unnecessary object

* added spaces

* deleted patch numbers

* updated sha256 parameter for message

* updated tag as const

* deleted unnecessary brackets and imports

* updated rpc.rs to 2 space indent

* deleted unnecessary brackets

* deleted unnecessary brackets

* changed it to explicit

* updated to explicit

* deleted unnecessary parsing

* added ? for easy return

* updated imports

* updated height to number

* deleted unnecessary brackets

* updated clsag to sig & to_vec to as_ref

* updated _sig to schnorr_signature

* deleted unnecessary variable

* updated Cargo.toml of processor and bitcoin

* updated imports of bitcoin processor

* updated MBlock to BBlock

* updated MSignable to BSignable

* updated imports

* deleted mask from Fee

* updated get_block function return

* updated comparison logic for scripts

* updated assert to debug_assert

* updated height to number

* updated txid logic

* updated tweak_keys definition

* updated imports

* deleted new line

* delete HashMap from monero

* deleted old test code parts

* updated test amount to a round number

* changed the test code part back to its original

* updated imports of rpc.rs

* deleted unnecessary return assignments

* deleted get_fee_per_byte

* deleted create_raw_transaction

* deleted fund_raw_transaction

* deleted sign transaction rpc

* delete verify_message rpc

* deleted get_balance

* deleted decode_raw_transaction rpc

* deleted list_transactions rpc

* changed test_send to p2wpkh

* updated imports of test_send

* fixed imports of test_send

* updated bitcoin's mine_block function

* updated bitcoin's test_send

* updated bitcoin's hram and test_signing

* deleted 2 rpc function (is_confirmed & get_transaction_block_number)

* deleted get_raw_transaction_hex

* deleted get_raw_transaction_info

* deleted new_address

* deleted test_mempool_accept

* updated remove(0) to remove(index)

* deleted ger_raw_transaction

* deleted RawTx trait and converted type to Transaction

* reverted raw_hex feature back

* added NotEnoughFunds to CoinError

* changed Sighash to all

* removed lifetime of RpcParams

* changed pub to pub(crate) & changed sig_hash line

* changed taproot_key_spend_signature_hash to internal

* added Clone to RpcError & deleted get_utxo_for

* changed to_hex to as_bytes for weight calculation

* updated SpendableOutput

* deleted unnecessary parentheses

* updated serialize of Output s id field

* deleted unused crate & added lazy_static

* updated RPC init function

* added lazy_static for TAG_HASH & updated imported crates

* changed get_block_index to get_block_number

* deleted get_block_info

* updated get_height to get_latest_block_number

* removed GetBlockWithDetailResult and get_block_with_transactions

* deleted unnecessary imports from rpc_helper

* removed lock and unlock_unspent

* deleted get_transactions and get_transaction and renamed get_raw_transaction to get_transaction

* updated opt_into_json

* changed payment_address and amount to output_script and amount for transcript

* refactored error logic for rpc & deleted anyhow crate

* added a dedicated file for json helper functions

* refactored imports and deleted unused code

* added clippy::non_snake_case

* removed unused Error items

* added new line to Cargo

* removed Block and used bitcoin::Block directly

* removed added println and futures.len check

* removed HashMap from coin mod.rs

* updated Testnet to Regtest

* removed unnecessary variable

* updated as_str to &

* removed RawTx trait

* added newline

* changed test transaction to p2pkh

* updated test_send

* updated test_send

* updated test_send

* reformatted bitcoin processor

* moved sighash logic into signmachine

* removed generate_to_address

* added test_address function to bitcoin processor

* updated RpcResponse to enum and added Clone trait

* removed old RpcResponse

* updated shared_key to internal_key

* updated fee part

* updated test_send block logic

* added a test function for getting spendables

* updated tweaking keys logic

* updated calculate_weight logic

* added todo for BitcoinSchnorr Algorithm

* updated calculate_weight

* updated calculate_weight

* updated calculate_weight

* added a TODO for bitcoin's signing process

* removed unused code

* Finish merging develop

* cargo fmt

* cargo machete

* Handle most clippy lints on bitcoin

Doesn't handle the unused transcript due to pending cryptographic considerations.

* Rearrange imports and clippy tests

* Misc processor lint

* Update deny.toml

* Remove unnecessary RPC code

* updated test_send

* added bitcoin ci & updated test-dependencies yml

* fixed bitcoin ci

* updated bitcoin ci yml

* Remove mining from the bitcoin/monero docker files

The tests should control block production in order to test various
circumstances. The automatic mining disrupts assumptions made in testing. Since
we're now using the Bitcoin docker container for testing...

* Multiple fixes to the Bitcoin processor

Doesn't unwrap on RPC errors. Returns the expected connection error.

Fee calculation has a random - 1. This has been removed.

Supports the change address being an Option, as it is. This should not have
been blindly unwrapped.

* Remove unnecessary RPC code

* Further RPC simplifications

* Simplify Bitcoin action

It should not be mining.

* cargo fmt

* Finish RPC simplifications

* Run bitcoind as a daemon

* Remove the requirement on txindex

Saves tens of GB.

Also has attempt_send no longer return a list of outputs. That's incompatible
with this and only relevant to old scheduling designs.

* Remove number from Bitcoin SignableTransaction

Monero requires the current block number for decoy selection. Bitcoin doesn't
have a use.

* Ban coinbase transactions

These are burdened by maturity, so it's critically flawed to support them.

This causes the test_send function to fail as its working was premised on
a coinbase output. While it does make an actual output, it had insufficient
funds for the test's expectations due to regtest halving every 150 blocks.

In order to workaround this, the test will invalidate any existing chain,
offering a fresh start.

Also removes test_get_spendables and simplifies test_send.

* Various simplifications

Modifies SpendableOutput further to not require RPC calls at time of sign.

Removes the need to have get_transaction in the RPC.

* Clean prepare_send

* Update the Bitcoin TransactionMachine to output a Transaction

* Bitcoin TransactionMachine simplifications

* Update XOnly key handling

* Use a single sighash cache

* Move tweak_keys

* Remove unnecessary PSBT sets

* Restore removed newlines

* Other newlines

* Replace calculate_weight's custom math with a dummy TX serialize

* Move BTC TX construction code from processor to bitcoin

* Rename transactions.rs to wallet.rs

* Remove unused crate

* Note TODO

* Clean bitcoin signature test

* Make unit test out of BTC FROST signing test

* Final lint

* Remove usage of PartiallySignedTransaction

---------

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-01-31 07:48:14 -05:00
Luke Parker
fba5b7fed4 Attempt workaround for #247 2023-01-31 07:07:27 -05:00
Luke Parker
affe4300e8 Replace clippy --tests with --all-targets 2023-01-30 07:30:02 -05:00
Luke Parker
b6f9a1f8b6 Rename file with dashes to having underscores 2023-01-30 04:27:39 -05:00
akildemir
9e01588b11 add test send to wallet-rpc with arb data (#246)
* add test send to wallet-rpc with arb data

* convert literals to const
2023-01-30 04:25:46 -05:00
Luke Parker
b0e0fc44cf Add secondary while loop to serai/client's test runner
There's a failing CI run on a node which booted, yet didn't create a genesis
yet. Apparently, the RPC is potentially accessible before the chain is in.

This attempts to resolve that.
2023-01-28 03:32:21 -05:00
Luke Parker
a4fdff3e3b Make progress on #235
I'm still not exactly sure where the trap handler in Monero for this is...
until then, this remains potentially fingerprintable.
2023-01-28 03:18:41 -05:00
Luke Parker
9241bdc3b5 Fix #183 2023-01-28 02:35:32 -05:00
Luke Parker
b253529413 cargo fmt 2023-01-28 01:59:42 -05:00
Luke Parker
2ace339975 Tokens pallet (#243)
* Use Monero-compatible additional TX keys

This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.

* Add tokens_primitives

Moves OutInstruction from in-instructions.

Turns Destination into OutInstruction.

* Correct in-instructions DispatchClass

* Add initial tokens pallet

* Don't allow pallet addresses to equal identity

* Add support for InInstruction::transfer

Requires a cargo update due to modifications made to serai-dex/substrate.

Successfully mints a token to a SeraiAddress.

* Bind InInstructions to an amount

* Add a call filter to the runtime

Prevents worrying about calls to the assets pallet/generally tightens things
up.

* Restore Destination

It was merged into OutInstruction, yet it didn't make sense for OutInstruction
to contain a SeraiAddress.

Also deletes the excessively dated Scenarios doc.

* Split PublicKey/SeraiAddress

Lets us define a custom Display/ToString for SeraiAddress.

Also resolves an oddity where PublicKey would be encoded as String, not
[u8; 32].

* Test burning tokens/retrieving OutInstructions

Modularizes processor_coinUpdates into a shared testing utility.

* Misc lint

* Don't use PolkadotExtrinsicParams
2023-01-28 01:47:13 -05:00
akildemir
f12cc2cca6 Add more tests (#240)
* add wallet-rpc-compatibility tests

* fmt + clippy

* add wallet-rpc receive tests

* add (0,0) subaddress check to standard address
2023-01-24 15:22:07 -05:00
Luke Parker
19664967ed Use Monero-compatible additional TX keys
This still sends a fingerprinting flare up if you send to a subaddress which
needs to be fixed. Despite that, Monero should no longer fail to scan TXs
from monero-serai regarding additional keys.

Previously it failed because we supplied one key as THE key, and n-1 as
additional. Monero expects n for additional.

This does correctly select when to use THE key versus when to use the additional
key when sending. That removes the ability for recipients to fingerprint
monero-serai by receiving to a standard address yet needing to use an additional
key.
2023-01-21 01:29:02 -05:00
Luke Parker
27f5881553 Ensure Amount uses checked ops 2023-01-20 11:04:21 -05:00
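The pattern in miniature (a sketch, not the actual Amount type): wrap the raw
integer and expose checked arithmetic so overflow is an explicit error rather
than a silent wrap.

    #[derive(Clone, Copy, PartialEq, Eq, Debug)]
    struct Amount(u64);

    impl Amount {
      fn checked_add(self, other: Amount) -> Option<Amount> {
        self.0.checked_add(other.0).map(Amount)
      }

      fn checked_sub(self, other: Amount) -> Option<Amount> {
        self.0.checked_sub(other.0).map(Amount)
      }
    }
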
Luke Parker
8ca90e7905 Initial In Instructions pallet and Serai client lib (#233)
* Initial work on an In Inherents pallet

* Add an event for when a batch is executed

* Add a dummy provider for InInstructions

* Add in-instructions to the node

* Add the Serai runtime API to the processor

* Move processor tests around

* Build a subxt Client around Serai

* Successfully get Batch events from Serai

Renamed processor/substrate to processor/serai.

* Much more robust InInstruction pallet

* Implement the workaround from https://github.com/paritytech/subxt/issues/602

* Initial prototype of processor generated InInstructions

* Correct PendingCoins data flow for InInstructions

* Minor lint to in-instructions

* Remove the global Serai connection for a partial re-impl

* Correct ID handling of the processor test

* Workaround the delay in the subscription

* Make an unwrap an if let Some, remove old comments

* Lint the processor toml

* Rebase and update

* Move substrate/in-instructions to substrate/in-instructions/pallet

* Start an in-instructions primitives lib

* Properly update processor to subxt 0.24

Also corrects failures from the rebase.

* in-instructions cargo update

* Implement IsFatalError

* is_inherent -> true

* Rename in-instructions crates and misc cleanup

* Update documentation

* cargo update

* Misc update fixes

* Replace height with block_number

* Update processor src to latest subxt

* Correct pipeline for InInstructions testing

* Remove runtime::AccountId for serai_primitives::NativeAddress

* Rewrite the in-instructions pallet

Complete with respect to the currently written docs.

Drops the custom serializer for just using SCALE.

Makes slight tweaks as relevant.

* Move instructions' InherentDataProvider to a client crate

* Correct doc gen

* Add serde to in-instructions-primitives

* Add in-instructions-primitives to pallet

* Heights -> BlockNumbers

* Get batch pub test loop working

* Update in instructions pallet terminology

Removes the ambiguous Coin for Update.

Removes pending/artificial latency for future client work.

Also moves to using serai_primitives::Coin.

* Add a BlockNumber primitive

* Belated cargo fmt

* Further document why DifferentBatch isn't fatal

* Correct processor sleeps

* Remove metadata at compile time, add test framework for Serai nodes

* Remove manual RPC client

* Simplify update test

* Improve re-exporting behavior of serai-runtime

It now re-exports all pallets underneath it.

* Add a function to get storage values to the Serai RPC

* Update substrate/ to latest substrate

* Create a dedicated crate for the Serai RPC

* Remove unused dependencies in substrate/

* Remove unused dependencies in coins/

Out of scope for this branch, just minor and path of least resistance.

* Use substrate/serai/client for the Serai RPC lib

It's a bit out of place, since these client folders are intended for the node to
access pallets and so on. This is for end-users to access Serai as a whole.

In that sense, it made more sense as a top level folder, yet that also felt
out of place.

* Move InInstructions test to serai-client for now

* Final cleanup

* Update deny.toml

* Cargo.lock update from merging develop

* Update nightly

Attempt to work around the current CI failure, which is a Rust ICE.

We previously didn't upgrade due to clippy 10134, yet that's been reverted.

* clippy

* clippy

* fmt

* NativeAddress -> SeraiAddress

* Sec fix on non-provided updates and doc fixes

* Add Serai as a Coin

Necessary in order to swap to Serai.

* Add a BlockHash type, used for batch IDs

* Remove origin from InInstruction

Makes InInstructionTarget. Adds RefundableInInstruction with origin.

* Document storage items in in-instructions

* Rename serai/client/tests/serai.rs to updates.rs

It only tested publishing updates and their successful acceptance.
2023-01-20 11:00:18 -05:00
Luke Parker
e13cf52c49 Update Cargo.lock
It appears to have changed with the recent Monero tests without being
included in that PR.
2023-01-17 02:35:22 -05:00
Luke Parker
0346aa6964 Make serai-primitives and vs-primitives MIT 2023-01-17 02:26:32 -05:00
Luke Parker
7f8bb1aa9f Make validators archive nodes per #157 2023-01-17 02:17:45 -05:00
akildemir
3b920ad471 Monerolib Improvements (#224)
* convert AddressSpec subbaddress to tuple

* add wallet-rpc tests

* fix payment id decryption bug

* run fmt

* fix CI

* use monero-rs wallet-rpc for tests

* update the subaddress index type

* fix wallet-rpc CI

* fix monero-wallet-rpc CI actions

* pull latest monero for CI

* fix pr issues

* detach monero wallet  rpc

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2023-01-16 16:17:54 -05:00
Luke Parker
757adda2e0 cargo fmt 2023-01-16 16:16:55 -05:00
Luke Parker
a1ecdbf2f3 Update substrate/ to latest substrate 2023-01-16 15:53:01 -05:00
VRx
19ab49cb8c delete compromised key from dockerfile (#231) 2023-01-16 10:51:50 -05:00
Luke Parker
ced89332d2 cargo update
Necessary due to https://github.com/RustCrypto/signatures/issues/615.

Opportunity taken to update Substrate.
2023-01-16 10:42:31 -05:00
Luke Parker
be05e0dd47 Revert "Implement a FROST algorithm for Schnorrkel"
This reverts commit 8ef8b5ca6f.
2023-01-13 18:57:07 -05:00
Luke Parker
8ef8b5ca6f Implement a FROST algorithm for Schnorrkel 2023-01-13 18:52:38 -05:00
Luke Parker
50a4f5938a Correct syntax in prior commit 2023-01-13 17:44:20 -05:00
Luke Parker
d37aff515d Update advisories in deny.toml 2023-01-13 17:42:53 -05:00
Luke Parker
375887bb29 Update licenses 2023-01-11 23:05:31 -05:00
Luke Parker
8ffa5553ff Use ok_or_else instead of ok_or in a couple places in Monero 2023-01-10 06:57:25 -05:00
Luke Parker
422a562c78 Use arduino/setup-protoc@master instead of the tagged v1
The tag is ancient and uses the now-deprecated NodeJS v12. master uses 16.
2023-01-10 00:32:35 -05:00
Luke Parker
ea7c281a47 Move to dtolnay/toolchain (#211)
* Move to dtolnay/toolchain

* Correct dtolnay/toolchain to rust-toolchain

* Pass toolchain by argument instead of revision

Introduces malleability by referring to HEAD of dtolnay, yet GHA errored on the
prior syntax.
2023-01-09 17:21:31 -05:00
Luke Parker
97a94a0bf8 cargo update
Fixes an irrelevant security issue in tokio.
2023-01-08 09:10:38 -05:00
Luke Parker
6b591c0df9 Export Timelocked so documentation for it is generated 2023-01-08 09:09:03 -05:00
Luke Parker
4be3290e40 Convert the FeaturedAddress tuple to a struct
Not only did we already have multiple booleans in it, it theoretically
could expand in the future. Not only is this more explicit, it actually cleans
some existing code.
2023-01-07 05:37:43 -05:00
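Roughly the shape of the change (field names hypothetical): named fields can't
be silently swapped the way positional booleans in a tuple can.

    // A sketch only; the real struct's fields may differ.
    struct AddressFeatures {
      subaddress: bool,
      payment_id: bool,
      guaranteed: bool,
    }
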
Luke Parker
7b0b8a20ec Standardize serialization within the Monero lib
read for R: Read
write for W: Write
serialize for -> Vec<u8>

Also uses std::io::{self, Read, Write} consistently.
2023-01-07 05:18:35 -05:00
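A sketch of the convention, expressed here as a trait for illustration (the
library defines these per type; the trait itself is not part of its API):

    use std::io::{self, Read, Write};

    trait MoneroSerialize: Sized {
      fn read<R: Read>(r: &mut R) -> io::Result<Self>;
      fn write<W: Write>(&self, w: &mut W) -> io::Result<()>;
      // serialize is just "write into a fresh Vec<u8>".
      fn serialize(&self) -> Vec<u8> {
        let mut buf = vec![];
        self.write(&mut buf).unwrap();
        buf
      }
    }
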
Luke Parker
7508106650 Use an explicit SubaddressIndex type 2023-01-07 04:44:23 -05:00
Luke Parker
ccf4ca2215 Add an ID function to Coin::Block
Also updates to the latest Monero lib API.
2023-01-07 04:03:11 -05:00
Luke Parker
1d6df0099c Exposed a hash-based API for accessing blocks
Also corrects a few panics, which shouldn't have been present, and unnecessary
Cursor uses.
2023-01-07 04:00:12 -05:00
Luke Parker
814a9a8d35 Revert the previous commit's API change 2023-01-07 03:21:06 -05:00
Luke Parker
9662e9e1af Have Coin::get_outputs return by transaction, not by block 2023-01-07 03:13:54 -05:00
Luke Parker
b303649f9d Add OutputType, either external, branch, or change
Used to delineate, by address received to, the intention of the output.
2023-01-07 02:59:53 -05:00
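The delineation described above, as a minimal enum sketch (variant meanings
inferred from the names, not quoted from the code):

    enum OutputType {
      // Received to an externally-shared address.
      External,
      // An output intended for further internal (branch) transactions.
      Branch,
      // Change returned to the wallet itself.
      Change,
    }
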
Luke Parker
a646ec5aaa Squashed commit of the following:
commit e0a9e8825d6c22c797fb84e26ed6ef10136ca9c2
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Fri Jan 6 04:24:08 2023 -0500

    Remove Scanner::address

    It either needed to return an Option, panic on misconfiguration, or return a
    distinct Scanner type based on burning bug immunity to offer this API properly.
    Panicking wouldn't be proper, and the Option<Address> would've been... awkward.
    The new register_subaddress function, maintaining the needed functionality,
    also provides further clarity on the intended side effect of the previously
    present Scanner::address function.

commit 7359360ab2fc8c9255c6f58250c214252ce217a4
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Fri Jan 6 01:35:02 2023 -0500

    fmt/clippy from last commit

commit 80d912fc19cd268f3b019a9d9961a48b2c45e828
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Thu Jan 5 19:36:49 2023 -0500

    Add Substrate "assets" pallet

    While over-engineered for our purposes, it's still usable.

    Also cleans the runtime a bit.

commit 2ed2944b6598d75bdc3c995aaf39b717846207de
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Wed Jan 4 23:09:58 2023 -0500

    Remove the timestamp pallet

    It was needed for contracts, which has since been removed. We now no longer
    need it.

commit 7fc1fc2dccecebe1d94cb7b4c00f2b5cb271c87b
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Wed Jan 4 22:52:41 2023 -0500

    Initial validator sets pallet (#187)

    * Initial work on a Validator Sets pallet

    * Update Validator Set docs per current discussions

    * Update validator-sets primitives and storage handling

    * Add validator set pallets to deny.toml

    * Remove Curve from primitives

    Since we aren't reusing keys across coins, there's no reason for it to be
    on-chain (as previously planned).

    * Update documentation on Validator Sets

    * Use Twox64Concat instead of Identity

    Ensures an even distribution of keys. While xxhash is breakable, these keys
    aren't manipulatable by users.

    * Add math ops on Amount and define a coin as 1e8

    * Add validator-sets to the runtime and remove contracts

    Also removes the randomness pallet which was only required by the contracts
    runtime.

    Does not remove the contracts folder yet so they can still be referred to while
    validator-sets is under development. Does remove them from Cargo.toml.

    * Add vote function to validator-sets

    * Remove contracts folder

    * Create an event for the Validator Sets pallet

    * Remove old contracts crates from deny.toml

    * Remove line from staking branch

    * Remove staking from runtime

    * Correct VS Config in runtime

    * cargo update

    * Resolve a few PR comments on terminology

    * Create a serai-primitives crate

    Move types such as Amount/Coin out of validator-sets. Will be expanded in the
    future.

    * Fixes for last commit

    * Don't reserve set 0

    * Further fixes

    * Add files meant for last commit

    * Remove Staking transfer

commit 3309295911d22177bd68972d138aea2f8658eb5f
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Wed Jan 4 06:17:00 2023 -0500

    Reorder coins in README by market cap

commit db5d19cad33ccf067d876b7f5b7cca47c228e2fc
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Wed Jan 4 06:07:58 2023 -0500

    Update README

commit 606484d744b1c6cc408382994c77f1def25d3e7d
Author: Luke Parker <lukeparker5132@gmail.com>
Date:   Wed Jan 4 03:17:36 2023 -0500

    cargo update

commit 3a319b229f
Author: akildemir <aeg_asd@hotmail.com>
Date:   Wed Jan 4 16:26:25 2023 +0300

    update address public API design

commit d9fa88fa76
Author: akildemir <aeg_asd@hotmail.com>
Date:   Mon Jan 2 13:35:06 2023 +0300

    fix clippy error

commit cc722e897b
Merge: cafa9b3 eeca440
Author: akildemir <aeg_asd@hotmail.com>
Date:   Mon Jan 2 11:39:04 2023 +0300

    Merge https://github.com/serai-dex/serai into develop

commit cafa9b361e
Author: akildemir <aeg_asd@hotmail.com>
Date:   Mon Jan 2 11:38:26 2023 +0300

    fix build errors

commit ce5b5f2b37
Merge: f502d67 49c4acf
Author: akildemir <aeg_asd@hotmail.com>
Date:   Sun Jan 1 15:16:25 2023 +0300

    Merge https://github.com/serai-dex/serai into develop

commit f502d67282
Author: akildemir <aeg_asd@hotmail.com>
Date:   Thu Dec 22 13:13:09 2022 +0300

    fix pr issues

commit 26ffb226d4
Author: akildemir <aeg_asd@hotmail.com>
Date:   Thu Dec 22 13:11:43 2022 +0300

    remove extraneous rpc call

commit 0e829f8531
Author: akildemir <aeg_asd@hotmail.com>
Date:   Thu Dec 15 13:56:53 2022 +0300

    add scan tests

commit 5123c7f121
Author: akildemir <aeg_asd@hotmail.com>
Date:   Thu Dec 15 13:56:13 2022 +0300

    add new address functions & comments
2023-01-06 04:33:17 -05:00
Luke Parker
e0deaa5539 fmt/clippy from last commit 2023-01-06 01:35:02 -05:00
Luke Parker
f760c09006 Add Substrate "assets" pallet
While over-engineered for our purposes, it's still usable.

Also cleans the runtime a bit.
2023-01-05 19:45:19 -05:00
Luke Parker
daa88a051f Remove the timestamp pallet
It was needed for contracts, which has since been removed. We now no longer
need it.
2023-01-04 23:09:58 -05:00
Luke Parker
e979883f2d Initial validator sets pallet (#187)
* Initial work on a Validator Sets pallet

* Update Validator Set docs per current discussions

* Update validator-sets primitives and storage handling

* Add validator set pallets to deny.toml

* Remove Curve from primitives

Since we aren't reusing keys across coins, there's no reason for it to be
on-chain (as previously planned).

* Update documentation on Validator Sets

* Use Twox64Concat instead of Identity

Ensures an even distribution of keys. While xxhash is breakable, these keys
aren't manipulatable by users.

* Add math ops on Amount and define a coin as 1e8

* Add validator-sets to the runtime and remove contracts

Also removes the randomness pallet which was only required by the contracts
runtime.

Does not remove the contracts folder yet so they can still be referred to while
validator-sets is under development. Does remove them from Cargo.toml.

* Add vote function to validator-sets

* Remove contracts folder

* Create an event for the Validator Sets pallet

* Remove old contracts crates from deny.toml

* Remove line from staking branch

* Remove staking from runtime

* Correct VS Config in runtime

* cargo update

* Resolve a few PR comments on terminology

* Create a serai-primitives crate

Move types such as Amount/Coin out of validator-sets. Will be expanded in the
future.

* Fixes for last commit

* Don't reserve set 0

* Further fixes

* Add files meant for last commit

* Remove Staking transfer
2023-01-04 22:52:41 -05:00
Luke Parker
52913a6e8d Reorder coins in README by market cap 2023-01-04 06:17:00 -05:00
Luke Parker
056dd1afcb Update README 2023-01-04 06:07:58 -05:00
Luke Parker
bf09b5cc7e cargo update 2023-01-04 03:17:36 -05:00
Luke Parker
eeca440fa7 Offer a multi-DLEq proof which simply merges challenges for n underlying proofs
This converts proofs from 2n elements to 1+n.

Moves FROST over to it. Additionally, for FROST's binomial nonces, provides
a single DLEq proof (2, not 1+2 elements) by proving the discrete log equality
of their aggregate (with an appropriate binding factor). This may be split back
up depending on later commentary...
2023-01-01 09:16:09 -05:00
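A conceptual sketch of the "1 + n" shape (not the dleq crate's API; the merged
challenge is taken as an argument to sidestep transcript specifics):

    use group::Group;

    struct MultiDLEqProof<G: Group> {
      // One challenge shared by every underlying proof...
      c: G::Scalar,
      // ...plus one response per proof: 1 + n elements instead of 2n.
      s: Vec<G::Scalar>,
    }

    fn respond<G: Group>(
      c: G::Scalar,
      nonces: &[G::Scalar],
      secrets: &[G::Scalar],
    ) -> MultiDLEqProof<G> {
      MultiDLEqProof {
        c,
        s: nonces.iter().zip(secrets).map(|(r, x)| *r + (c * x)).collect(),
      }
    }
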
Luke Parker
49c4acffbb Use a more efficient challenge function in the dleq
The prior one did 64 scalar additions for Ed25519. The new one does 8.
This was optimized by instead of parsing byte-by-byte, u64-by-u64.

Improves perf by ~10-15%.
2023-01-01 05:50:16 -05:00
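The idea in sketch form (endianness and the exact fold are illustrative, not
the dleq crate's code): fold the wide hash output into a field element one u64
limb at a time, eight limb operations instead of sixty-four byte operations.

    use group::ff::PrimeField;

    fn scalar_from_wide_hash<F: PrimeField>(hash: &[u8; 64]) -> F {
      // 2^64 as a field element, used to shift the accumulator by one limb.
      let shift = F::from(2u64).pow_vartime([64u64]);
      let mut res = F::ZERO;
      for chunk in hash.chunks(8) {
        res = (res * shift) + F::from(u64::from_be_bytes(chunk.try_into().unwrap()));
      }
      res
    }
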
Luke Parker
5599a052ad Run latest nightly clippy
Also runs clippy on the tests and updates the CI accordingly
2023-01-01 04:18:23 -05:00
Luke Parker
bff5f33616 Correct GITHUB_TOKEN handling 2023-01-01 04:09:10 -05:00
Luke Parker
f10bcfddcb Add repo token to arduino/setup-protoc to avoid rate limiting 2023-01-01 03:25:42 -05:00
Luke Parker
5b3c9bf5d0 DKG Blame (#196)
* Standardize the DLEq serialization function naming

They mismatched from the rest of the project.

This commit is technically incomplete as it doesn't update the dkg crate.

* Rewrite DKG encryption to enable per-message decryption without side effects

This isn't technically true as I already know a break in this which I'll
correct for shortly.

Does update documentation to explain the new scheme. Required for blame.

* Add a verifiable system for blame during the FROST DKG

Previously, if sent an invalid key share, the participant would realize that
and could accuse the sender. Without further evidence, either the accuser
or the accused could be guilty. Now, the accuser has a proof the accused is
in the wrong.

Reworks KeyMachine to return BlameMachine. This explicitly acknowledges how
locally complete keys still need group acknowledgement before the protocol
can be complete and provides a way for others to verify blame, even after a
locally successful run.

If any blame is cast, the protocol is no longer considered complete-able
(instead aborting). Further accusations of blame can still be handled however.

Updates documentation on network behavior.

Also starts to remove "OnDrop". We now use Zeroizing for anything which should
be zeroized on drop. This is a lot more piece-meal and reduces clones.

* Tweak Zeroizing and Debug impls

Expands Zeroizing to be more comprehensive.

Also updates Zeroizing<CachedPreprocess([u8; 32])> to
CachedPreprocess(Zeroizing<[u8; 32]>) so zeroizing is the first thing done
and last step before exposing the copy-able [u8; 32].

Removes private keys from Debug.

* Fix a bug where adversaries could claim to be using another user's encryption keys to learn their messages

Mentioned a few commits ago, now fixed.

This wouldn't have affected Serai, which aborts on failure, nor any DKG
currently supported. It's just about ensuring the DKG encryption is robust and
proper.

* Finish moving dleq from ser/deser to write/read

* Add tests for dkg blame

* Add a FROST test for invalid signature shares

* Batch verify encrypted messages' ephemeral keys' PoP
2023-01-01 01:54:18 -05:00
Luke Parker
3b4c600c60 Have transcripted versions specify their minor version pre-1.0 2022-12-27 00:49:31 -05:00
Luke Parker
bacf31378d Add test vectors for Ciphersuite::hash_to_F 2022-12-25 02:50:10 -05:00
Luke Parker
da8e7e73e0 Re-organize testing strategy and document Ciphersuite::hash_to_F. 2022-12-24 17:08:22 -05:00
Luke Parker
35a4f5bf9f Use count instead of iter.map(|_| 1).sum
Also replaces the expectation the miner TX was first with a check for
Input::Gen.
2022-12-24 15:17:49 -05:00
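The two swaps in miniature (the enum is a simplified stand-in, not
monero-serai's actual Input definition):

    enum Input {
      Gen(u64),
      ToKey { amount: u64 },
    }

    // Check the variant instead of assuming the miner TX is first.
    fn is_miner_tx(inputs: &[Input]) -> bool {
      matches!(inputs.first(), Some(Input::Gen(_)))
    }

    // count() over the old iter().map(|_| 1).sum::<usize>().
    fn zero_amount_outputs(amounts: &[u64]) -> usize {
      amounts.iter().filter(|amount| **amount == 0).count()
    }
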
Luke Parker
445bb3786e Add a dedicated crate for testing ff/group implementors
Provides extensive testing for dalek-ff-group and ed448.

Also includes a fix for an observed bug in ed448.
2022-12-24 15:09:09 -05:00
Luke Parker
6e518f5c22 cargo update due to yanked openssl crate 2022-12-20 23:12:26 -05:00
Luke Parker
e6bf3a4758 Remove stray mention to USDC 2022-12-15 20:35:06 -05:00
Luke Parker
256d920835 Add root_of_unity to dalek-ff-group
Also adds a few more tests.

All functions are now implemented.
2022-12-15 20:33:58 -05:00
Luke Parker
b8db677d4c Impl pow_vartime and sqrt on ed libs 2022-12-15 19:23:42 -05:00
Luke Parker
461504ccbf Update processor to the new Zeroizing ViewPair 2022-12-14 18:40:12 -05:00
Luke Parker
3ec5189fbf Use Zeroize for the ViewPair 2022-12-14 09:27:49 -05:00
Luke Parker
25f1549c6c Move verify_share to return batch-verifiable statements
While the previous construction achieved n/2 average detection,
this will run in log2(n). Unfortunately, the need to keep entropy
around (or take in an RNG here) remains.
2022-12-13 20:31:00 -05:00
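Conceptual sketch only of the log2(n)-order detection: `batch_ok` stands in
for batch-verifying the returned statements; a failing batch is split until
the offending share is isolated.

    // Locate one invalid item by recursively splitting the batch.
    fn find_invalid<T>(items: &[T], batch_ok: &impl Fn(&[T]) -> bool) -> Option<usize> {
      if batch_ok(items) {
        return None;
      }
      if items.len() == 1 {
        return Some(0);
      }
      let mid = items.len() / 2;
      find_invalid(&items[.. mid], batch_ok)
        .or_else(|| find_invalid(&items[mid ..], batch_ok).map(|i| i + mid))
    }
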
Luke Parker
9c65518dc3 Have included return a reference instead of a cloned Vec 2022-12-13 19:40:54 -05:00
Luke Parker
2b042015b5 Replace modular_frost::Curve::hash_to_vec with just hash
There's no reason to copy it to a heap allocated value. The Output implements
AsRef<[u8]> and all uses are satisfied by that.
2022-12-13 19:32:46 -05:00
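The shape of the change, sketched with sha2 (the actual function lives on the
Curve trait and hashes via the ciphersuite; names here are illustrative):

    use sha2::{Digest, Sha256};

    // Return the digest output directly; it already implements AsRef<[u8]>.
    fn hash(data: &[u8]) -> impl AsRef<[u8]> {
      Sha256::digest(data)
    }

    // The old behavior is now just an explicit copy where actually needed.
    fn hash_to_vec(data: &[u8]) -> Vec<u8> {
      hash(data).as_ref().to_vec()
    }
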
Luke Parker
783a445a3e Use a challenge from the FROST transcript as context in the DLEq proofs 2022-12-13 19:27:09 -05:00
Luke Parker
ace7506172 Randomly sort included before doing share verification 2022-12-13 15:41:37 -05:00
Luke Parker
4871fc7441 Update scan.rs 2022-12-12 11:27:42 -05:00
Luke Parker
259e5815c8 Correct global index offsetting
Technically, non-0-amount outputs can still appear and this considered them
as part of the global 0-amount pool. Now, only outputs which are 0-amount are
counted.
2022-12-11 10:26:11 -05:00
akildemir
d5a5704ba4 support add/read multiple arbitrary tx data (#189)
* support add/read multiple arbitrary tx  data

* fix clippy errors

* resolve pr issues
2022-12-09 10:58:11 -05:00
Luke Parker
9e82416e7d Correct derives on errors 2022-12-09 09:50:00 -05:00
Luke Parker
d32c865c9a Misc formatting fixes 2022-12-08 22:10:12 -05:00
TheArchitect108
cc917a217d cache solc (#181)
* cache solc

* adds solc binaries to cache
2022-12-08 20:00:57 -05:00
Luke Parker
af86b7a499 Support caching preprocesses in FROST (#190)
* Remove the explicit included participants from FROST

Now, whoever submits preprocesses becomes the signing set. Better separates
preprocess from sign, at the cost of slightly more annoying integrations
(Monero needs to now independently lagrange/offset its key images).

* Support caching preprocesses

Closes https://github.com/serai-dex/serai/issues/40.

I *could* have added a serialization trait to Algorithm and written a ton of
data to disk, while requiring Algorithm implementors also accept such work.
Instead, I moved preprocess to a seeded RNG (Chacha20) which should be as
secure as the regular RNG. Rebuilding from cache simply loads the previously
used Chacha seed, making the Algorithm oblivious to the fact it's being
rebuilt from a cache. This removes any requirements for it to be modified
while guaranteeing equivalency.

This builds on the last commit which delayed determining the signing set till
post-preprocess acquisition. Unfortunately, that commit did force preprocess
from ThresholdView to ThresholdKeys which had visible effects on Monero.

Serai will actually need delayed set determination for #163, and overall,
it remains better, hence its inclusion.

* Document FROST preprocess caching

* Update ethereum to new FROST

* Fix bug in Monero offset calculation and update processor
2022-12-08 19:04:35 -05:00
Luke Parker
873d27685a Correct FROST DLEq documentation 2022-12-07 20:32:08 -05:00
Luke Parker
12136a9409 Document extensions to FROST
Also makes misc other doc corrections.
2022-12-07 20:23:25 -05:00
Luke Parker
4edba7eb7a Cite #151 in the dkg TODOs 2022-12-07 18:10:20 -05:00
Luke Parker
bec92c10ad Update to ethers 1.0
Removes rust_decimal as a depend, which added borsh and multiple other misc
packages in the previous commit.
2022-12-07 18:05:06 -05:00
Luke Parker
bade7a504e cargo update 2022-12-07 17:56:53 -05:00
Luke Parker
6787e44664 Minor bug fix which missed the last commit 2022-12-07 17:41:07 -05:00
Luke Parker
13977f6287 Clean and document the DKG library's encryption
Encryption used to be inlined into FROST. When writing the documentation, I
realized it was decently hard to review. It also was antagonistic to other
hosted DKG algorithms by not allowing code re-use.

Encryption is now a standalone module, providing clear boundaries and
reusability.

Additionally, the DKG protocol itself used to use the ciphersuite's specified
hash function (with an HKDF to prevent length extension attacks). Now,
RecommendedTranscript is used to achieve much more robust transcripting and
remove the HKDF dependency. This does add Blake2 into all consumers yet is
preferred for its security properties and ease of review.
2022-12-07 17:30:42 -05:00
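A hypothetical sketch of the direction described (not the dkg crate's actual
API), assuming the flexible-transcript Transcript trait used across this repo:
per-message key material comes from a transcript challenge rather than the
ciphersuite's hash plus an HKDF.

    use transcript::{Transcript, RecommendedTranscript};

    fn message_key(context: &[u8], sender: u16, receiver: u16) -> [u8; 32] {
      let mut transcript = RecommendedTranscript::new(b"DKG Encryption Sketch");
      transcript.append_message(b"context", context);
      transcript.append_message(b"sender", sender.to_le_bytes());
      transcript.append_message(b"receiver", receiver.to_le_bytes());
      // Take 32 bytes of the challenge as illustrative key material.
      let challenge = transcript.challenge(b"key");
      let mut key = [0; 32];
      key.copy_from_slice(&challenge.as_ref()[.. 32]);
      key
    }
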
akildemir
ba157ea84b fix for #166 & cleanup (#184)
* add vscode

* fix for #166 & cleanup

* remove unused self

* fix issues on pr
2022-12-07 10:08:04 -05:00
Luke Parker
7d3a06ac05 Add missing mine_blocks call to the Monero test runner 2022-12-06 06:02:13 -05:00
Luke Parker
a3adf16b8c Update processor for latest Monero lib changes 2022-12-05 17:53:06 -05:00
Luke Parker
f136328c08 Fix tendermint-machine's version in Cargo.lock 2022-12-05 17:26:28 -05:00
Luke Parker
8f352353ba Write a test runner for Monero transactions
Also includes a few fixes for the library itself. Supersedes #172.
2022-12-05 17:25:09 -05:00
Luke Parker
7a16ce78b0 Remove usage of the word "Height" in a comment 2022-12-05 13:02:19 -05:00
Luke Parker
6085a8bb9d Update runtime commentary in Tendermint 2022-12-05 09:04:50 -05:00
Luke Parker
e00c14afe5 Publish tendermint-machine 0.2.0 2022-12-03 18:41:47 -05:00
Luke Parker
8f4d6f79f3 Initial Tendermint implementation (#145)
* Machine without timeouts

* Time code

* Move substrate/consensus/tendermint to substrate/tendermint

* Delete the old paper doc

* Refactor out external parts to generics

Also creates a dedicated file for the message log.

* Refactor <V, B> to type V, type B

* Successfully compiling

* Calculate timeouts

* Fix test

* Finish timeouts

* Misc cleanup

* Define a signature scheme trait

* Implement serialization via parity's scale codec

Ideally, this would be generic. Unfortunately, the generic API serde 
doesn't natively support borsh, nor SCALE, and while there is a serde 
SCALE crate, it's old. While it may be complete, it's not worth working 
with.

While we could still grab bincode, and a variety of other formats, it 
wasn't worth it to go custom and for Serai, we'll be using SCALE almost 
everywhere anyways.

* Implement usage of the signature scheme

* Make the infinite test non-infinite

* Provide a dedicated signature in Precommit of just the block hash

Greatly simplifies verifying when syncing.

* Dedicated Commit object

Restores sig aggregation API.

* Tidy README

* Document tendermint

* Sign the ID directly instead of its SCALE encoding

For a hash, which is fixed-size, these should be the same yet this helps 
move past the dependency on SCALE. It also, for any type where the two 
values are different, smooths integration.

* Litany of bug fixes

Also attempts to make the code more readable while updating/correcting 
documentation.

* Remove async recursion

Greatly increases safety as well by ensuring only one message is 
processed at once.

* Correct timing issues

1) Commit didn't include the round, leaving the clock in question.

2) Machines started with a local time, instead of a proper start time.

3) Machines immediately started the next block instead of waiting for 
the block time.

* Replace MultiSignature with sr25519::Signature

* Minor SignatureScheme API changes

* Map TM SignatureScheme to Substrate's sr25519

* Initial work on an import queue

* Properly use check_block

* Rename import to import_queue

* Implement tendermint_machine::Block for Substrate Blocks

Unfortunately, this immediately makes the Tendermint machine incapable of
deployment as a crate since it uses a git reference. In the future, a
Cargo.toml patch section for serai/substrate should be investigated. 
This is being done regardless as it's the quickest way forward and this 
is for Serai.

* Dummy Weights

* Move documentation to the top of the file

* Move logic into TendermintImport itself

Multiple traits exist to verify/handle blocks. I'm unsure exactly when 
each will be called in the pipeline, so the easiest solution is to have 
every step run every check.

That would be extremely computationally expensive if we ran EVERY check, 
yet we rely on Substrate for execution (and according checks), which are 
limited to just the actual import function.

Since we're calling this code from many places, it makes sense for it to 
be consolidated under TendermintImport.

* BlockImport, JustificationImport, Verifier, and import_queue function

* Update consensus/lib.rs from PoW to Tendermint

Not possible to be used as the previous consensus could. It will not
produce blocks nor does it currently even instantiate a machine. This is
just the next step.

* Update Cargo.tomls for substrate packages

* Tendermint SelectChain

This is incompatible with Substrate's expectations, yet should be valid 
for ours

* Move the node over to the new SelectChain

* Minor tweaks

* Update SelectChain documentation

* Remove substrate/node lib.rs

This shouldn't be used as a library AFAIK. While runtime should be, and 
arguably should even be published, I have yet to see node in the same 
way. Helps tighten API boundaries.

* Remove unused macro_use

* Replace panicking todos with stubs and // TODO

Enables progress.

* Reduce chain_spec and use more accurate naming

* Implement block proposal logic

* Modularize to get_proposal

* Trigger block importing

Doesn't wait for the response yet, which it needs to.

* Get the result of block importing

* Split import_queue into a series of files

* Provide a way to create the machine

The BasicQueue returned obscures the TendermintImport struct. 
Accordingly, a Future scoped with access is returned upwards, which when 
awaited will create the machine. This makes creating the machine 
optional while maintaining scope boundaries.

Is sufficient to create a 1-node net which produces and finalizes 
blocks.

* Don't import justifications multiple times

Also don't broadcast blocks which were solely proposed.

* Correct justication import pipeline

Removes JustificationImport as it should never be used.

* Announce blocks

By claiming File, they're not sent over the P2P network before they
have a justification, as desired. Unfortunately, they never were. This 
works around that.

* Add an assert to verify proposed children aren't best

* Consolidate C and I generics into a TendermintClient trait alias

* Expand sanity checks

Substrate doesn't expect nor officially support children with less work 
than their parents. It's a trick used here. Accordingly, ensure the 
trick's validity.

* When resetting, use the end time of the round which was committed to

The machine reset to the end time of the current round. For a delayed 
network connection, a machine may move ahead in rounds and only later 
realize a prior round succeeded. Despite acknowledging that round's 
success, it would maintain its delay when moving to the next block, 
bricking it.

Done by tracking the end time for each round as they occur.

* Move Commit from including the round to including the round's end_time

The round was usable to build the current clock in an accumulated 
fashion, relative to the previous round. The end time is the absolute 
metric of it, which can be used to calculate the round number (with all 
previous end times).

Substrate now builds off the best block, not genesis, using the end time 
included in the justification to start its machine in a synchronized 
state.

Knowing the end time of a round, or the round in which a block was
committed to, is necessary for nodes to sync up with Tendermint. 
Encoding it in the commit ensures it's long lasting and makes it readily 
available, without the load of an entire transaction.

* Add a TODO on Tendermint

* Misc bug fixes

* More misc bug fixes

* Clean up lock acquisition

* Merge weights and signing scheme into validators, documenting needed changes

* Add pallet sessions to runtime, create pallet-tendermint

* Update node to use pallet sessions

* Update support URL

* Partial work on correcting pallet calls

* Redo Tendermint folder structure

* TendermintApi, compilation fixes

* Fix the stub round robin

At some point, the modulus was removed causing it to exceed the 
validators list and stop proposing.

* Use the validators list from the session pallet

* Basic Gossip Validator

* Correct Substrate Tendermint start block

The Tendermint machine uses the passed-in number as the number of the block
being worked on. Substrate passed in the already-finalized block's number.

Also updates misc comments.

* Clean generics in Tendermint with a monolith with associated types

* Remove the Future triggering the machine for an async fn

Enables passing data in, such as the network.

* Move TendermintMachine from start_num, time to last_num, time

Provides an explicitly clear API clearer to program around.

Also adds additional time code to handle an edge case.

* Connect the Tendermint machine to a GossipEngine

* Connect broadcast

* Remove machine from TendermintImport

It's not used there at all.

* Merge Verifier into block_import.rs

These two files were largely the same, just hooking into sync structs 
with almost identical imports. As this project shapes up, removing dead 
weight is appreciated.

* Create a dedicated file for being a Tendermint authority

* Deleted comment code related to PoW

* Move serai_runtime specific code from tendermint/client to node

Renames serai-consensus to sc_tendermint

* Consolidate file structure in sc_tendermint

* Replace best_* with finalized_*

We test their equivalency yet still better to use finalized_* in 
general.

* Consolidate references to sr25519 in sc_tendermint

* Add documentation to public structs/functions in sc_tendermint

* Add another missing comment

* Make sign asynchronous

Some relation to https://github.com/serai-dex/serai/issues/95.

* Move sc_tendermint to async sign

* Implement proper checking of inherents

* Take in a Keystore and validator ID

* Remove unnecessary PhantomDatas

* Update node to latest sc_tendermint

* Configure node for a multi-node testnet

* Fix handling of the GossipEngine

* Use a rounded genesis to obtain sufficient synchrony within the Docker env

* Correct Serai d-f names in Docker

* Remove an attempt at caching I don't believe would ever hit

* Add an already in chain check to block import

While the inner should do this for us, we call verify_order on our end 
*before* inner to ensure sequential import. Accordingly, we need to 
provide our own check.

Removes errors of "non-sequential import" when trying to re-import an 
existing block.

* Update the consensus documentation

It was incredibly out of date.

* Add a _ to the validator arg in slash

* Make the dev profile a local testnet profile

Restores a dev profile which only has one validator, locally running.

* Reduce Arcs in TendermintMachine, split Signer from SignatureScheme

* Update sc_tendermint per previous commit

* Restore cache

* Remove error case which shouldn't be an error

* Stop returning errors on already existing blocks entirely

* Correct Dave, Eve, and Ferdie to not run as validators

* Rename dev to devnet

--dev still works thanks to the |. Achieves a personal preference of
mine with some historical meaning.

* Add message expiry to the Tendermint gossip

* Localize the LibP2P protocol to the blockchain

Follows convention by doing so. Theoretically enables running multiple 
blockchains over a single LibP2P connection.

* Add a version to sp-runtime in tendermint-machine

* Add missing trait

* Bump Substrate dependency

Fixes #147.

* Implement Schnorr half-aggregation from https://eprint.iacr.org/2021/350.pdf

Relevant to https://github.com/serai-dex/serai/issues/99.

* cargo update (tendermint)

* Move from polling loops to a pure IO model for sc_tendermint's gossip

* Correct protocol name handling

* Use futures mpsc instead of tokio

* Timeout futures

* Move from a yielding loop to select in tendermint-machine

* Update Substrate to the new TendermintHandle

* Use futures pin instead of tokio

* Only recheck blocks with non-fatal inherent transaction errors

* Update to the latest substrate

* Separate the block processing time from the latency

* Add notes to the runtime

* Don't spam slash

Also adds a slash condition of failing to propose.

* Support running TendermintMachine when not a validator

This supports validators who leave the current set, without crashing 
their nodes, along with nodes trying to become validators (who will now 
seamlessly transition in).

* Properly define and pass around the block size

* Correct the Duration timing

The proposer will build it, send it, then process it (on the first 
round). Accordingly, it's / 3, not / 2, as / 2 only accounted for the 
latter events.

* Correct time-adjustment code on round skip

* Have the machine respond to advances made by an external sync loop

* Clean up time code in tendermint-machine

* BlockData and RoundData structs

* Rename Round to RoundNumber

* Move BlockData to a new file

* Move Round to an Option due to the pseudo-uninitialized state we create

Before the addition of RoundData, we always created the round, and on 
.round(0), simply created it again. With RoundData, and the changes to 
the time code, we used round 0, time 0, the latter being incorrect yet 
not an issue due to lack of misuse.

Now, if we do misuse it, it'll panic.

* Clear the Queue instead of draining and filtering

There shouldn't ever be a message which passes the filter under the 
current design.

* BlockData::new

* Move more code into block.rs

Introduces type-aliases to obtain Data/Message/SignedMessage solely from 
a Network object.

Fixes a bug regarding stepping when you're not an active validator.

* Have verify_precommit_signature return if it verified the signature

Also fixes a bug where invalid precommit signatures were left standing 
and therefore contributing to commits.

* Remove the precommit signature hash

It cached signatures per-block. Precommit signatures are bound to each 
round. This would lead to forming invalid commits when a commit should 
be formed. Under debug, the machine would catch that and panic. On 
release, it'd have everyone who wasn't a validator fail to continue 
syncing.

* Slight doc changes

Also flattens the message handling function by replacing an if 
containing all following code in the function with an early return for 
the else case.

* Always produce notifications for finalized blocks via origin overrides

* Correct weird formatting

* Update to the latest tendermint-machine

* Manually step the Tendermint machine when we synced a block over the network

* Ignore finality notifications for old blocks

* Remove a TODO resolved in 8c51bc011d

* Add a TODO comment to slash

Enables searching for the case-sensitive phrase and finding it.

* cargo fmt

* Use a tmp DB for Serai in Docker

* Remove panic on slash

As we move towards protonet, this can happen (if a node goes offline), 
yet it happening brings down the entire net right now.

* Add log::error on slash

* created shared volume between containers

* Complete the sh scripts

* Pass in the genesis time to Substrate

* Correct block announcements

They were announced, yet not marked best.

* Correct populate_end_time

It was used as inclusive yet didn't work inclusively.

* Correct gossip channel jumping when a block is synced via Substrate

* Use a looser check in import_future

This triggered so it needs to be accordingly relaxed.

* Correct race conditions between add_block and step

Also corrects a <= to <.

* Update cargo deny

* rename genesis-service to genesis

* Update Cargo.lock

* Correct runtime Cargo.toml whitespace

* Correct typo

* Document recheck

* Misc lints

* Fix prev commit

* Resolve low-hanging review comments

* Mark genesis/entry-dev.sh as executable

* Prevent a commit from including the same signature multiple times

Yanks tendermint-machine 0.1.0 accordingly.

* Update to latest nightly clippy

* Improve documentation

* Use clearer variable names

* Add log statements

* Pair more log statements

* Clean TendermintAuthority::authority as possible

Merges it into new. It has way too many arguments, yet there's no clear path at
consolidation there, unfortunately.

Additionally provides better scoping within itself.

* Fix #158

Doesn't use lock_import_and_run for reasons commented (lack of async).

* Rename guard to lock

* Have the devnet use the current time as the genesis

Possible since it's only a single node, not requiring synchronization.

* Fix gossiping

I really don't know what side effect this avoids and I can't say I care at this
point.

* Misc lints

Co-authored-by: vrx00 <vrx00@proton.me>
Co-authored-by: TheArchitect108 <TheArchitect108@protonmail.com>
2022-12-03 18:38:02 -05:00
Luke Parker
31e8be7b6e Run cargo deny daily 2022-12-02 13:38:30 -05:00
GitHub Actions
ba0f119a53 Update nightly 2022-12-01 18:20:57 -05:00
Luke Parker
0ca52a36ee Restore type complexity checks in CI
Passes due to the remaining type complexity cases being explicitly allowed.
2022-12-01 17:50:52 -05:00
Luke Parker
0350cd803d Add debug assertions to CLSAG/Bulletproofs proving 2022-12-01 11:50:03 -05:00
Luke Parker
f0957c8d52 Add a builder API to the Monero library
Enables more composable construction flows.
2022-12-01 11:35:05 -05:00
Luke Parker
3f503d92fb Remove dbg! from previous commit 2022-11-24 04:57:25 -05:00
Luke Parker
409d1151b2 Add dbg to help resolve https://github.com/serai-dex/serai/issues/166 2022-11-23 22:15:50 -05:00
Luke Parker
00c1f21224 cargo update
Resolves yanking of crossbeam (see https://github.com/crossbeam-rs/crossbeam/issues/931).
2022-11-23 07:44:30 -05:00
Luke Parker
138bb46e19 cargo update
Mandated by https://github.com/dylni/os_str_bytes/issues/14.
2022-11-21 02:23:38 -05:00
Luke Parker
56574f2f5b Add a cargo deny workflow (#89)
* Add a cargo deny workflow

Also trims out a pointless submodule checkout (we have none).

* Remove no longer relevant advisories/allowances

* Patch for array-bytes

* Remove unused properties

* Restore chrono advisory

* Allow MPL-2.0, correct GPL-3.0 allowance specification

* Properly ban copyleft, run on all crates

* Exceptions for Serai crates (AGPL-3.0)

* Remove top comments

* Clarify reasoning for not checking advisories in CI

* Run all checks in CI

While this may bring down an unrelated commit, we can manually review, before creating a followup commit allowing it. If it's critical, then this did its job.
2022-11-16 20:53:35 -06:00
Luke Parker
4a3178ed8f Support handling addresses from other networks
A type alias of MoneroAddress is provided to abstract away the generic. 
To keep the rest of the library sane, MoneroAddress is used everywhere.

If someone wants to use this library with another coin, they *should* be 
able to parse a custom address and then recreate it as a Monero address. 
While that's annoying to them, better them than any person using this 
lib for Monero.

Closes #152.
2022-11-15 00:06:15 -05:00
Luke Parker
83060a914a Have the RPC return the unsupported version when Unsupported 2022-11-14 23:56:28 -05:00
Luke Parker
6acbfdcc45 Support custom Monero protocol definitions 2022-11-14 23:28:39 -05:00
Luke Parker
6f9cf510da Support an authenticated Monero RPC
Closes https://github.com/serai-dex/serai/issues/143.
2022-11-14 23:24:35 -05:00
Luke Parker
b05a223b69 Monero json_rpc_call 2022-11-14 21:49:49 -05:00
Luke Parker
138f7cdfa4 Correct dev-dependencies for modular-frost 2022-11-14 19:20:56 -05:00
Luke Parker
fe1f28c77a cargo update 2022-11-14 04:55:00 -05:00
Luke Parker
b85801b524 Correct the MerlinTranscript Debug impl 2022-11-11 07:07:42 -05:00
Luke Parker
c7121d96ac Add common to Dockerfile 2022-11-11 02:20:10 -05:00
Luke Parker
35ca220bcc Comment the allocator feature
Prevents it from turning on with --all-features, forcing nightly.
2022-11-11 01:23:35 -05:00
Luke Parker
3d9b9b178c Zeroizing allocator (#154)
* Add a zeroizing allocator

* Also implement the allocator API

* Add misisng license file to zalloc

* Slight change to zalloc description
2022-11-10 23:34:40 -06:00
Luke Parker
7334ed1f43 cargo update
Updates Substrate to polkadot-v0.9.33
2022-11-10 23:59:20 -05:00
Luke Parker
84de427d72 Fix https://github.com/serai-dex/serai/issues/150 2022-11-10 22:35:09 -05:00
Luke Parker
d714f2202d Document multiexp
Bumps the crate version to enable publishing.
2022-11-07 18:31:20 -05:00
Luke Parker
be61bff074 cargo fmt 2022-11-05 18:47:57 -04:00
Luke Parker
8de465af87 Have Transcript::append_message take in AsRef<[u8]>, not &[u8]
Simplifies calling it.
2022-11-05 18:43:36 -04:00
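The signature change in miniature (the trait shown is a stand-in, not the full
Transcript trait):

    trait AppendMessage {
      // Before: fn append_message(&mut self, label: &'static [u8], message: &[u8]);
      fn append_message<M: AsRef<[u8]>>(&mut self, label: &'static [u8], message: M);
    }
    // Callers may now pass a [u8; 32], Vec<u8>, etc. directly, without &x[..].
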
Luke Parker
65df18d285 cargo update 2022-11-04 08:07:37 -04:00
Luke Parker
953bece2ea Bump Substrate dependency
Fixes #147.
2022-11-04 08:07:12 -04:00
Luke Parker
5977aeb489 Implement Schnorr half-aggregation from https://eprint.iacr.org/2021/350.pdf
Relevant to https://github.com/serai-dex/serai/issues/99.
2022-11-04 08:04:49 -04:00
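A conceptual sketch of the paper's idea (not this repo's schnorr crate):
half-aggregation keeps every nonce commitment R_i but folds the responses s_i
into one scalar, so n signatures aggregate to n + 1 elements rather than 2n.
The per-signature weights z_i are derived by hashing all the (R_i, key,
message) tuples; they're passed in here to avoid pinning a hash function.

    use group::Group;

    struct HalfAggregate<G: Group> {
      nonces: Vec<G>,      // R_1 ..= R_n, kept as-is
      response: G::Scalar, // sum over i of z_i * s_i
    }

    fn aggregate<G: Group>(
      sigs: &[(G, G::Scalar)],
      weights: &[G::Scalar],
    ) -> HalfAggregate<G> {
      HalfAggregate {
        nonces: sigs.iter().map(|(nonce, _)| *nonce).collect(),
        response: sigs.iter().zip(weights).map(|((_, s), z)| *z * s).sum(),
      }
    }
    // Verification checks response * generator == sum of z_i * (R_i + c_i * A_i),
    // where c_i is each signature's original challenge and A_i its public key.
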
github-actions[bot]
8e53522780 November 2022 - Rust Nightly Update (#144)
* Update nightly

* Have the latest nightly clippy pass

Co-authored-by: GitHub Actions <>
Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2022-11-01 00:03:36 -05:00
TheArchitect108
5df74ac9e2 Temporarily strip auth from monerod rpc for tests. 2022-10-31 17:03:23 -05:00
TheArchitect108
16c51ce374 Add svm-rs dependency to getting started. 2022-10-31 15:50:28 -05:00
TheArchitect108
659ff280ac expose monero rpc port from docker 2022-10-31 12:52:29 -05:00
TheArchitect108
0fc336941c Add getting started link to main readme. 2022-10-31 11:35:21 -05:00
TheArchitect108
1a4f2d5621 Add getting started link to main readme. 2022-10-31 11:34:53 -05:00
TheArchitect108
8a20d90868 Add links to other options for running Serai. 2022-10-31 11:32:27 -05:00
TheArchitect108
4101239e0d add deps to make setup easier 2022-10-31 11:10:13 -05:00
Luke Parker
aa4b5e2ca3 Update Cargo.lock 2022-10-29 06:01:32 -04:00
Luke Parker
6ef624ab7e Correct monero's dev dependencies 2022-10-29 05:52:56 -04:00
Luke Parker
e67e406d95 Correct ed448 versioning 2022-10-29 05:25:58 -04:00
Luke Parker
43e38e463f Update FROST version 2022-10-29 05:14:29 -04:00
Luke Parker
1464eefbe3 Correct dleq's zeroize dependency 2022-10-29 05:13:20 -04:00
Luke Parker
6eaed17952 Inline FROST processing functions into the machines' impls
This was done for the DKG and this similarly cleans up here.
2022-10-29 05:10:07 -04:00
Luke Parker
2379855b31 Create a dedicated crate for the DKG (#141)
* Add dkg crate

* Remove F_len and G_len

They're generally no longer used.

* Replace hash_to_vec with a provided method around associated type H: Digest

Part of trying to minimize this trait so it can be moved elsewhere. Vec, 
which isn't std, may have been a blocker.

* Encrypt secret shares within the FROST library

Reduces requirements on callers in order to be correct.

* Update usage of Zeroize within FROST

* Inline functions in key_gen

There was no reason to have them separated as they were. sign probably 
has the same statement available, yet that isn't the focus right now.

* Add a ciphersuite package which provides hash_to_F

* Set the Ciphersuite version to something valid

* Have ed448 export Scalar/FieldElement/Point at the top level

* Move FROST over to Ciphersuite

* Correct usage of ff in ciphersuite

* Correct documentation handling

* Move Schnorr signatures to their own crate

* Remove unused feature from schnorr

* Fix Schnorr tests

* Split DKG into a separate crate

* Add serialize to Commitments and SecretShare

Helper for buf = vec![]; .write(buf).unwrap(); buf

* Move FROST over to the new dkg crate

* Update Monero lib to latest FROST

* Correct ethereum's usage of features

* Add serialize to GeneratorProof

* Add serialize helper function to FROST

* Rename AddendumSerialize to WriteAddendum

* Update processor

* Slight fix to processor
2022-10-29 03:54:42 -05:00
Luke Parker
cbceaff678 Create dedicated message structures for FROST messages (#140)
* Create message types for FROST key gen

Taking in reader borrows absolutely wasn't feasible. Now, proper types
which can be read (and then passed directly, without a mutable borrow)
exist for key_gen. sign coming next.

* Move FROST signing to messages, not Readers/Writers/Vec<u8>

Also takes the nonce handling code and makes a dedicated file for it, 
aiming to resolve complex types and make the code more legible by 
replacing its previously inlined state.

* clippy

* Update FROST tests

* read_signature_share

* Update the Monero library to the new FROST packages

* Update processor to latest FROST

* Tweaks to terminology and documentation
2022-10-25 23:17:25 -05:00
TheArchitect108
ccdb834e6e fixes caching issues (#139)
* fixes caching issues

* update comment
2022-10-24 23:48:11 -05:00
Luke Parker
42a711ee51 Replace the Serai scripts volume for directly copying the files
This should probably be done for the other images as well.
2022-10-23 21:01:11 -04:00
Luke Parker
7ecf8b22cf Mark all sh files as executable 2022-10-23 20:39:37 -04:00
GitHub Actions
595b2da433 Update nightly 2022-10-21 21:26:27 -05:00
Luke Parker
a03a04bcc6 Simplify monthly nightly update workflow 2022-10-21 22:25:32 -04:00
Luke Parker
22713493d8 Move from set-output to $GITHUB_OUTPUT
https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
2022-10-21 22:19:01 -04:00
Luke Parker
b72af5e185 cargo update
The new ink release should enable using the latest Rust nightly, without 
issue.
2022-10-21 21:40:50 -04:00
Luke Parker
c7f8258e0c Cherry-pick 28308dfe85387fe54c14076089a40c5af1f1b8f7 2022-10-21 05:32:38 -04:00
Luke Parker
21fd56d4d3 Remove extraneous clone 2022-10-21 01:08:15 -05:00
Luke Parker
f6bbc6c89e Correct weight code 2022-10-20 01:27:35 -04:00
Luke Parker
6c996fb3cd Update substrate
Also removes the patch for zip since a new release was issued.

Closes https://github.com/serai-dex/serai/issues/81.

Contracts RPC purged as according to 
https://github.com/paritytech/substrate/pull/12358.
2022-10-20 01:05:36 -04:00
Luke Parker
ec7d8ac67b Remove coin crate
Effective reversion of past few commits by request.
2022-10-16 13:11:32 -04:00
Luke Parker
5ce06cf1b7 Update lib.rs 2022-10-16 07:47:08 -05:00
Luke Parker
4866a3e0e9 Fix previous commit 2022-10-16 00:17:51 -04:00
Luke Parker
ad9bf9b240 Remove unused imports 2022-10-15 23:59:44 -04:00
Luke Parker
19488cf446 Fill out Cargo.tomls
Updated missing fields/sections, even if some won't be used, to 
standardize.

Also made FROST tests feature-gated.
2022-10-15 23:46:22 -04:00
Luke Parker
65664dafa4 Make coin a dedicated library
Closes https://github.com/serai-dex/serai/issues/128.
2022-10-15 23:21:56 -04:00
vrx00
f50148d17a fix for monero docker 2022-10-15 21:42:09 -05:00
Luke Parker
9d376d29e1 Bump FROST version to publish a v11-compliant lib 2022-10-15 22:34:12 -04:00
Luke Parker
e5955f82c7 Remove Monero RPC panics for an invalid/malicious node 2022-10-15 22:32:56 -04:00
Luke Parker
514563cef0 Remove height as a term
Unbeknownst to me, height doesn't have a universal definition of the
chain length.

Bitcoin defines height as the block number, with getblockcount existing 
for the chain length.

Ethereum uses the unambiguous term "block number".

Monero defines height as both the block number and the chain length.

Instead of arguing about who's right, it's agreed that having it refer to both
isn't productive. While we could provide our own definition, taking a
side, moving to the unambiguous block number prevents future hiccups.

height is now only a term in the Monero code, where it takes its 
Monero-specific definition, as documented in the processor.
2022-10-15 21:39:06 -04:00
Luke Parker
a245ee28c1 Correction for previous commit 2022-10-15 21:07:52 -04:00
Luke Parker
245dcc6083 Have the wallet provide the Monero height for is_confirmed 2022-10-15 21:07:37 -04:00
Luke Parker
826e27c80e Pin CI to solc-select 0.2.1 2022-10-15 20:04:37 -04:00
vrx00
d96cfdf485 serai Dockerfile & Makefile fixed 2022-10-15 19:03:33 -05:00
Luke Parker
4629e88a28 Add is_confirmed with a TODO: Remove
Closes https://github.com/serai-dex/serai/issues/129.
2022-10-15 19:51:59 -04:00
Luke Parker
e9d5fb25c8 Merge branch 'develop' of github.com:serai-dex/serai into develop 2022-10-13 00:38:44 -04:00
Luke Parker
a0a54eb0de Update to FROST v11
Ensures random functions never return zero. This, combined with a check 
commitments aren't 0, causes no serialized elements to be 0.

Also directly reads their vectors.
2022-10-13 00:38:36 -04:00
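The "never zero" guarantee in sketch form (a hypothetical helper, not FROST's
actual code): rejection-sample until the element is nonzero, so nothing
serialized can be the zero element.

    use group::ff::Field;
    use rand_core::{CryptoRng, RngCore};

    fn random_nonzero<F: Field>(rng: &mut (impl RngCore + CryptoRng)) -> F {
      loop {
        let candidate = F::random(&mut *rng);
        if !bool::from(candidate.is_zero()) {
          return candidate;
        }
      }
    }
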
vrx00
f9310a9968 Cluster Orchestration with Kubernetes (#121)
* add file

* builds + caching fixed

* bitcoin orchestration

* remove default entrypoint

* eth image and cleanup

* working monero

* remove signature file

* cleanup on aisle eth

* cleanup on aisle btc

* eth working

* remove docker ignore

* remove bitcoin image readme

* fix serai builds

* serai clusters

* added readme for docker

* formatting

* share the image

* newlines at EOF

* add multi profile example

* coin order

* coin order

* profile order

* fix grammar

* fix whitespace

* reduce trusted signature set, require at least 3 signatures.

* remove echo

* update comment to ref trusted keys

* comment fix

* use 16 keys, check for laanwj, name compose

* don't use bash

* monero fingerprints & eth fixes

* eth fixes

* remove extra eth keys

* kubernetes deployment implemented with helm charts

* deleted helmignores & added new lines at the end of the file

* deleted duplications & delete unnecessary comments & deactivated service accounts

* deleted generators files

* added a new line to monero/values.yaml

* deleted support for old kubernetes version - ingress.yaml

* added new like to serai/values.yaml

* serai's port name changed

* serai's port name changed

* release name limit was changed to 253

* README.md updated

* fixed Makefile

* deleted platform dependant instructions

* deleted appVersion from .yamls

* added -i parameter for deleting process

* added \ for Makefile

Co-authored-by: TheArchitect108 <75815740+TheArchitect108@users.noreply.github.com>
Co-authored-by: TheArchitect <TheArchitect108@protonmail.com>
2022-10-11 05:27:03 -05:00
Luke Parker
26cbcfb78c Clarify indentation policy 2022-10-11 00:40:50 -05:00
Luke Parker
7fa78a9ac2 Delete old Monero submodule 2022-10-02 22:42:47 -05:00
Luke Parker
b334c96906 Allow manually running the tests workflow 2022-09-30 23:26:40 -04:00
Luke Parker
482a8ec209 Update to the latest Serai Substrate (#125)
* Update to the latest Serai Substrate

* Add Protobuf to build dependencies

Docker shouldn't need updating as this should've been added to the image 
in 
2dbace5b01.

* Get substrate to build

* Correct protoc build step

* Remove the benchmarking code

There's some macro resolution error that isn't apparent. I worked on it 
for about half an hour but...

* Remove unnecessary clone

* Correct runtime-benchmarks flag usage
2022-09-29 13:33:09 -05:00
Luke Parker
2c3f36fd3f Single quotes 2022-09-29 10:22:39 -05:00
Luke Parker
9ff4cfab0d Add a repository field to monero-generators 2022-09-29 10:37:25 -04:00
Luke Parker
503ae02cae Version bump monero-generators to consolidate a dependency 2022-09-29 10:36:40 -04:00
Luke Parker
695f7ec5f9 Version bump Monero for documentation purposes 2022-09-29 10:35:11 -04:00
Luke Parker
95aa1ab827 Lock the cache to a specific rustc + dependencies
Apparently, GitHub doesn't write back to the cache, leading to massive 
build times a few moments after its initialization (when a change 
happens invalidating it). While this forces a new cache whenever 
dependencies change, it'll restore from an older set of dependencies in 
that case, still minimizing build times.
2022-09-29 10:34:20 -04:00
Luke Parker
8da0743361 Use sha3 in monero-generators 2022-09-29 08:08:49 -04:00
Luke Parker
2b7c9378c0 Update to FROST v10
Further expands documentation to near-completion.
2022-09-29 07:08:20 -04:00
Luke Parker
7870084b9e Add further FROST documentation 2022-09-29 06:02:43 -04:00
Luke Parker
8d9315b797 Use HashMarker for Transcript and when generating scalars from digests 2022-09-29 05:33:46 -04:00
Luke Parker
ca091a5f04 Expand and correct documentation 2022-09-29 05:25:29 -04:00
Luke Parker
19cd609cba Use doc_auto_cfg 2022-09-29 04:47:55 -04:00
Luke Parker
8b0f0a3713 Publish an alpha version of the Monero crate (#123)
* Label the version as an alpha

* Add versions to Cargo.tomls

* Update to Zeroize 1.5

* Drop patch versions from monero-serai Cargo.toml

* Add a repository field

* Move generators to OUT_DIR

IIRC, I didn't do this originally as it constantly re-generated them. 
Unfortunately, since cargo is complaining about .generators, we have to.

* Remove Timelock::fee_weight

Transaction::fee_weight has a comment, "Assumes Timelock::None since
this library won't let you create a TX with a timelock". Accordingly, 
this is dead code.
2022-09-29 01:24:33 -05:00
Luke Parker
49749d96a0 Replace tiny_keccak with sha3 in Monero 2022-09-28 09:29:58 -04:00
Luke Parker
fd48bbd15e Initial documentation for the Monero libraries (#122)
* Document all features

* Largely document the Monero libraries

Relevant to https://github.com/serai-dex/serai/issues/103 and likely 
sufficient to get this removed from 
https://github.com/serai-dex/serai/issues/102.
2022-09-28 07:44:49 -05:00
Luke Parker
f48a48ec3f Support v1 transactions
Closes https://github.com/serai-dex/serai/issues/117.
2022-09-28 05:28:42 -04:00
Luke Parker
5a4eb0a076 cargo update
Removes the potential applicability of CVE-2021-3520.
2022-09-18 15:30:38 -04:00
Luke Parker
2edf61614f Revert nightly workflow testing data 2022-09-18 15:27:47 -04:00
Luke Parker
0431089592 Correct PR creation in workflow 2022-09-18 15:25:43 -04:00
Luke Parker
a52b2e2148 Add a workflow file to update the nightly version every month (#118)
Part of https://github.com/serai-dex/serai/issues/116
2022-09-18 13:49:18 -05:00
Luke Parker
2393cdfe8c Specify shell in build-dependencies 2022-09-17 16:29:44 -04:00
Luke Parker
34ffdfa8be Use a fixed nightly version for the wasm toolchain 2022-09-17 16:26:56 -04:00
Luke Parker
b2f6c23e2f clippy --all-features 2022-09-17 06:07:49 -04:00
Luke Parker
7e17b4f2db Correct the fmt workflow 2022-09-17 04:40:50 -04:00
Luke Parker
c96b595f42 Pin nightly used for fmt/clippy
As per https://github.com/serai-dex/serai/issues/116.
2022-09-17 04:37:43 -04:00
Luke Parker
65c20638ce fmt/clippy 2022-09-17 04:35:08 -04:00
Luke Parker
fd6c58805f Merge branch 'develop' of github.com:serai-dex/serai into develop 2022-09-16 12:18:31 -04:00
Luke Parker
7d4fcdea9e Update FROST Ed448 per request 2022-09-16 12:16:37 -04:00
TheArchitect108
978304e224 Cluster Orchestration with Docker Compose (#114)
* add file

* builds + caching fixed

* bitcoin orchestration

* remove default entrypoint

* eth image and cleanup

* working monero

* remove signature file

* cleanup on aisle eth

* cleanup on aisle btc

* eth working

* remove docker ignore

* remove bitcoin image readme

* fix serai builds

* serai clusters

* added readme for docker

* formatting

* share the image

* newlines at EOF

* add multi profile example

* coin order

* coin order

* profile order

* fix grammar

* fix whitespace

* reduce trusted signature set, require at least 3 signatures.

* remove echo

* update comment to ref trusted keys

* comment fix

* use 16 keys, check for laanwj, name compose

* don't use bash

* monero fingerprints & eth fixes

* eth fixes

* remove extra eth keys
2022-09-12 15:01:14 -05:00
Luke Parker
31b64b3082 Update according to the latest clippy 2022-09-04 21:23:38 -04:00
Luke Parker
73566e756d Minimize use of lazy_static in ed448
Increases usage of const values along with overall Field impl sanity 
with regards to the crypto_bigint backend.
2022-08-31 03:33:19 -04:00
Luke Parker
a59bbe7635 Impl is_odd for dfg::Scalar 2022-08-31 01:05:30 -04:00
Luke Parker
faa43b6874 Move the featured address vectors to a vectors folder 2022-08-31 01:01:51 -04:00
Luke Parker
cc0c6fb5ac Apply the optimized pow to dalek-ff-group
Saves ~40% from Monero hash_to_curve, assisting with #68.
2022-08-31 00:57:23 -04:00
Luke Parker
c5256d9b06 Use ChaCha20 instead of ChaCha12
Despite being slower, and only being used for blinding values, it's still
extremely performant. 20 is far more standard and will avoid raised
eyebrows from reviewers.
2022-08-30 20:01:46 -04:00
Luke Parker
6093f4ec93 Fix clippy 2022-08-30 16:48:59 -04:00
Luke Parker
139dcde69c Support including arbitrary data in TXs and return it with outputs
Fixes a bug where all payments were identified as being to (0, 0) instead
of their actual subaddress.
2022-08-30 15:42:23 -04:00
Luke Parker
09fdc1b77a Close https://github.com/serai-dex/serai/issues/93 2022-08-30 14:35:33 -04:00
Luke Parker
8d56c43f5e Update to work against the latest foundry
Links the Ethereum contract tests as well
2022-08-30 02:13:53 -04:00
Luke Parker
91f62672d3 fmt again
I botched it. I am sorry.
2022-08-30 01:52:00 -04:00
Luke Parker
6045f4ae59 fmt/clippy for previous commit 2022-08-30 01:05:18 -04:00
Luke Parker
d620231530 Remove Monero torsion-free requirement and make output keys 32 bytes
Maintains the torsion-free requirement in the one place it's used (key 
images).

In the modern day, the output keys are checked to be points, yet in 
older protocol versions they were allowed to be arbitrary bytes.

Closes https://github.com/serai-dex/serai/issues/23 and 
https://github.com/serai-dex/serai/issues/25.
2022-08-30 01:02:55 -04:00
Luke Parker
ee6316b26b Use a Group::random which doesn't have a known DL
While Group::random shouldn't be used instead of a hash to curve, anyone 
who did would've previously been insecure and now isn't.

Could've done a recover_x and a raw Point construction, followed by a
cofactor mul, to avoid the serialization, yet the serialization ensures
full validity under the standard from_bytes function. This also doesn't
need to be micro-optimized.
2022-08-29 13:02:20 -04:00
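
A minimal sketch of the approach described above, under illustrative
assumptions (curve25519-dalek and rand_core as dependencies; a hypothetical
helper, not the dalek-ff-group implementation): decompress random bytes,
which validates the encoding the way from_bytes would, then clear the
cofactor.

use curve25519_dalek::edwards::{CompressedEdwardsY, EdwardsPoint};
use rand_core::{CryptoRng, RngCore};

// Sample a point with no known discrete logarithm by decompressing random
// bytes and clearing the cofactor, instead of multiplying the basepoint by a
// random scalar (which would leave the caller knowing the DL).
fn random_point_without_known_dl<R: RngCore + CryptoRng>(rng: &mut R) -> EdwardsPoint {
  loop {
    let mut bytes = [0u8; 32];
    rng.fill_bytes(&mut bytes);
    if let Some(point) = CompressedEdwardsY(bytes).decompress() {
      return point.mul_by_cofactor();
    }
  }
}
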
Luke Parker
b97713aac7 Add unnecessary imports to the Ed448 backend to enable publishing
Doesn't change dependencies.
2022-08-29 03:49:40 -04:00
Luke Parker
d6a31863c4 Version bump dalek-ff-group 2022-08-29 03:46:48 -04:00
Luke Parker
081b9a1975 FROST Ed448 (#107)
* Theoretical ed448 impl

* Fixes

* Basic tests

* More efficient scalarmul

Precomputes a table to minimize additions required.

* Add a torsion test

* Split into a constant and variable time backend

The variable time one is still far too slow, at 53s for the tests (~5s a 
scalarmul). It should be usable as a PoC though.

* Rename unsafe Ed448

It's not only unworthy of the Serai branding, it also deserves more
clarity in the name.

* Add wide reduction to ed448

* Add Zeroize to Ed448

* Rename Ed448 group.rs to point.rs

* Minor lint to FROST

* Ed448 ciphersuite with 8032 test vector

* Macro out the backend fields

* Slight efficiency improvement to point decompression

* Disable the multiexp test in FROST for Ed448

* fmt + clippy ed448

* Fix an infinite loop in the constant time ed448 backend

* Add b"chal" to the 8032 context string for Ed448

Successfully tests against proposed vectors for the FROST IETF draft.

* Fix fmt and clippy

* Use a tabled pow algorithm in ed448's const backend

* Slight tweaks to variable time backend

Stop from_repr(MODULUS) from passing.

* Use extended points

Almost two orders of magnitude faster.

* Efficient ed448 doubling

* Remove the variable time backend

With the recent performance improvements, the constant time backend is
now 4x faster than the variable time backend was. While a variable time
backend would remain much faster, and the constant time backend is still
slow compared to other libraries, it's sufficiently performant now.

The FROST test, which runs a series of multiexps over the curve, does
take 218.26s, while Ristretto takes 1s and secp256k1 takes 4.57s.

While 50x slower than secp256k1 is horrible, it's ~1.5 orders of
magnitude, which is close enough to the desire stated in
https://github.com/serai-dex/serai/issues/108 to meet it.

Largely makes this library safe to use.

* Correct constants in ed448

* Rename unsafe-ed448 to minimal-ed448

Enables all FROST tests against it.

* No longer require the hazmat feature to use ed448

* Remove extraneous as_refs
2022-08-29 02:32:59 -05:00
Luke Parker
f71f19e26c Add a repository field to the DLEq Cargo.toml 2022-08-26 09:10:34 -04:00
Luke Parker
33ee6b7a02 Bump FROST version 2022-08-26 09:09:18 -04:00
Luke Parker
dbd39b557e Add a basic CONTRIBUTING file (#77)
* Add a basic CONTRIBUTING file

* Mild updates to CONTRIBUTING, mainly about formatting

* Add import ordering to CONTRIBUTING
2022-08-26 07:41:00 -05:00
Luke Parker
a8a00598e4 Update to FROST v8 2022-08-26 05:59:43 -04:00
Luke Parker
4881ddae87 Update Monero crate description 2022-08-25 04:02:30 -04:00
Luke Parker
546b772be3 Clarify licensing per https://github.com/serai-dex/serai/issues/13
Implements bullets 2-4 of 
https://github.com/serai-dex/serai/issues/13#issuecomment-1212689876.
2022-08-25 04:02:13 -04:00
Luke Parker
5b2940e161 Lint previous commit 2022-08-22 13:35:49 -04:00
Luke Parker
5c106cecf6 Fix https://github.com/serai-dex/serai/issues/105 2022-08-22 12:15:14 -04:00
Luke Parker
5a1f011db8 Fix https://github.com/serai-dex/serai/issues/106 2022-08-22 08:57:36 -04:00
Luke Parker
5751204a98 Introduce a subaddress capable scanner type 2022-08-22 08:32:09 -04:00
Luke Parker
19f5fd8fe9 Include subaddress and payment ID in SpendableOutput 2022-08-22 07:22:54 -04:00
Luke Parker
f0b914c721 Support sending to integrated addresses 2022-08-22 06:54:01 -04:00
Luke Parker
99b683b843 Update processor 2022-08-22 05:27:35 -04:00
Luke Parker
331aa6c644 Implement Featured Addresses
Closes https://github.com/serai-dex/serai/issues/37.
2022-08-22 04:27:58 -04:00
Luke Parker
3fffea178f Visibility fixes 2022-08-21 11:29:01 -04:00
Luke Parker
d596eeee6e Update visibility of various items in Monero 2022-08-21 11:06:17 -04:00
Luke Parker
60d93c4b2d Update BP fee_weight
Closes https://github.com/serai-dex/serai/issues/8.
2022-08-21 10:35:10 -04:00
Luke Parker
755d021f8e Canonicalize read_varint
There is a slight note that we only implement u64 varints, while Monero
does it for arbitrary uints, yet that's not relevant at this time. It is
documented in #25.
2022-08-21 08:58:28 -04:00
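
For illustration, a rough sketch of such a reader (a hypothetical helper,
not the monero-serai function): a little-endian base-128 varint limited to
u64, erroring once the encoding can no longer fit in 64 bits. Further
canonicity checks are omitted for brevity.

use std::io::{self, Read};

// Read a little-endian base-128 varint into a u64, erroring if the encoding
// overflows 64 bits.
fn read_u64_varint<R: Read>(reader: &mut R) -> io::Result<u64> {
  let mut res = 0u64;
  let mut shift = 0u32;
  loop {
    let mut byte = [0u8; 1];
    reader.read_exact(&mut byte)?;
    let bits = u64::from(byte[0] & 0x7f);
    // A u64 holds at most ten 7-bit groups, and the tenth may only carry one bit
    if (shift > 63) || ((shift == 63) && (bits > 1)) {
      return Err(io::Error::new(io::ErrorKind::InvalidData, "varint overflows u64"));
    }
    res |= bits << shift;
    if (byte[0] & 0x80) == 0 {
      return Ok(res);
    }
    shift += 7;
  }
}
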
Luke Parker
c5beee5648 Fix #48
Removes monero, yet we still use monero-rs's base58 and epee libraries.
2022-08-21 08:41:19 -04:00
Luke Parker
d12507e612 Fix a DoS in Monero
A malicious TX could cause an arbitrary amount of memory to be allocated 
despite not even containing that amount of data.
2022-08-21 07:52:49 -04:00
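
A hedged sketch of the general mitigation for this class of bug, not the
actual monero-serai change: cap the capacity preallocated from an
attacker-controlled length, so the read fails for lack of data before memory
is exhausted. read_element is a hypothetical callback.

use std::io::{self, Read};

// Read `declared_len` elements without trusting that length for allocation.
// A TX claiming billions of elements can no longer force a huge up-front
// allocation; the loop simply errors once the reader runs out of data.
fn read_bounded_vec<R: Read, T>(
  reader: &mut R,
  declared_len: usize,
  read_element: impl Fn(&mut R) -> io::Result<T>,
) -> io::Result<Vec<T>> {
  let mut res = Vec::with_capacity(declared_len.min(256));
  for _ in 0 .. declared_len {
    res.push(read_element(reader)?);
  }
  Ok(res)
}
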
Luke Parker
8af3a9ea51 Correct Monero build script 2022-08-21 06:48:59 -04:00
Luke Parker
603a3f8c9f Generate Bulletproofs(+) generators at compile time
Creates a new monero-generators crate so the monero crate can run the 
code in question at build time.

Saves several seconds from running the tests.

Closes https://github.com/serai-dex/serai/issues/101.
2022-08-21 06:36:53 -04:00
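
A minimal sketch of the compile-time generation pattern, assuming a
hypothetical generate_generator_source() standing in for the actual
derivation of the Bulletproofs(+) generators:

// build.rs
use std::{env, fs, path::Path};

// Stand-in for the expensive derivation performed at build time; in practice
// this would emit the generators as Rust constants.
fn generate_generator_source() -> String {
  String::from("pub const GENERATORS_LEN: usize = 128;\n")
}

fn main() {
  // Write the generated source into OUT_DIR so cargo doesn't complain about
  // files appearing inside the source tree.
  let out_dir = env::var("OUT_DIR").expect("build scripts always have OUT_DIR set");
  fs::write(Path::new(&out_dir).join("generators.rs"), generate_generator_source())
    .expect("couldn't write generated source");
  println!("cargo:rerun-if-changed=build.rs");
}

The library side can then pull the constants in with
include!(concat!(env!("OUT_DIR"), "/generators.rs"));.
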
Luke Parker
577fe99a08 Fix https://github.com/serai-dex/serai/issues/18 2022-08-21 05:13:07 -04:00
Luke Parker
00d61286b1 Lint Monero serialization 2022-08-21 04:41:55 -04:00
Luke Parker
76db682a25 Replace static Scalar with a uint conversion in BP+ 2022-08-21 00:46:23 -04:00
Luke Parker
f319966dca Lint Getting Started document 2022-08-21 00:45:41 -04:00
TheArchitect108
74e2d230e8 Simple getting started doc (#91)
* simple getting started doc

* Swap Old Ubuntu and Solc

* Fixes indents and removes OS preferences

* drops indents from code blocks
2022-08-20 23:26:16 -05:00
Luke Parker
c53e7ad6c7 Bump dalek-ff-group version 2022-08-18 17:11:55 -04:00
J. Burfeind
a2aa182cc4 Conditional negate (#90)
* Reorder tests in dalek-ff-group

* Add required method for ConditionallyNegatable

Adds lifetime bound implementation `Neg`
for borrowed FieldElements in dalek-ff-group.
2022-08-18 15:02:31 -05:00
aiyion.prime
45912d6837 Add implementation for sqrt_ratio_i()
in dalek-ff-group
2022-08-18 13:38:57 -05:00
Luke Parker
0543f3c469 Patch Benchmark 2022-08-16 17:43:46 -04:00
Luke Parker
f809827acd cargo update
Fixes https://github.com/serai-dex/serai/issues/82.
2022-08-16 03:44:32 -04:00
Luke Parker
a73bcc908f Add missing test annotation 2022-08-13 19:43:43 -04:00
Luke Parker
75c3cdc5af Comment the previous commit
Despite the intentions of https://github.com/serai-dex/serai/issues/85, 
it failed to be practically faster :/

Updates a DLEq test to be better as well.
2022-08-13 19:43:18 -04:00
Luke Parker
062cd77a98 Close https://github.com/serai-dex/serai/issues/85 2022-08-13 19:21:12 -04:00
Luke Parker
5d7798c5fb FROST clippy 2022-08-13 09:46:54 -04:00
Luke Parker
280fc441a7 Lint FROST
Corrects errors introduced a couple commits ago as well.
2022-08-13 08:50:59 -04:00
Luke Parker
454b73aec3 Add FROST key promotion
Closes https://github.com/serai-dex/serai/issues/72.

Adds a trait, with a commented impl for a semi-unsafe niche feature, 
which will be used in https://github.com/serai-dex/serai/issues/73.
2022-08-13 08:50:59 -04:00
Luke Parker
885d816309 Use a non-constant generator in FROST 2022-08-13 08:50:59 -04:00
Luke Parker
6f776ff004 Recalculate the group key instead of serializing it
Solves an issue with promotion.
2022-08-13 08:50:59 -04:00
Luke Parker
73205c5f96 Transcript the offset as a point
Potentially improves privacy with the reversion to a coordinator 
setting, where the coordinator is the only party with the offset. While 
any signer (or anyone) can claim key A relates to B, they can't prove it 
without the discrete log of the offset. This enables creating a signing 
process without a known offset, while maintaining a consistent 
transcript format.

Doesn't affect security given a static generator. Does have a slight 
effect on performance.
2022-08-13 08:50:59 -04:00
J. Burfeind
a58b3a133c Add implementation for is_odd() (#79)
in dalek-ff-group

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2022-08-12 15:05:48 -05:00
J. Burfeind
169d5e26ca Add constant EDWARDS_D in dalek-ff-group (#78) 2022-08-12 15:00:55 -05:00
Luke Parker
96a49d8a88 Remove unnecessary parentheses 2022-08-12 15:53:48 -04:00
Luke Parker
a423c23c1e Use zeroize instead of 0-sets 2022-08-12 01:14:13 -04:00
Luke Parker
42a3d38b48 Zeroize buffer used in Scalar::from_hash
from_hash is frequently used for private key/nonce generation, making 
this buffer a copy of private keys/nonces.
2022-08-04 14:40:54 -04:00
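
A sketch of the idea under illustrative assumptions (sha2, curve25519-dalek,
and zeroize as dependencies; not the dalek-ff-group code): copy the digest
into a local buffer, reduce it to a scalar, then wipe the buffer.

use sha2::{Digest, Sha512};
use curve25519_dalek::scalar::Scalar;
use zeroize::Zeroize;

// Derive a scalar from a wide hash while wiping the intermediate bytes, since
// they're effectively a copy of the private key/nonce being generated.
fn scalar_from_hash(data: &[u8]) -> Scalar {
  let mut buf = [0u8; 64];
  buf.copy_from_slice(&Sha512::digest(data));
  let scalar = Scalar::from_bytes_mod_order_wide(&buf);
  buf.zeroize();
  scalar
}
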
Luke Parker
797be71eb3 Utilize zeroize (#76)
* Apply Zeroize to nonces used in Bulletproofs

Also makes bit decomposition constant time for a given amount of 
outputs.

* Fix nonce reuse for single-signer CLSAG

* Attach Zeroize to most structures in Monero, and ZeroizeOnDrop to anything with private data

* Zeroize private keys and nonces

* Merge prepare_outputs and prepare_transactions

* Ensure CLSAG is constant time

* Pass by borrow where needed, bug fixes

The past few commits have been one in-progress chunk which I've broken
up to read as well as possible.

* Add Zeroize to FROST structs

Still needs to zeroize internally, yet next step. Not quite as 
aggressive as Monero, partially due to the limitations of HashMaps, 
partially due to less concern about metadata, yet does still delete a 
few smaller items of metadata (group key, context string...).

* Remove Zeroize from most Monero multisig structs

These structs largely didn't have private data, just fields with private 
data, yet those fields implemented ZeroizeOnDrop making them already 
covered. While there is still traces of the transaction left in RAM, 
fully purging that was never the intent.

* Use Zeroize within dleq

bitvec doesn't offer Zeroize, so a manual zeroing has been implemented.

* Use Zeroize for random_nonce

It isn't perfect, due to the inability to zeroize the digest, and due to 
kp256 requiring a few transformations. It does the best it can though.

Does move the per-curve random_nonce to a provided one, which is allowed 
as of https://github.com/cfrg/draft-irtf-cfrg-frost/pull/231.

* Use Zeroize on FROST keygen/signing

* Zeroize constant time multiexp.

* Correct when FROST keygen zeroizes

* Move the FROST keys Arc into FrostKeys

Reduces amount of instances in memory.

* Manually implement Debug for FrostCore to not leak the secret share

* Misc bug fixes

* clippy + multiexp test bug fixes

* Correct FROST key gen share summation

It leaked our own share for ourself.

* Fix cross-group DLEq tests
2022-08-03 03:25:18 -05:00
Luke Parker
a30568ff57 Add init function for BP statics
Considering they take 7 seconds to generate, thanks to #68, the ability 
to generate them at the start instead of on first BP is greatly 
appreciated.

Also performs minor cleanups regarding BPs.
2022-08-02 15:52:27 -04:00
Luke Parker
9221dbf048 Bulletproofs+ Verification 2022-08-01 23:30:24 -04:00
Luke Parker
d07fe34a24 Reorganize bulletproofs 2022-07-31 23:12:45 -04:00
Luke Parker
1c4707136c Ban unreduced points in Monero 2022-07-31 22:46:46 -04:00
Luke Parker
6340607827 BP Verification (#75)
* Use a struct in an enum for Bulletproofs

* verification bp working for just one proof

* add some more assert tests

* Clean BP verification

* Implement batch verification

* Add a debug assertion w_cache isn't 0

It's initially set to 0 and if not updated, this would be broken.

* Correct Monero workflow yaml

* Again try to correct Monero workflow yaml

* Again

* Finally

* Re-apply weights as required by Bulletproofs

Removing these was insecure and my fault.

Co-authored-by: DangerousFreedom <dangfreed@tutanota.com>
2022-07-31 21:45:53 -05:00
Luke Parker
0453b6cbc1 Correct Monero workflow yaml 2022-07-30 19:32:27 -04:00
Luke Parker
8f76e67f57 Rename dleq-serai to dleq 2022-07-30 18:35:39 -04:00
Luke Parker
aeb85b47ba Reduce amount of tests run in monero-tests 2022-07-30 04:49:55 -04:00
Luke Parker
534f951165 Consolidate GitHub CI actions, split out Monero (#71)
* Consolidate GitHub CI actions, split out Monero

build now includes the specified Rust toolchain/components.

Added a test dependencies action which grabs Foundry and Monero.

Split the Monero v14 job into a matrixed job in its own workflow flow. 
It's now only run when Monero has changes.

* Correct Monero unit/integration tests run timing

Additionally tests a feature-less Monero build.

Also removes a pointless Monero file, which already should have been 
removed, causing this workflow to be triggered.

* Correct exclusion and paths

Updates to FROST should re-run the Monero tests to ensure it didn't 
introduce API incompatibilities.
2022-07-29 09:36:09 -05:00
Luke Parker
33c55b8506 Extend the test workflow to also test against Monero v14 (v0.17) 2022-07-28 23:31:27 -04:00
Luke Parker
bba93a64c2 Implement view tags 2022-07-27 06:29:14 -04:00
Luke Parker
755dc84859 Replace rand with rand_core where possible
Turns out rand_core offers OsRng.
2022-07-27 05:45:08 -04:00
Luke Parker
023afaf7ce Bulletproofs+ (#70)
* Initial stab at Bulletproofs+

Does move around the existing Bulletproofs code, does still work as 
expected.

* Make the Clsag RCTPrunable type work with BP and BP+

* Initial set of BP+ bug fixes

* Further bug fixes

* Remove RING_LEN as a constant

* Monero v16 TX support

Doesn't implement view tags, nor going back to v14, nor the updated BP 
clawback logic.

* Support v14 and v16 at the same time
2022-07-27 04:05:43 -05:00
Luke Parker
37b8e3c025 Modularize Bulletproofs in prep for BP+ 2022-07-26 08:06:56 -04:00
Luke Parker
60e15d5160 Remove re-calculation of N
Moves most BP assertions to debug.
2022-07-26 05:31:15 -04:00
Luke Parker
7d9834be87 Correct clippy, remove Monero build depends 2022-07-26 03:48:46 -04:00
Luke Parker
696da8228e Remove Monero as a dependency
Introduces missing CLSAG checks. The only difference now should be the 
additional rejection of torsioned points, which is relevant to 
https://github.com/serai-dex/serai/issues/25. Considering this is only 
currently used for FROST verification, this should be fine.

Closes https://github.com/serai-dex/serai/issues/19 by making it 
irrelevant.

Increases priority of https://github.com/serai-dex/serai/issues/68, as 
now it's used for the BP generators which are done at first-proof.

Also merges BP's stricter hash_to_point with the library's, since CLSAG 
has the same bound.
2022-07-26 03:25:57 -04:00
Luke Parker
ee29f6d6d8 Implement Bulletproofs in Rust (#69)
* Initial attempt at Bulletproofs

I don't know why this doesn't work. The generators and hash_cache line
up without issue. AFAICT, the inner product proof is valid as well, as
are all included formulas.

* Add yinvpow asserts

* Clean code

* Correct bad imports

* Fix the definition of TWO_N

Bulletproofs work now :D

* Tidy up a bit

* fmt + clippy

* Compile a variety of XMR dependencies with optimizations, even under dev

The Rust bulletproof implementation is 8% slower than C right now, under 
release. This is acceptable, even if suboptimal. Under debug, they take 
a quarter of a second to two seconds though, depending on the amount of 
outputs, which justifies this move.

* Remove unnecessary deref in BPs
2022-07-26 02:05:15 -05:00
Luke Parker
3711e13009 Remove old duplicates of the AGPL-3.0 2022-07-24 09:33:08 -04:00
Luke Parker
f25bd88030 Test bulletproof creation and verification 2022-07-24 09:00:55 -04:00
Luke Parker
10ab467160 Don't use a constant for H yet re-calculate it 2022-07-24 08:57:33 -04:00
Luke Parker
1362764b2b Only cache cargo registry and git 2022-07-23 07:10:25 -04:00
Luke Parker
18a1d15f78 Use composite actions in CI (#65)
* Attempt composite actions in CI

* Remove needs monero-daemon for the action

* Correct actions folder layout

* Remove empty inputs/outputs, add shell

* Try moving env declaration spot

* Remove usage of env

* Cached Rust composite action

* Replace [] with ""

Remove empty outputs
2022-07-23 06:05:31 -05:00
Luke Parker
bb3ffa9021 Add social links to the README 2022-07-23 05:20:36 -04:00
Luke Parker
5b80ead18c Remove the build CI task now
It's identical to test, except it doesn't grab Foundry nor spawn a 
Monero regtest daemon. It doubles the amount of time test takes though, 
as it's doing everything twice.

While it may have value as a component, we're not using it like that 
right now, and if desired, we could add it back. While it may have value 
to produce binaries, we're not doing that either, and it wasn't
building in release.
2022-07-23 05:07:13 -04:00
Luke Parker
42d62c38b9 Remove the Monero build (#64)
* Remove the Monero CMake and make

* Download the Monero daemon instead of building it

* Cache the Monero daemon

Prevents hammering the Monero servers, should reduce CI time.

* Correct YAML

* Add back sodium-dev

* Create an independent job for downloading the Monero daemon

Improves parallelism while decreasing the amount of work re-done if 
build fails. Also increases modularity.

* Correct Monero job definition

* Correct skipping the Monero download on cache hit
2022-07-23 03:35:32 -05:00
Luke Parker
b80c1bec4c Update dependencies
ethers previously used a git spec due to depending on not-yet-published 
updates. Now that they've been released, a properly published version is 
used.
2022-07-22 12:36:30 -04:00
noot
bd93d6ec8a set up CI (#45)
* begin to setup ci

* attempt to fix build

* fix paths in build script

* fix

* satisfy clippy

* update fmt check to use nightly

* use nightly for build

* fmt

* fix fmt install

* update test script

* try to fix fmt

* merge w develop

* maybe fix build script

* install wasm toolchain

* install solc-select, use stable rust to build

* Correct clippy warnings

Currently intended to be done with:
cargo clippy --features "recommended merlin batch serialize experimental 
ed25519 ristretto p256 secp256k1 multisig" -- -A clippy::type_complexity 
-A dead_code

* Remove try-runtime

I tried to get this to work for an hour. I have no idea why it doesn't, 
yet it doesn't.

* Rewrite workflow

Splits tasks into a more modular structure. Also uses 
actions-rs/toolchain.

* Add a cache

* Immediately try building ETH/Monero while this is fixed

Adds solc-select use.

* Revert selective advance building of ETH/XMR

ETH builds now, so it hopefully should work now.

Also moves from on push to on push to develop.

* Install Monero runtime dependencies

Specify missing Rust toolchain setting.

* Correct multi-line commands

* Fix multi-line commands again

Cache Ethereum artifacts.

* Add Foundry

* Move Clippy under build

* Minimal rustup

Adds wasm Clippy. Puts Clippy before build.

* Use nightly clippy

* Remove old clippy call from under build

* Have the Monero build script support ARCH specification

Requirement for CI.

* Add WASM toolchain to tests

* Remove Ethereum cache which did not work as needed

* Remove extraneous quotes which broke builds on Arch

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2022-07-22 11:31:29 -05:00
Luke Parker
76a7160ea5 Correct clippy warnings
Currently intended to be done with:
cargo clippy --features "recommended merlin batch serialize experimental 
ed25519 ristretto p256 secp256k1 multisig" -- -A clippy::type_complexity 
-A dead_code
2022-07-22 02:35:17 -04:00
Luke Parker
3556584478 Correct missing escape sequences 2022-07-22 00:32:18 -04:00
Luke Parker
9f6eb205b0 Address review comments from #53 2022-07-21 23:30:51 -05:00
Luke Parker
e617783f09 Correct bullet point spacing 2022-07-21 23:30:51 -05:00
Luke Parker
146db6836e Update Validators doc per https://github.com/serai-dex/serai/issues/55 2022-07-21 23:30:51 -05:00
Luke Parker
cd8b116fd8 cargo fmt 2022-07-21 23:30:51 -05:00
Luke Parker
a733bb5865 Update the ink! contract to match docs 2022-07-21 23:30:51 -05:00
Luke Parker
375967b165 Correct table formatting and clarify network docs 2022-07-21 23:30:51 -05:00
Luke Parker
4186bc93a8 Update the Multisig documentation to be designed around Validator Sets 2022-07-21 23:30:51 -05:00
Luke Parker
1994dab634 Add documentation on Validator Sets 2022-07-21 23:30:51 -05:00
Luke Parker
d320af06a7 Rewrite the Validators spec
Moves Oraclization/Report to Consensus for now.
2022-07-21 23:30:51 -05:00
Luke Parker
1b461ca5be Split Validators and Consensus docs 2022-07-21 23:30:51 -05:00
Luke Parker
895fbae2dc Add a full success route test for the multisig contract 2022-07-21 23:30:51 -05:00
Luke Parker
21e555192c Add subsequent_vote test
This is the contracts/extension that triggered a Rust ICE, as noted in 
my issue there.
2022-07-21 23:30:51 -05:00
Luke Parker
aa0d364fc2 First passing multisig vote test 2022-07-21 23:30:51 -05:00
Luke Parker
43c4487804 Create a dedicated crate for the extension 2022-07-21 23:30:51 -05:00
Luke Parker
5583bf3447 Initial multisig tracking contract in ink 2022-07-21 23:30:51 -05:00
Luke Parker
be921ab2d3 Document various Scenarios
- Pong
- Wrap
- SRI -> BTC
- BTC -> Monero
- Add Liquidity (fresh)
- Add Liquidity (SRI holder)
2022-07-21 23:22:48 -05:00
Luke Parker
c48992ab94 Update according to comment 2022-07-21 23:22:48 -05:00
Luke Parker
f7f67f72a2 Correct link in Instructions 2022-07-21 23:22:48 -05:00
Luke Parker
c3ab201517 Document Serai's Application Calls and update Instructions accordingly 2022-07-21 23:22:48 -05:00
Luke Parker
9cc35a06ab Add authenticated calls to Ethereum
Also uses numbered lists for function descriptions.
2022-07-21 23:22:48 -05:00
Luke Parker
004086b85b Include origin as an Option in Shorthand
Converts (Network, Address) to Enum { Native(Address), Serai(Address) } 
as it's not valid to send Bitcoin to Ethereum.

Corrects a legacy comment regarding serialization.
2022-07-21 23:22:48 -05:00
Luke Parker
ae3525ca2c Document Instructions and various network's integrations
Tracking issue: https://github.com/serai-dex/serai/issues/57
2022-07-21 23:22:48 -05:00
silverpill
194c5acebb Fix compilation errors in monero-serai 2022-07-17 16:55:49 -05:00
Luke Parker
c0cac7591d Correct a missing fmt 2022-07-17 17:18:56 -04:00
Luke Parker
9cb2d8aa4a Integrate ink! 2022-07-16 21:06:54 -04:00
Luke Parker
314c9cd8f7 Clean Substrate Cargo.tomls 2022-07-16 20:53:28 -04:00
Luke Parker
2bddce2087 Add a patch for zip so ethereum-serai doesn't conflict with Substrate
Also commits the lock file and updates documentation.
2022-07-16 17:49:35 -04:00
noot
c589743e2b ethereum: implement schnorr verification contract deployment and related crypto (#36)
* basic schnorr verify working

* add schnorr-verify as submodule

* remove previous code

* Misc Ethereum work which will probably be disregarded

* add ecrecover hack test, works

* merge w develop

* starting w/ rust-web3

* trying to use ethers

* deploy_schnorr_verifier_contract finally working

* modify EthereumHram to use 27/28 for point parity

* updated address calc, solidity schnorr verify now working

* add verify failure to test

* update readme

* move ethereum/ to coins/

* un fmt coins/monero

* update .gitmodules

* fix cargo paths

* fix coins/monero

* add #[allow(non_snake_case)]

* un-fmt stuff

* move crypto to coins/ethereum

* move unit tests to ethereum/tests

* remove js, build w ethers

* update .gitignore

* address comments

* add q != 0 check

* update contract param order

* update contract license to AGPL

* update ethereum-serai license to GPL and fmt

* GPLv3 for ethereum-serai

* AGPLv3 for ethereum-serai

* actually fix license

Co-authored-by: Luke Parker <lukeparker5132@gmail.com>
2022-07-16 16:45:41 -05:00
Luke Parker
e67033a207 Apply an initial set of rustfmt rules 2022-07-16 15:16:30 -05:00
Luke Parker
0b879a53fa Add an initial Substrate instantiation
Consensus has been nuked for an AcceptAny currently routed through PoW
(when it doesn't have to be, doing so just took care of a few pieces of 
leg work).

Updates AGPL handling.
2022-07-15 00:05:00 -04:00
Luke Parker
5ede5b9e8f Update the DLEq proof for any amount of generators
The two-generator limit wasn't required nor beneficial. This does 
theoretically optimize FROST, yet not for any current constructions. A 
follow up proof which would optimize current constructions has been 
noted in #38.

Adds explicit no_std support to the core DLEq proof.

Closes #34.
2022-07-13 23:29:48 -04:00
Luke Parker
46975812c3 Add a copy of the AGPL license text to processor/ 2022-07-13 16:12:19 -04:00
630 changed files with 126828 additions and 6223 deletions

5
.gitattributes vendored Normal file

@@ -0,0 +1,5 @@
# Auto detect text files and perform LF normalization
* text=auto
* text eol=lf
*.pdf binary

21
.github/actions/LICENSE vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

40
.github/actions/bitcoin/action.yml vendored Normal file

@@ -0,0 +1,40 @@
name: bitcoin-regtest
description: Spawns a regtest Bitcoin daemon
inputs:
version:
description: "Version to download and run"
required: false
default: 24.0.1
runs:
using: "composite"
steps:
- name: Bitcoin Daemon Cache
id: cache-bitcoind
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: bitcoin.tar.gz
key: bitcoind-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Bitcoin Daemon
if: steps.cache-bitcoind.outputs.cache-hit != 'true'
shell: bash
run: |
RUNNER_OS=linux
RUNNER_ARCH=x86_64
FILE=bitcoin-${{ inputs.version }}-$RUNNER_ARCH-$RUNNER_OS-gnu.tar.gz
wget https://bitcoincore.org/bin/bitcoin-core-${{ inputs.version }}/$FILE
mv $FILE bitcoin.tar.gz
- name: Extract the Bitcoin Daemon
shell: bash
run: |
tar xzvf bitcoin.tar.gz
cd bitcoin-${{ inputs.version }}
sudo mv bin/* /bin && sudo mv lib/* /lib
- name: Bitcoin Regtest Daemon
shell: bash
run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/bitcoin/run.sh -daemon


@@ -0,0 +1,49 @@
name: build-dependencies
description: Installs build dependencies for Serai
runs:
using: "composite"
steps:
- name: Remove unused packages
shell: bash
run: |
sudo apt remove -y "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
sudo apt remove -y "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
sudo apt remove -y "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
sudo apt remove -y "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
sudo apt autoremove -y
sudo apt clean
docker system prune -a --volumes
if: runner.os == 'Linux'
- name: Remove unused packages
shell: bash
run: |
(gem uninstall -aIx) || (exit 0)
brew uninstall --force "*msbuild*" "*powershell*" "*nuget*" "*bazel*" "*ansible*" "*terraform*" "*heroku*" "*aws*" azure-cli
brew uninstall --force "*nodejs*" "*npm*" "*yarn*" "*java*" "*kotlin*" "*golang*" "*swift*" "*julia*" "*fortran*" "*android*"
brew uninstall --force "*apache2*" "*nginx*" "*firefox*" "*chromium*" "*chrome*" "*edge*"
brew uninstall --force "*qemu*" "*sql*" "*texinfo*" "*imagemagick*"
brew cleanup
if: runner.os == 'macOS'
- name: Install dependencies
shell: bash
run: |
if [ "$RUNNER_OS" == "Linux" ]; then
sudo apt install -y ca-certificates protobuf-compiler
elif [ "$RUNNER_OS" == "Windows" ]; then
choco install protoc
elif [ "$RUNNER_OS" == "macOS" ]; then
brew install protobuf
fi
- name: Install solc
shell: bash
run: |
cargo install svm-rs
svm install 0.8.25
svm use 0.8.25
# - name: Cache Rust
# uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43


@@ -0,0 +1,49 @@
name: monero-wallet-rpc
description: Spawns a Monero Wallet-RPC.
inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.1
runs:
using: "composite"
steps:
- name: Monero Wallet RPC Cache
id: cache-monero-wallet-rpc
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: monero-wallet-rpc
key: monero-wallet-rpc-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Monero Wallet RPC
if: steps.cache-monero-wallet-rpc.outputs.cache-hit != 'true'
# Calculates OS/ARCH to demonstrate it, yet then locks to linux-x64 due
# to the contained folder not following the same naming scheme and
# requiring further expansion not worth doing right now
shell: bash
run: |
RUNNER_OS=${{ runner.os }}
RUNNER_ARCH=${{ runner.arch }}
RUNNER_OS=${RUNNER_OS,,}
RUNNER_ARCH=${RUNNER_ARCH,,}
RUNNER_OS=linux
RUNNER_ARCH=x64
FILE=monero-$RUNNER_OS-$RUNNER_ARCH-${{ inputs.version }}.tar.bz2
wget https://downloads.getmonero.org/cli/$FILE
tar -xvf $FILE
mv monero-x86_64-linux-gnu-${{ inputs.version }}/monero-wallet-rpc monero-wallet-rpc
- name: Monero Wallet RPC
shell: bash
run: |
./monero-wallet-rpc --allow-mismatched-daemon-version \
--daemon-address 0.0.0.0:18081 --daemon-login serai:seraidex \
--disable-rpc-login --rpc-bind-port 18082 \
--wallet-dir ./ \
--detach

46
.github/actions/monero/action.yml vendored Normal file

@@ -0,0 +1,46 @@
name: monero-regtest
description: Spawns a regtest Monero daemon
inputs:
version:
description: "Version to download and run"
required: false
default: v0.18.3.1
runs:
using: "composite"
steps:
- name: Monero Daemon Cache
id: cache-monerod
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: /usr/bin/monerod
key: monerod-${{ runner.os }}-${{ runner.arch }}-${{ inputs.version }}
- name: Download the Monero Daemon
if: steps.cache-monerod.outputs.cache-hit != 'true'
# Calculates OS/ARCH to demonstrate it, yet then locks to linux-x64 due
# to the contained folder not following the same naming scheme and
# requiring further expansion not worth doing right now
shell: bash
run: |
RUNNER_OS=${{ runner.os }}
RUNNER_ARCH=${{ runner.arch }}
RUNNER_OS=${RUNNER_OS,,}
RUNNER_ARCH=${RUNNER_ARCH,,}
RUNNER_OS=linux
RUNNER_ARCH=x64
FILE=monero-$RUNNER_OS-$RUNNER_ARCH-${{ inputs.version }}.tar.bz2
wget https://downloads.getmonero.org/cli/$FILE
tar -xvf $FILE
sudo mv monero-x86_64-linux-gnu-${{ inputs.version }}/monerod /usr/bin/monerod
sudo chmod 777 /usr/bin/monerod
sudo chmod +x /usr/bin/monerod
- name: Monero Regtest Daemon
shell: bash
run: PATH=$PATH:/usr/bin ./orchestration/dev/coins/monero/run.sh --detach


@@ -0,0 +1,38 @@
name: test-dependencies
description: Installs test dependencies for Serai
inputs:
monero-version:
description: "Monero version to download and run as a regtest node"
required: false
default: v0.18.3.1
bitcoin-version:
description: "Bitcoin version to download and run as a regtest node"
required: false
default: 24.0.1
runs:
using: "composite"
steps:
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install Foundry
uses: foundry-rs/foundry-toolchain@cb603ca0abb544f301eaed59ac0baf579aa6aecf
with:
version: nightly-09fe3e041369a816365a020f715ad6f94dbce9f2
cache: false
- name: Run a Monero Regtest Node
uses: ./.github/actions/monero
with:
version: ${{ inputs.monero-version }}
- name: Run a Bitcoin Regtest Node
uses: ./.github/actions/bitcoin
with:
version: ${{ inputs.bitcoin-version }}
- name: Run a Monero Wallet-RPC
uses: ./.github/actions/monero-wallet-rpc

1
.github/nightly-version vendored Normal file

@@ -0,0 +1 @@
nightly-2024-02-07

35
.github/workflows/coins-tests.yml vendored Normal file

@@ -0,0 +1,35 @@
name: coins/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
workflow_dispatch:
jobs:
test-coins:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p bitcoin-serai \
-p ethereum-serai \
-p monero-generators \
-p monero-serai

31
.github/workflows/common-tests.yml vendored Normal file

@@ -0,0 +1,31 @@
name: common/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
pull_request:
paths:
- "common/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p std-shims \
-p zalloc \
-p serai-db \
-p serai-env

40
.github/workflows/coordinator-tests.yml vendored Normal file

@@ -0,0 +1,40 @@
name: Coordinator Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/coordinator/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "coordinator/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/coordinator/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run coordinator Docker tests
run: cd tests/coordinator && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

40
.github/workflows/crypto-tests.yml vendored Normal file

@@ -0,0 +1,40 @@
name: crypto/ Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
workflow_dispatch:
jobs:
test-crypto:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p flexible-transcript \
-p ff-group-tests \
-p dalek-ff-group \
-p minimal-ed448 \
-p ciphersuite \
-p multiexp \
-p schnorr-signatures \
-p dleq \
-p dkg \
-p modular-frost \
-p frost-schnorrkel

24
.github/workflows/daily-deny.yml vendored Normal file

@@ -0,0 +1,24 @@
name: Daily Deny Check
on:
schedule:
- cron: "0 0 * * *"
jobs:
deny:
name: Run cargo deny
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
- name: Run cargo deny
run: cargo deny -L error --all-features check

22
.github/workflows/full-stack-tests.yml vendored Normal file

@@ -0,0 +1,22 @@
name: Full Stack Tests
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Full Stack Docker tests
run: cd tests/full-stack && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

83
.github/workflows/lint.yml vendored Normal file

@@ -0,0 +1,83 @@
name: Lint
on:
push:
branches:
- develop
pull_request:
workflow_dispatch:
jobs:
clippy:
strategy:
matrix:
os: [ubuntu-latest, macos-13, macos-14, windows-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -t wasm32-unknown-unknown -c clippy
- name: Run Clippy
run: cargo +${{ steps.nightly.outputs.version }} clippy --all-features --all-targets -- -D warnings -A clippy::items_after_test_module
# Also verify the lockfile isn't dirty
# This happens when someone edits a Cargo.toml yet doesn't do anything
# which causes the lockfile to be updated
# The above clippy run will cause it to be updated, so checking there's
# no differences present now performs the desired check
- name: Verify lockfile
shell: bash
run: git diff | wc -l | LC_ALL="en_US.utf8" grep -x -e "^[ ]*0"
deny:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Advisory Cache
uses: actions/cache@13aacd865c20de90d75de3b17ebe84f7a17d57d2
with:
path: ~/.cargo/advisory-db
key: rust-advisory-db
- name: Install cargo deny
run: cargo install --locked cargo-deny
- name: Run cargo deny
run: cargo deny -L error --all-features check
fmt:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Get nightly version to use
id: nightly
shell: bash
run: echo "version=$(cat .github/nightly-version)" >> $GITHUB_OUTPUT
- name: Install nightly rust
run: rustup toolchain install ${{ steps.nightly.outputs.version }} --profile minimal -c rustfmt
- name: Run rustfmt
run: cargo +${{ steps.nightly.outputs.version }} fmt -- --check
machete:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Verify all dependencies are in use
run: |
cargo install cargo-machete
cargo machete


@@ -0,0 +1,36 @@
name: Message Queue Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/message-queue/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "message-queue/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/message-queue/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run message-queue Docker tests
run: cd tests/message-queue && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

26
.github/workflows/mini-tests.yml vendored Normal file

@@ -0,0 +1,26 @@
name: mini/ Tests
on:
push:
branches:
- develop
paths:
- "mini/**"
pull_request:
paths:
- "mini/**"
workflow_dispatch:
jobs:
test-common:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p mini-serai

56
.github/workflows/monero-tests.yaml vendored Normal file

@@ -0,0 +1,56 @@
name: Monero Tests
on:
push:
branches:
- develop
paths:
- "coins/monero/**"
- "processor/**"
pull_request:
paths:
- "coins/monero/**"
- "processor/**"
workflow_dispatch:
jobs:
# Only run these once since they will be consistent regardless of any node
unit-tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
- name: Run Unit Tests Without Features
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --lib
# Doesn't run unit tests with features as the tests workflow will
integration-tests:
runs-on: ubuntu-latest
# Test against all supported protocol versions
strategy:
matrix:
version: [v0.17.3.2, v0.18.2.0]
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Test Dependencies
uses: ./.github/actions/test-dependencies
with:
monero-version: ${{ matrix.version }}
- name: Run Integration Tests Without Features
# Runs with the binaries feature so the binaries build
# https://github.com/rust-lang/cargo/issues/8396
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --features binaries --test '*'
- name: Run Integration Tests
# Don't run if the the tests workflow also will
if: ${{ matrix.version != 'v0.18.2.0' }}
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --package monero-serai --all-features --test '*'


@@ -0,0 +1,53 @@
name: Monthly Nightly Update
on:
schedule:
- cron: "0 0 1 * *"
jobs:
update:
name: Update nightly
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
with:
submodules: "recursive"
- name: Write nightly version
run: echo $(date +"nightly-%Y-%m"-01) > .github/nightly-version
- name: Create the commit
run: |
git config user.name "GitHub Actions"
git config user.email "<>"
git checkout -b $(date +"nightly-%Y-%m")
git add .github/nightly-version
git commit -m "Update nightly"
git push -u origin $(date +"nightly-%Y-%m")
- name: Pull Request
uses: actions/github-script@d7906e4ad0b1822421a7e6a35d5ca353c962f410
with:
script: |
const { repo, owner } = context.repo;
const result = await github.rest.pulls.create({
title: (new Date()).toLocaleString(
false,
{ month: "long", year: "numeric" }
) + " - Rust Nightly Update",
owner,
repo,
head: "nightly-" + (new Date()).toISOString().split("-").splice(0, 2).join("-"),
base: "develop",
body: "PR auto-generated by a GitHub workflow."
});
github.rest.issues.addLabels({
owner,
repo,
issue_number: result.data.number,
labels: ["improvement"]
});

35
.github/workflows/no-std.yml vendored Normal file

@@ -0,0 +1,35 @@
name: no-std build
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "tests/no-std/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "tests/no-std/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Install RISC-V Toolchain
run: sudo apt update && sudo apt install -y gcc-riscv64-unknown-elf gcc-multilib && rustup target add riscv32imac-unknown-none-elf
- name: Verify no-std builds
run: cd tests/no-std && CFLAGS=-I/usr/include cargo build --target riscv32imac-unknown-none-elf

90
.github/workflows/pages.yml vendored Normal file

@@ -0,0 +1,90 @@
# MIT License
#
# Copyright (c) 2022 just-the-docs
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
# Sample workflow for building and deploying a Jekyll site to GitHub Pages
name: Deploy Jekyll site to Pages
on:
push:
branches:
- "develop"
paths:
- "docs/**"
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write
# Allow one concurrent deployment
concurrency:
group: "pages"
cancel-in-progress: true
jobs:
# Build job
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: docs
steps:
- name: Checkout
uses: actions/checkout@v3
- name: Setup Ruby
uses: ruby/setup-ruby@v1
with:
bundler-cache: true
cache-version: 0
working-directory: "${{ github.workspace }}/docs"
- name: Setup Pages
id: pages
uses: actions/configure-pages@v3
- name: Build with Jekyll
run: bundle exec jekyll build --baseurl "${{ steps.pages.outputs.base_path }}"
env:
JEKYLL_ENV: production
- name: Upload artifact
uses: actions/upload-pages-artifact@v1
with:
path: "docs/_site/"
# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v2

40
.github/workflows/processor-tests.yml vendored Normal file

@@ -0,0 +1,40 @@
name: Processor Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/processor/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "orchestration/**"
- "tests/docker/**"
- "tests/processor/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run processor Docker tests
run: cd tests/processor && GITHUB_CI=true RUST_BACKTRACE=1 cargo test


@@ -0,0 +1,36 @@
name: Reproducible Runtime
on:
push:
branches:
- develop
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
pull_request:
paths:
- "Cargo.lock"
- "common/**"
- "crypto/**"
- "substrate/**"
- "orchestration/runtime/**"
- "tests/reproducible-runtime/**"
workflow_dispatch:
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Install Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Reproducible Runtime tests
run: cd tests/reproducible-runtime && GITHUB_CI=true RUST_BACKTRACE=1 cargo test

80
.github/workflows/tests.yml vendored Normal file

@@ -0,0 +1,80 @@
name: Tests
on:
push:
branches:
- develop
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
- "substrate/**"
pull_request:
paths:
- "common/**"
- "crypto/**"
- "coins/**"
- "message-queue/**"
- "processor/**"
- "coordinator/**"
- "substrate/**"
workflow_dispatch:
jobs:
test-infra:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-message-queue \
-p serai-processor-messages \
-p serai-processor \
-p tendermint-machine \
-p tributary-chain \
-p serai-coordinator \
-p serai-docker-tests
test-substrate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: |
GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features \
-p serai-primitives \
-p serai-coins-primitives \
-p serai-coins-pallet \
-p serai-dex-pallet \
-p serai-validator-sets-primitives \
-p serai-validator-sets-pallet \
-p serai-in-instructions-primitives \
-p serai-in-instructions-pallet \
-p serai-signals-pallet \
-p serai-runtime \
-p serai-node
test-serai-client:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac
- name: Build Dependencies
uses: ./.github/actions/build-dependencies
- name: Run Tests
run: GITHUB_CI=true RUST_BACKTRACE=1 cargo test --all-features -p serai-client

7
.gitignore vendored

@@ -1,2 +1,7 @@
target
Cargo.lock
Dockerfile
Dockerfile.fast-epoch
!orchestration/runtime/Dockerfile
.test-logs
.vscode

3
.gitmodules vendored

@@ -1,3 +0,0 @@
[submodule "coins/monero/c/monero"]
path = coins/monero/c/monero
url = https://github.com/monero-project/monero

17
.rustfmt.toml Normal file

@@ -0,0 +1,17 @@
edition = "2021"
tab_spaces = 2
max_width = 100
# Let the developer decide based on the 100 char line limit
use_small_heuristics = "Max"
error_on_line_overflow = true
error_on_unformatted = true
imports_granularity = "Crate"
reorder_imports = false
reorder_modules = false
unstable_features = true
spaces_around_ranges = true
binop_separator = "Back"

661
AGPL-3.0 Normal file

@@ -0,0 +1,661 @@
GNU AFFERO GENERAL PUBLIC LICENSE
Version 3, 19 November 2007
Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU Affero General Public License is a free, copyleft license for
software and other kinds of works, specifically designed to ensure
cooperation with the community in the case of network server software.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
An older license, called the Affero General Public License and
published by Affero, was designed to accomplish similar goals. This is
a different license, not a version of the Affero GPL, but Affero has
released a new version of the Affero GPL which permits relicensing under
this license.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU Affero General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.

37
CONTRIBUTING.md Normal file

@@ -0,0 +1,37 @@
# Contributing
Contributions come in a variety of forms. Developing Serai, helping document it,
using its libraries in another project, using and testing it, and simply sharing
it are all valuable ways of contributing.
This document will specifically focus on contributions to this repository in the
form of code and documentation.
### Rules
- Use stable Rust for native builds, and nightly Rust for wasm builds and tooling.
- `cargo fmt` must be used.
- `cargo clippy` must pass, except for the ignored rules (`type_complexity` and
`dead_code`).
- The CI must pass.
- Only use uppercase variable names when relevant to cryptography.
- Use a two-space indent when possible.
- Put a space after comment markers.
- Don't use multiple newlines between sections of code.
- Have a newline before EOF.
### Guidelines
- Sort imports as core, std, third party, and then Serai.
- Comment code reasonably.
- Include tests for new features.
- Sign commits.
### Submission
All submissions should be through GitHub. Contributions to a crate will be
licensed according to the crate's existing license, with the crate's copyright
holders (distinct from authors) having the right to re-license the crate via a
unanimous decision.
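A small hypothetical snippet (not taken from the repository) illustrating the style rules above: imports sorted as core, std, third party, and then Serai, two-space indentation, and a space after comment markers.
// core, then std, then third-party imports; Serai crates would follow last.
use core::fmt::Debug;
use std::collections::HashMap;

use rand_core::{RngCore, OsRng};

// Two-space indentation and a space after the comment marker.
fn tag_items<D: Debug>(items: &[D]) -> HashMap<String, u64> {
  let mut tagged = HashMap::new();
  for item in items {
    tagged.insert(format!("{item:?}"), OsRng.next_u64());
  }
  tagged
}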

10616
Cargo.lock generated Normal file

File diff suppressed because it is too large

Cargo.toml

@@ -1,15 +1,183 @@
[workspace]
resolver = "2"
members = [
# Version patches
"patches/zstd",
"patches/rocksdb",
"patches/proc-macro-crate",
# std patches
"patches/matches",
"patches/is-terminal",
# Rewrites/redirects
"patches/option-ext",
"patches/directories-next",
"common/std-shims",
"common/zalloc",
"common/db",
"common/env",
"common/request",
"crypto/transcript",
"crypto/ff-group-tests",
"crypto/dalek-ff-group",
"crypto/ed448",
"crypto/ciphersuite",
"crypto/multiexp",
"crypto/schnorr",
"crypto/dleq",
"crypto/dkg",
"crypto/frost",
"crypto/schnorrkel",
"coins/bitcoin",
"coins/ethereum",
"coins/monero/generators",
"coins/monero",
"message-queue",
"processor/messages",
"processor",
"coordinator/tributary/tendermint",
"coordinator/tributary",
"coordinator",
"substrate/primitives",
"substrate/coins/primitives",
"substrate/coins/pallet",
"substrate/in-instructions/primitives",
"substrate/in-instructions/pallet",
"substrate/validator-sets/primitives",
"substrate/validator-sets/pallet",
"substrate/signals/primitives",
"substrate/signals/pallet",
"substrate/abi",
"substrate/runtime",
"substrate/node",
"substrate/client",
"orchestration",
"mini",
"tests/no-std",
"tests/docker",
"tests/message-queue",
"tests/processor",
"tests/coordinator",
"tests/full-stack",
"tests/reproducible-runtime",
]
# Always compile Monero (and a variety of dependencies) with optimizations due
# to the extensive operations required for Bulletproofs
[profile.dev.package]
subtle = { opt-level = 3 }
curve25519-dalek = { opt-level = 3 }
ff = { opt-level = 3 }
group = { opt-level = 3 }
crypto-bigint = { opt-level = 3 }
dalek-ff-group = { opt-level = 3 }
minimal-ed448 = { opt-level = 3 }
multiexp = { opt-level = 3 }
monero-serai = { opt-level = 3 }
[profile.release]
panic = "unwind"
[patch.crates-io]
# https://github.com/rust-lang-nursery/lazy-static.rs/issues/201
lazy_static = { git = "https://github.com/rust-lang-nursery/lazy-static.rs", rev = "5735630d46572f1e5377c8f2ba0f79d18f53b10c" }
# Needed due to dockertest's usage of `Rc`s when we need `Arc`s
dockertest = { git = "https://github.com/kayabaNerve/dockertest-rs", branch = "arc" }
# wasmtime pulls in an old version for this
zstd = { path = "patches/zstd" }
# Needed for WAL compression
rocksdb = { path = "patches/rocksdb" }
# proc-macro-crate 2 binds to an old version of toml for msrv so we patch to 3
proc-macro-crate = { path = "patches/proc-macro-crate" }
# is-terminal now has an std-based solution with an equivalent API
is-terminal = { path = "patches/is-terminal" }
# So does matches
matches = { path = "patches/matches" }
# directories-next was created because directories was unmaintained
# directories-next is now unmaintained while directories is maintained
# The directories author pulls in ridiculously pointless crates and prefers
# copyleft licenses
# The following two patches resolve everything
option-ext = { path = "patches/option-ext" }
directories-next = { path = "patches/directories-next" }
[workspace.lints.clippy]
unwrap_or_default = "allow"
borrow_as_ptr = "deny"
cast_lossless = "deny"
cast_possible_truncation = "deny"
cast_possible_wrap = "deny"
cast_precision_loss = "deny"
cast_ptr_alignment = "deny"
cast_sign_loss = "deny"
checked_conversions = "deny"
cloned_instead_of_copied = "deny"
enum_glob_use = "deny"
expl_impl_clone_on_copy = "deny"
explicit_into_iter_loop = "deny"
explicit_iter_loop = "deny"
flat_map_option = "deny"
float_cmp = "deny"
fn_params_excessive_bools = "deny"
ignored_unit_patterns = "deny"
implicit_clone = "deny"
inefficient_to_string = "deny"
invalid_upcast_comparisons = "deny"
large_stack_arrays = "deny"
linkedlist = "deny"
macro_use_imports = "deny"
manual_instant_elapsed = "deny"
manual_let_else = "deny"
manual_ok_or = "deny"
manual_string_new = "deny"
map_unwrap_or = "deny"
match_bool = "deny"
match_same_arms = "deny"
missing_fields_in_debug = "deny"
needless_continue = "deny"
needless_pass_by_value = "deny"
ptr_cast_constness = "deny"
range_minus_one = "deny"
range_plus_one = "deny"
redundant_closure_for_method_calls = "deny"
redundant_else = "deny"
string_add_assign = "deny"
unchecked_duration_subtraction = "deny"
uninlined_format_args = "deny"
unnecessary_box_returns = "deny"
unnecessary_join = "deny"
unnecessary_wraps = "deny"
unnested_or_patterns = "deny"
unused_async = "deny"
unused_self = "deny"
zero_sized_map_values = "deny"
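The `[patch.crates-io]` section above redirects a handful of crates to thin local crates under `patches/`. As a rough sketch of that pattern (an assumption about their shape, not the actual contents of `patches/is-terminal`), such a shim can expose the old crate's API on top of the std-based solution the comment refers to:
// Hypothetical `patches/is-terminal/src/lib.rs`: provide the crate's API via
// std's own `IsTerminal` trait (stable since Rust 1.70).
pub trait IsTerminal {
  fn is_terminal(&self) -> bool;
}

impl<T: std::io::IsTerminal> IsTerminal for T {
  fn is_terminal(&self) -> bool {
    std::io::IsTerminal::is_terminal(self)
  }
}

pub fn is_terminal<T: std::io::IsTerminal>(stream: T) -> bool {
  std::io::IsTerminal::is_terminal(&stream)
}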

8
LICENSE Normal file

@@ -0,0 +1,8 @@
Serai crates are licensed under one of two licenses, either MIT or AGPL-3.0,
depending on the crate in question. Each crate declares its license in its
`Cargo.toml` and includes a `LICENSE` file detailing its status. Additionally,
a full copy of the AGPL-3.0 License is included in the root of this repository
as a reference text. This copy should be provided with any distribution of a
crate licensed under the AGPL-3.0, as per its terms.
The GitHub actions (`.github/actions`) are licensed under the MIT license.

README.md

@@ -1,22 +1,66 @@
# Serai
Serai is a new DEX, built from the ground up, initially planning on listing
Bitcoin, Ethereum, Monero, DAI, and USDC, offering a liquidity pool trading
experience. Funds are stored in an economically secured threshold multisig
Bitcoin, Ethereum, DAI, and Monero, offering a liquidity-pool-based trading
experience. Funds are stored in an economically secured threshold-multisig
wallet.
[Getting Started](spec/Getting%20Started.md)
### Layout
- `docs` - Documentation on the Serai protocol.
- `audits`: Audits for various parts of Serai.
- `coins` - Various coin libraries intended for usage in Serai yet also by the
wider community. This means they will always support the functionality Serai
needs, yet won't disadvantage other use cases when possible.
- `spec`: The specification of the Serai protocol, both internally and as
networked.
- `crypto` - A series of composable cryptographic libraries built around the
`ff`/`group` APIs achieving a variety of tasks. These range from generic
- `docs`: User-facing documentation on the Serai protocol.
- `common`: Crates containing utilities common to a variety of areas under
Serai, none neatly fitting under another category.
- `crypto`: A series of composable cryptographic libraries built around the
`ff`/`group` APIs, achieving a variety of tasks. These range from generic
infrastructure, to our IETF-compliant FROST implementation, to a DLEq proof as
needed for Bitcoin-Monero atomic swaps.
- `processor` - A generic chain processor to process data for Serai and process
- `coins`: Various coin libraries intended for usage in Serai yet also by the
wider community. This means they will always support the functionality Serai
needs, yet won't disadvantage other use cases when possible.
- `message-queue`: An ordered message server so services can talk to each other,
even when the other is offline.
- `processor`: A generic chain processor to process data for Serai and process
events from Serai, executing transactions as expected and needed.
- `coordinator`: A service to manage processors and communicate over a P2P
network with other validators.
- `substrate`: Substrate crates used to instantiate the Serai network.
- `orchestration`: Dockerfiles and scripts to deploy a Serai node/test
environment.
- `tests`: Tests for various crates. Generally, `crate/src/tests` is used, or
`crate/tests`, yet any tests requiring crates' binaries are placed here.
### Security
Serai hosts a bug bounty program via
[Immunefi](https://immunefi.com/bounty/serai/). For in-scope critical
vulnerabilities, we will reward whitehats with up to $30,000.
Anything not in-scope should still be submitted through Immunefi, with rewards
issued at the discretion of the Immunefi program managers.
### Links
- [Website](https://serai.exchange/): https://serai.exchange/
- [Immunefi](https://immunefi.com/bounty/serai/): https://immunefi.com/bounty/serai/
- [Twitter](https://twitter.com/SeraiDEX): https://twitter.com/SeraiDEX
- [Mastodon](https://cryptodon.lol/@serai): https://cryptodon.lol/@serai
- [Discord](https://discord.gg/mpEUtJR3vz): https://discord.gg/mpEUtJR3vz
- [Matrix](https://matrix.to/#/#serai:matrix.org): https://matrix.to/#/#serai:matrix.org
- [Reddit](https://www.reddit.com/r/SeraiDEX/): https://www.reddit.com/r/SeraiDEX/
- [Telegram](https://t.me/SeraiDEX): https://t.me/SeraiDEX


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Cypher Stack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,6 @@
# Cypher Stack /coins/bitcoin Audit, August 2023
This audit was over the /coins/bitcoin folder. It encompasses everything up to commit
5121ca75199dff7bd34230880a1fdd793012068c.
Please see https://github.com/cypherstack/serai-btc-audit for provenance.

Binary file not shown.


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2023 Cypher Stack
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,7 @@
# Cypher Stack /crypto Audit, March 2023
This audit was over the /crypto folder, excluding the ed448 crate, the `Ed448`
ciphersuite in the ciphersuite crate, and the `dleq/experimental` feature. It
encompasses everything up to commit 669d2dbffc1dafb82a09d9419ea182667115df06.
Please see https://github.com/cypherstack/serai-audit for provenance.

68
coins/bitcoin/Cargo.toml Normal file

@@ -0,0 +1,68 @@
[package]
name = "bitcoin-serai"
version = "0.3.0"
description = "A Bitcoin library for FROST-signing transactions"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/bitcoin"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Vrx <vrx00@proton.me>"]
edition = "2021"
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
std-shims = { version = "0.1.1", path = "../../common/std-shims", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
zeroize = { version = "^1.5", default-features = false }
rand_core = { version = "0.6", default-features = false }
bitcoin = { version = "0.31", default-features = false, features = ["no-std"] }
k256 = { version = "^0.13.1", default-features = false, features = ["arithmetic", "bits"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["secp256k1"], optional = true }
hex = { version = "0.4", default-features = false, optional = true }
serde = { version = "1", default-features = false, features = ["derive"], optional = true }
serde_json = { version = "1", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls", "basic-auth"], optional = true }
[dev-dependencies]
secp256k1 = { version = "0.28", default-features = false, features = ["std"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
tokio = { version = "1", features = ["macros"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"rand_core/std",
"bitcoin/std",
"bitcoin/serde",
"k256/std",
"transcript/std",
"frost",
"hex/std",
"serde/std",
"serde_json/std",
"simple-request",
]
hazmat = []
default = ["std"]

21
coins/bitcoin/LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

4
coins/bitcoin/README.md Normal file

@@ -0,0 +1,4 @@
# bitcoin-serai
An application of [modular-frost](https://docs.rs/modular-frost) to Bitcoin
transactions, enabling extremely-efficient multisigs.

166
coins/bitcoin/src/crypto.rs Normal file

@@ -0,0 +1,166 @@
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
ProjectivePoint,
};
use bitcoin::key::XOnlyPublicKey;
/// Get the x coordinate of a non-infinity, even point. Panics on invalid input.
pub fn x(key: &ProjectivePoint) -> [u8; 32] {
let encoded = key.to_encoded_point(true);
assert_eq!(encoded.tag(), Tag::CompressedEvenY, "x coordinate of odd key");
(*encoded.x().expect("point at infinity")).into()
}
/// Convert a non-infinity even point to a XOnlyPublicKey. Panics on invalid input.
pub fn x_only(key: &ProjectivePoint) -> XOnlyPublicKey {
XOnlyPublicKey::from_slice(&x(key)).expect("x_only was passed a point which was infinity or odd")
}
/// Make a point even by adding the generator until it is even.
///
/// Returns the even point and the amount of additions required.
#[cfg(any(feature = "std", feature = "hazmat"))]
pub fn make_even(mut key: ProjectivePoint) -> (ProjectivePoint, u64) {
let mut c = 0;
while key.to_encoded_point(true).tag() == Tag::CompressedOddY {
key += ProjectivePoint::GENERATOR;
c += 1;
}
(key, c)
}
#[cfg(feature = "std")]
mod frost_crypto {
use core::fmt::Debug;
use std_shims::{vec::Vec, io};
use zeroize::Zeroizing;
use rand_core::{RngCore, CryptoRng};
use bitcoin::hashes::{HashEngine, Hash, sha256::Hash as Sha256};
use transcript::Transcript;
use k256::{elliptic_curve::ops::Reduce, U256, Scalar};
use frost::{
curve::{Ciphersuite, Secp256k1},
Participant, ThresholdKeys, ThresholdView, FrostError,
algorithm::{Hram as HramTrait, Algorithm, Schnorr as FrostSchnorr},
};
use super::*;
/// A BIP-340 compatible HRAm for use with the modular-frost Schnorr Algorithm.
///
/// If passed an odd nonce, it will have the generator added until it is even.
///
/// If the key is odd, this will panic.
#[derive(Clone, Copy, Debug)]
pub struct Hram;
#[allow(non_snake_case)]
impl HramTrait<Secp256k1> for Hram {
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
// Convert the nonce to be even
let (R, _) = make_even(*R);
const TAG_HASH: Sha256 = Sha256::const_hash(b"BIP0340/challenge");
let mut data = Sha256::engine();
data.input(TAG_HASH.as_ref());
data.input(TAG_HASH.as_ref());
data.input(&x(&R));
data.input(&x(A));
data.input(m);
Scalar::reduce(U256::from_be_slice(Sha256::from_engine(data).as_ref()))
}
}
/// BIP-340 Schnorr signature algorithm.
///
/// This must be used with a ThresholdKeys whose group key is even. If it is odd, this will panic.
#[derive(Clone)]
pub struct Schnorr<T: Sync + Clone + Debug + Transcript>(FrostSchnorr<Secp256k1, T, Hram>);
impl<T: Sync + Clone + Debug + Transcript> Schnorr<T> {
/// Construct a Schnorr algorithm continuing the specified transcript.
pub fn new(transcript: T) -> Schnorr<T> {
Schnorr(FrostSchnorr::new(transcript))
}
}
impl<T: Sync + Clone + Debug + Transcript> Algorithm<Secp256k1> for Schnorr<T> {
type Transcript = T;
type Addendum = ();
type Signature = [u8; 64];
fn transcript(&mut self) -> &mut Self::Transcript {
self.0.transcript()
}
fn nonces(&self) -> Vec<Vec<ProjectivePoint>> {
self.0.nonces()
}
fn preprocess_addendum<R: RngCore + CryptoRng>(
&mut self,
rng: &mut R,
keys: &ThresholdKeys<Secp256k1>,
) {
self.0.preprocess_addendum(rng, keys)
}
fn read_addendum<R: io::Read>(&self, reader: &mut R) -> io::Result<Self::Addendum> {
self.0.read_addendum(reader)
}
fn process_addendum(
&mut self,
view: &ThresholdView<Secp256k1>,
i: Participant,
addendum: (),
) -> Result<(), FrostError> {
self.0.process_addendum(view, i, addendum)
}
fn sign_share(
&mut self,
params: &ThresholdView<Secp256k1>,
nonce_sums: &[Vec<<Secp256k1 as Ciphersuite>::G>],
nonces: Vec<Zeroizing<<Secp256k1 as Ciphersuite>::F>>,
msg: &[u8],
) -> <Secp256k1 as Ciphersuite>::F {
self.0.sign_share(params, nonce_sums, nonces, msg)
}
#[must_use]
fn verify(
&self,
group_key: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
sum: Scalar,
) -> Option<Self::Signature> {
self.0.verify(group_key, nonces, sum).map(|mut sig| {
// Make the R of the final signature even
let offset;
(sig.R, offset) = make_even(sig.R);
// s = r + cx. Since we added to the r, add to s
sig.s += Scalar::from(offset);
// Convert to a Bitcoin signature by dropping the byte for the point's sign bit
sig.serialize()[1 ..].try_into().unwrap()
})
}
fn verify_share(
&self,
verification_share: ProjectivePoint,
nonces: &[Vec<ProjectivePoint>],
share: Scalar,
) -> Result<Vec<(Scalar, ProjectivePoint)>, ()> {
self.0.verify_share(verification_share, nonces, share)
}
}
}
#[cfg(feature = "std")]
pub use frost_crypto::*;
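A short note on why the offset handling in `verify` above is sound, reconstructed from the code comments using standard Schnorr notation: the underlying FROST verification checks s·G = R + e·A, and `Hram` hashes the evened nonce, so the challenge e is already bound to R' = R + c·G, where c is the number of generator additions performed by `make_even`. Adding c to s keeps the equation balanced for the evened nonce:
  R' = R + c·G, s' = s + c
  s'·G = s·G + c·G = (R + e·A) + c·G = R' + e·A
which is exactly the BIP-340 relation for the final signature (R', s').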

24
coins/bitcoin/src/lib.rs Normal file

@@ -0,0 +1,24 @@
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
extern crate alloc;
/// The bitcoin Rust library.
pub use bitcoin;
/// Cryptographic helpers.
#[cfg(feature = "hazmat")]
pub mod crypto;
#[cfg(not(feature = "hazmat"))]
pub(crate) mod crypto;
/// Wallet functionality to create transactions.
pub mod wallet;
/// A minimal asynchronous Bitcoin RPC client.
#[cfg(feature = "std")]
pub mod rpc;
#[cfg(test)]
mod tests;

226
coins/bitcoin/src/rpc.rs Normal file

@@ -0,0 +1,226 @@
use core::fmt::Debug;
use std::collections::HashSet;
use thiserror::Error;
use serde::{Deserialize, de::DeserializeOwned};
use serde_json::json;
use simple_request::{hyper, Request, Client};
use bitcoin::{
hashes::{Hash, hex::FromHex},
consensus::encode,
Txid, Transaction, BlockHash, Block,
};
#[derive(Clone, PartialEq, Eq, Debug, Deserialize)]
pub struct Error {
code: isize,
message: String,
}
#[derive(Clone, Debug, Deserialize)]
#[serde(untagged)]
enum RpcResponse<T> {
Ok { result: T },
Err { error: Error },
}
/// A minimal asynchronous Bitcoin RPC client.
#[derive(Clone, Debug)]
pub struct Rpc {
client: Client,
url: String,
}
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum RpcError {
#[error("couldn't connect to node")]
ConnectionError,
#[error("request had an error: {0:?}")]
RequestError(Error),
#[error("node replied with invalid JSON")]
InvalidJson(serde_json::error::Category),
#[error("node sent an invalid response ({0})")]
InvalidResponse(&'static str),
#[error("node was missing expected methods")]
MissingMethods(HashSet<&'static str>),
}
impl Rpc {
/// Create a new connection to a Bitcoin RPC.
///
/// An RPC call is performed to ensure the node is reachable (and that an invalid URL wasn't
/// provided).
///
/// Additionally, a set of expected methods is checked to be offered by the Bitcoin RPC. If these
/// methods aren't provided, an error with the missing methods is returned. This ensures all RPC
/// routes explicitly provided by this library are at least possible.
///
/// Each individual RPC route may still fail at time-of-call, regardless of the arguments
/// provided to this library, if the RPC has an incompatible argument layout. That is not checked
/// at time of RPC creation.
pub async fn new(url: String) -> Result<Rpc, RpcError> {
let rpc = Rpc { client: Client::with_connection_pool(), url };
// Make an RPC request to verify the node is reachable and sane
let res: String = rpc.rpc_call("help", json!([])).await?;
// Verify all methods we expect are present
// If we had a more expanded RPC, due to differences in RPC versions, it wouldn't make sense to
// error if all methods weren't present
// We only provide a very minimal set of methods which have been largely consistent, hence why
// this is sane
let mut expected_methods = HashSet::from([
"help",
"getblockcount",
"getblockhash",
"getblockheader",
"getblock",
"sendrawtransaction",
"getrawtransaction",
]);
for line in res.split('\n') {
// This doesn't check if the arguments are as expected
// This is due to Bitcoin supporting a large amount of optional arguments, which
// occasionally change, with their own mechanism of text documentation, making matching off
// it a quite involved task
// Instead, once we've confirmed the methods are present, we assume our arguments are aligned
// Else we'll error at time of call
if expected_methods.remove(line.split(' ').next().unwrap_or("")) &&
expected_methods.is_empty()
{
break;
}
}
if !expected_methods.is_empty() {
Err(RpcError::MissingMethods(expected_methods))?;
};
Ok(rpc)
}
/// Perform an arbitrary RPC call.
pub async fn rpc_call<Response: DeserializeOwned + Debug>(
&self,
method: &str,
params: serde_json::Value,
) -> Result<Response, RpcError> {
let mut request = Request::from(
hyper::Request::post(&self.url)
.header("Content-Type", "application/json")
.body(
serde_json::to_vec(&json!({ "jsonrpc": "2.0", "method": method, "params": params }))
.unwrap()
.into(),
)
.unwrap(),
);
request.with_basic_auth();
let mut res = self
.client
.request(request)
.await
.map_err(|_| RpcError::ConnectionError)?
.body()
.await
.map_err(|_| RpcError::ConnectionError)?;
let res: RpcResponse<Response> =
serde_json::from_reader(&mut res).map_err(|e| RpcError::InvalidJson(e.classify()))?;
match res {
RpcResponse::Ok { result } => Ok(result),
RpcResponse::Err { error } => Err(RpcError::RequestError(error)),
}
}
/// Get the latest block's number.
///
/// The genesis block's 'number' is zero. They increment from there.
pub async fn get_latest_block_number(&self) -> Result<usize, RpcError> {
// getblockcount doesn't return the amount of blocks on the current chain, yet the "height"
// of the current chain. The "height" of the current chain is defined as the "height" of the
// tip block of the current chain. The "height" of a block is defined as the amount of blocks
// present when the block was created. Accordingly, the genesis block has height 0, and
// getblockcount will return 0 when the genesis block is the only block, despite there being one block.
self.rpc_call("getblockcount", json!([])).await
}
/// Get the hash of a block by the block's number.
pub async fn get_block_hash(&self, number: usize) -> Result<[u8; 32], RpcError> {
let mut hash = self
.rpc_call::<BlockHash>("getblockhash", json!([number]))
.await?
.as_raw_hash()
.to_byte_array();
// bitcoin stores the inner bytes in reverse order.
hash.reverse();
Ok(hash)
}
/// Get a block's number by its hash.
pub async fn get_block_number(&self, hash: &[u8; 32]) -> Result<usize, RpcError> {
#[derive(Deserialize, Debug)]
struct Number {
height: usize,
}
Ok(self.rpc_call::<Number>("getblockheader", json!([hex::encode(hash)])).await?.height)
}
/// Get a block by its hash.
pub async fn get_block(&self, hash: &[u8; 32]) -> Result<Block, RpcError> {
let hex = self.rpc_call::<String>("getblock", json!([hex::encode(hash), 0])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the block"))?;
let block: Block = encode::deserialize(&bytes)
.map_err(|_| RpcError::InvalidResponse("node sent an improperly serialized block"))?;
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
if hash != &block_hash {
Err(RpcError::InvalidResponse("node replied with a different block"))?;
}
Ok(block)
}
/// Publish a transaction.
pub async fn send_raw_transaction(&self, tx: &Transaction) -> Result<Txid, RpcError> {
let txid = match self.rpc_call("sendrawtransaction", json!([encode::serialize_hex(tx)])).await {
Ok(txid) => txid,
Err(e) => {
// A const from Bitcoin's bitcoin/src/rpc/protocol.h
const RPC_VERIFY_ALREADY_IN_CHAIN: isize = -27;
// If this was already successfully published, consider this having succeeded
if let RpcError::RequestError(Error { code, .. }) = e {
if code == RPC_VERIFY_ALREADY_IN_CHAIN {
return Ok(tx.txid());
}
}
Err(e)?
}
};
if txid != tx.txid() {
Err(RpcError::InvalidResponse("returned TX ID inequals calculated TX ID"))?;
}
Ok(txid)
}
/// Get a transaction by its hash.
pub async fn get_transaction(&self, hash: &[u8; 32]) -> Result<Transaction, RpcError> {
let hex = self.rpc_call::<String>("getrawtransaction", json!([hex::encode(hash)])).await?;
let bytes: Vec<u8> = FromHex::from_hex(&hex)
.map_err(|_| RpcError::InvalidResponse("node didn't use hex to encode the transaction"))?;
let tx: Transaction = encode::deserialize(&bytes)
.map_err(|_| RpcError::InvalidResponse("node sent an improperly serialized transaction"))?;
let mut tx_hash = *tx.txid().as_raw_hash().as_byte_array();
tx_hash.reverse();
if hash != &tx_hash {
Err(RpcError::InvalidResponse("node replied with a different transaction"))?;
}
Ok(tx)
}
}
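A minimal usage sketch of the RPC above (a hypothetical helper, not part of the library; the URL, credentials, and surrounding setup are placeholder assumptions):
async fn rpc_usage_sketch() -> Result<(), bitcoin_serai::rpc::RpcError> {
  use bitcoin_serai::rpc::Rpc;
  // Placeholder URL/credentials for a local node
  let rpc = Rpc::new("http://user:pass@127.0.0.1:8332".to_string()).await?;
  // The genesis block is number 0; getblockcount returns the height of the chain tip
  let tip = rpc.get_latest_block_number().await?;
  // Round-trip the tip through its hash
  let hash = rpc.get_block_hash(tip).await?;
  assert_eq!(rpc.get_block_number(&hash).await?, tip);
  // get_block re-hashes the response and errors if it doesn't match the requested hash
  let _block = rpc.get_block(&hash).await?;
  Ok(())
}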

View File

@@ -0,0 +1,46 @@
use rand_core::OsRng;
use secp256k1::{Secp256k1 as BContext, Message, schnorr::Signature};
use k256::Scalar;
use transcript::{Transcript, RecommendedTranscript};
use frost::{
curve::Secp256k1,
Participant,
tests::{algorithm_machines, key_gen, sign},
};
use crate::{
bitcoin::hashes::{Hash as HashTrait, sha256::Hash},
crypto::{x_only, make_even, Schnorr},
};
#[test]
fn test_algorithm() {
let mut keys = key_gen::<_, Secp256k1>(&mut OsRng);
const MESSAGE: &[u8] = b"Hello, World!";
for keys in keys.values_mut() {
let (_, offset) = make_even(keys.group_key());
*keys = keys.offset(Scalar::from(offset));
}
let algo =
Schnorr::<RecommendedTranscript>::new(RecommendedTranscript::new(b"bitcoin-serai sign test"));
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
Hash::hash(MESSAGE).as_ref(),
);
BContext::new()
.verify_schnorr(
&Signature::from_slice(&sig)
.expect("couldn't convert produced signature to secp256k1::Signature"),
&Message::from(Hash::hash(MESSAGE)),
&x_only(&keys[&Participant::new(1).unwrap()].group_key()),
)
.unwrap()
}

View File

@@ -0,0 +1 @@
mod crypto;

View File

@@ -0,0 +1,188 @@
use std_shims::{
vec::Vec,
collections::HashMap,
io::{self, Write},
};
#[cfg(feature = "std")]
use std_shims::io::Read;
use k256::{
elliptic_curve::sec1::{Tag, ToEncodedPoint},
Scalar, ProjectivePoint,
};
#[cfg(feature = "std")]
use frost::{
curve::{Ciphersuite, Secp256k1},
ThresholdKeys,
};
use bitcoin::{
consensus::encode::serialize, key::TweakedPublicKey, address::Payload, OutPoint, ScriptBuf,
TxOut, Transaction, Block,
};
#[cfg(feature = "std")]
use bitcoin::consensus::encode::Decodable;
use crate::crypto::x_only;
#[cfg(feature = "std")]
use crate::crypto::make_even;
#[cfg(feature = "std")]
mod send;
#[cfg(feature = "std")]
pub use send::*;
/// Tweak keys to ensure they're usable with Bitcoin.
///
/// Taproot keys, which these keys are used as, must have an even y-coordinate. This offsets the
/// keys until the group key is even.
#[cfg(feature = "std")]
pub fn tweak_keys(keys: &ThresholdKeys<Secp256k1>) -> ThresholdKeys<Secp256k1> {
let (_, offset) = make_even(keys.group_key());
keys.offset(Scalar::from(offset))
}
/// Return the Taproot address payload for a public key.
///
/// If the key is odd, this will return None.
pub fn address_payload(key: ProjectivePoint) -> Option<Payload> {
if key.to_encoded_point(true).tag() != Tag::CompressedEvenY {
return None;
}
Some(Payload::p2tr_tweaked(TweakedPublicKey::dangerous_assume_tweaked(x_only(&key))))
}
/// A spendable output.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct ReceivedOutput {
// The scalar offset to obtain the key usable to spend this output.
offset: Scalar,
// The output to spend.
output: TxOut,
// The TX ID and vout of the output to spend.
outpoint: OutPoint,
}
impl ReceivedOutput {
/// The offset for this output.
pub fn offset(&self) -> Scalar {
self.offset
}
/// The Bitcoin output for this output.
pub fn output(&self) -> &TxOut {
&self.output
}
/// The outpoint for this output.
pub fn outpoint(&self) -> &OutPoint {
&self.outpoint
}
/// The value of this output.
pub fn value(&self) -> u64 {
self.output.value.to_sat()
}
/// Read a ReceivedOutput from a generic satisfying Read.
#[cfg(feature = "std")]
pub fn read<R: Read>(r: &mut R) -> io::Result<ReceivedOutput> {
Ok(ReceivedOutput {
offset: Secp256k1::read_F(r)?,
output: TxOut::consensus_decode(r).map_err(|_| io::Error::other("invalid TxOut"))?,
outpoint: OutPoint::consensus_decode(r).map_err(|_| io::Error::other("invalid OutPoint"))?,
})
}
/// Write a ReceivedOutput to a generic satisfying Write.
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
w.write_all(&self.offset.to_bytes())?;
w.write_all(&serialize(&self.output))?;
w.write_all(&serialize(&self.outpoint))
}
/// Serialize a ReceivedOutput to a `Vec<u8>`.
pub fn serialize(&self) -> Vec<u8> {
let mut res = Vec::new();
self.write(&mut res).unwrap();
res
}
}
/// A transaction scanner capable of being used with HDKD schemes.
#[derive(Clone, Debug)]
pub struct Scanner {
key: ProjectivePoint,
scripts: HashMap<ScriptBuf, Scalar>,
}
impl Scanner {
/// Construct a Scanner for a key.
///
/// Returns None if this key can't be scanned for.
pub fn new(key: ProjectivePoint) -> Option<Scanner> {
let mut scripts = HashMap::new();
scripts.insert(address_payload(key)?.script_pubkey(), Scalar::ZERO);
Some(Scanner { key, scripts })
}
/// Register an offset to scan for.
///
/// Due to Bitcoin's requirement that points are even, not every offset may be used.
/// If an offset isn't usable, it will be incremented until it is. If the resulting offset is
/// already present, None is returned. Else, Some is returned with the offset actually used.
///
/// This means offsets are surjective, not bijective, and the order offsets are registered in
/// may determine the validity of future offsets.
pub fn register_offset(&mut self, mut offset: Scalar) -> Option<Scalar> {
// This loop will terminate as soon as an even point is found, with any point having a ~50%
// chance of being even
// That means this should terminate within a very small amount of iterations
loop {
match address_payload(self.key + (ProjectivePoint::GENERATOR * offset)) {
Some(address) => {
let script = address.script_pubkey();
if self.scripts.contains_key(&script) {
None?;
}
self.scripts.insert(script, offset);
return Some(offset);
}
None => offset += Scalar::ONE,
}
}
}
/// Scan a transaction.
pub fn scan_transaction(&self, tx: &Transaction) -> Vec<ReceivedOutput> {
let mut res = Vec::new();
for (vout, output) in tx.output.iter().enumerate() {
// If the vout index doesn't fit within a u32, stop scanning outputs
let Ok(vout) = u32::try_from(vout) else { break };
if let Some(offset) = self.scripts.get(&output.script_pubkey) {
res.push(ReceivedOutput {
offset: *offset,
output: output.clone(),
outpoint: OutPoint::new(tx.txid(), vout),
});
}
}
res
}
/// Scan a block.
///
/// This will also scan the coinbase transaction, whose outputs are bound by maturity. If received outputs
/// must be immediately spendable, a post-processing pass is needed to remove those outputs.
/// Alternatively, scan_transaction can be called on `block.txdata[1 ..]`.
pub fn scan_block(&self, block: &Block) -> Vec<ReceivedOutput> {
let mut res = Vec::new();
for tx in &block.txdata {
res.extend(self.scan_transaction(tx));
}
res
}
}
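A sketch of how tweak_keys, Scanner, and register_offset compose (a hypothetical helper, not part of the library; it assumes FROST keys are in hand, a block was fetched via the RPC, and crate names follow this repository's renames, e.g. frost = modular-frost):
fn wallet_scan_sketch(
  keys: &frost::ThresholdKeys<frost::curve::Secp256k1>,
  block: &bitcoin_serai::bitcoin::Block,
) -> Vec<bitcoin_serai::wallet::ReceivedOutput> {
  use bitcoin_serai::wallet::{tweak_keys, Scanner};
  // Offset the keys until the group key is even, as Taproot requires
  let keys = tweak_keys(keys);
  // A Scanner is creatable for any even key
  let mut scanner = Scanner::new(keys.group_key()).unwrap();
  // The returned offset is the one actually used; it may have been incremented until the
  // resulting key was even
  let _offset = scanner.register_offset(k256::Scalar::ONE).unwrap();
  // Scans every transaction, including the coinbase (whose outputs are bound by maturity)
  scanner.scan_block(block)
}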

View File

@@ -0,0 +1,446 @@
use std_shims::{
io::{self, Read},
collections::HashMap,
};
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{elliptic_curve::sec1::ToEncodedPoint, Scalar};
use frost::{curve::Secp256k1, Participant, ThresholdKeys, FrostError, sign::*};
use bitcoin::{
hashes::Hash,
sighash::{TapSighashType, SighashCache, Prevouts},
absolute::LockTime,
script::{PushBytesBuf, ScriptBuf},
transaction::{Version, Transaction},
OutPoint, Sequence, Witness, TxIn, Amount, TxOut, Address,
};
use crate::{
crypto::Schnorr,
wallet::{ReceivedOutput, address_payload},
};
#[rustfmt::skip]
// https://github.com/bitcoin/bitcoin/blob/306ccd4927a2efe325c8d84be1bdb79edeb29b04/src/policy/policy.cpp#L26-L63
// As the above notes, a lower amount may not be considered dust if contained in a SegWit output
// This doesn't bother to distinguish the cases due to how marginal these values are, and
// because it isn't worth the complexity to implement the differentiation
pub const DUST: u64 = 546;
#[derive(Clone, PartialEq, Eq, Debug, Error)]
pub enum TransactionError {
#[error("no inputs were specified")]
NoInputs,
#[error("no outputs were created")]
NoOutputs,
#[error("a specified payment's amount was less than bitcoin's required minimum")]
DustPayment,
#[error("too much data was specified")]
TooMuchData,
#[error("fee was too low to pass the default minimum fee rate")]
TooLowFee,
#[error("not enough funds for these payments")]
NotEnoughFunds,
#[error("transaction was too large")]
TooLargeTransaction,
}
/// A signable transaction, clone-able across attempts.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct SignableTransaction {
tx: Transaction,
offsets: Vec<Scalar>,
prevouts: Vec<TxOut>,
needed_fee: u64,
}
impl SignableTransaction {
fn calculate_weight(inputs: usize, payments: &[(Address, u64)], change: Option<&Address>) -> u64 {
// Expand this to a full transaction in order to use the bitcoin library's weight function
let mut tx = Transaction {
version: Version(2),
lock_time: LockTime::ZERO,
input: vec![
TxIn {
// This is a fixed size
// See https://developer.bitcoin.org/reference/transactions.html#raw-transaction-format
previous_output: OutPoint::default(),
// This is empty for a Taproot spend
script_sig: ScriptBuf::new(),
// This is fixed size, yet we do use Sequence::MAX
sequence: Sequence::MAX,
// Each of our witnesses contains a single 64-byte signature
witness: Witness::from_slice(&[vec![0; 64]])
};
inputs
],
output: payments
.iter()
// The value is a fixed size, so weight-wise any value would do here
// The script pubkey is not of a fixed size and does have to be the actual script used here
.map(|payment| TxOut {
value: Amount::from_sat(payment.1),
script_pubkey: payment.0.script_pubkey(),
})
.collect(),
};
if let Some(change) = change {
// Use a 0 value since we're currently unsure what the change amount will be, and since
// the value is fixed size (so any value could be used here)
tx.output.push(TxOut { value: Amount::ZERO, script_pubkey: change.script_pubkey() });
}
u64::from(tx.weight())
}
/// Returns the fee necessary for this transaction to achieve the fee rate specified at
/// construction.
///
/// The actual fee this transaction will use is `sum(inputs) - sum(outputs)`.
pub fn needed_fee(&self) -> u64 {
self.needed_fee
}
/// Returns the fee this transaction will use.
pub fn fee(&self) -> u64 {
self.prevouts.iter().map(|prevout| prevout.value.to_sat()).sum::<u64>() -
self.tx.output.iter().map(|prevout| prevout.value.to_sat()).sum::<u64>()
}
/// Create a new SignableTransaction.
///
/// If a change address is specified, any leftover funds will be sent to it if the leftover funds
/// exceed the minimum output amount. If a change address isn't specified, all leftover funds
/// will become part of the paid fee.
///
/// If data is specified, an OP_RETURN output will be added with it.
pub fn new(
mut inputs: Vec<ReceivedOutput>,
payments: &[(Address, u64)],
change: Option<&Address>,
data: Option<Vec<u8>>,
fee_per_weight: u64,
) -> Result<SignableTransaction, TransactionError> {
if inputs.is_empty() {
Err(TransactionError::NoInputs)?;
}
if payments.is_empty() && change.is_none() && data.is_none() {
Err(TransactionError::NoOutputs)?;
}
for (_, amount) in payments {
if *amount < DUST {
Err(TransactionError::DustPayment)?;
}
}
if data.as_ref().map_or(0, Vec::len) > 80 {
Err(TransactionError::TooMuchData)?;
}
let input_sat = inputs.iter().map(|input| input.output.value.to_sat()).sum::<u64>();
let offsets = inputs.iter().map(|input| input.offset).collect();
let tx_ins = inputs
.iter()
.map(|input| TxIn {
previous_output: input.outpoint,
script_sig: ScriptBuf::new(),
sequence: Sequence::MAX,
witness: Witness::new(),
})
.collect::<Vec<_>>();
let payment_sat = payments.iter().map(|payment| payment.1).sum::<u64>();
let mut tx_outs = payments
.iter()
.map(|payment| TxOut {
value: Amount::from_sat(payment.1),
script_pubkey: payment.0.script_pubkey(),
})
.collect::<Vec<_>>();
// Add the OP_RETURN output
if let Some(data) = data {
tx_outs.push(TxOut {
value: Amount::ZERO,
script_pubkey: ScriptBuf::new_op_return(
PushBytesBuf::try_from(data)
.expect("data didn't fit into PushBytes depsite being checked"),
),
})
}
let mut weight = Self::calculate_weight(tx_ins.len(), payments, None);
let mut needed_fee = fee_per_weight * weight;
// "Virtual transaction size" is weight ceildiv 4 per
// https://github.com/bitcoin/bips/blob/master/bip-0141.mediawiki
// https://github.com/bitcoin/bitcoin/blob/306ccd4927a2efe325c8d84be1bdb79edeb29b04/
// src/policy/policy.cpp#L295-L298
// implements this as expected
// Technically, it takes whichever is greater: the weight, or the number of signature operations
// multiplied by DEFAULT_BYTES_PER_SIGOP (20)
// We only use 1 signature per input, and our inputs have a weight exceeding 20
// Accordingly, our inputs' weight will always be greater than the cost of the signature ops
let vsize = weight.div_ceil(4);
debug_assert_eq!(
u64::try_from(bitcoin::policy::get_virtual_tx_size(
weight.try_into().unwrap(),
tx_ins.len().try_into().unwrap()
))
.unwrap(),
vsize
);
// Technically, if there isn't change, this TX may still pay enough of a fee to pass the
// minimum fee. Such edge cases aren't worth programming when they go against intent, as the
// specified fee rate is too low to be valid
// bitcoin::policy::DEFAULT_MIN_RELAY_TX_FEE is in sats/kilo-vbyte
if needed_fee < ((u64::from(bitcoin::policy::DEFAULT_MIN_RELAY_TX_FEE) * vsize) / 1000) {
Err(TransactionError::TooLowFee)?;
}
if input_sat < (payment_sat + needed_fee) {
Err(TransactionError::NotEnoughFunds)?;
}
// If there's a change address, check if there's change to give it
if let Some(change) = change {
let weight_with_change = Self::calculate_weight(tx_ins.len(), payments, Some(change));
let fee_with_change = fee_per_weight * weight_with_change;
if let Some(value) = input_sat.checked_sub(payment_sat + fee_with_change) {
if value >= DUST {
tx_outs
.push(TxOut { value: Amount::from_sat(value), script_pubkey: change.script_pubkey() });
weight = weight_with_change;
needed_fee = fee_with_change;
}
}
}
if tx_outs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if weight > u64::from(bitcoin::policy::MAX_STANDARD_TX_WEIGHT) {
Err(TransactionError::TooLargeTransaction)?;
}
Ok(SignableTransaction {
tx: Transaction {
version: Version(2),
lock_time: LockTime::ZERO,
input: tx_ins,
output: tx_outs,
},
offsets,
prevouts: inputs.drain(..).map(|input| input.output).collect(),
needed_fee,
})
}
/// Returns the TX ID of the transaction this will create.
pub fn txid(&self) -> [u8; 32] {
let mut res = self.tx.txid().to_byte_array();
res.reverse();
res
}
/// Returns the outputs this transaction will create.
pub fn outputs(&self) -> &[TxOut] {
&self.tx.output
}
/// Create a multisig machine for this transaction.
///
/// Returns None if the wrong keys are used.
pub fn multisig(
self,
keys: &ThresholdKeys<Secp256k1>,
mut transcript: RecommendedTranscript,
) -> Option<TransactionMachine> {
transcript.domain_separate(b"bitcoin_transaction");
transcript.append_message(b"root_key", keys.group_key().to_encoded_point(true).as_bytes());
// Transcript the inputs and outputs
let tx = &self.tx;
for input in &tx.input {
transcript.append_message(b"input_hash", input.previous_output.txid);
transcript.append_message(b"input_output_index", input.previous_output.vout.to_le_bytes());
}
for payment in &tx.output {
transcript.append_message(b"output_script", payment.script_pubkey.as_bytes());
transcript.append_message(b"output_amount", payment.value.to_sat().to_le_bytes());
}
let mut sigs = vec![];
for i in 0 .. tx.input.len() {
let mut transcript = transcript.clone();
// This unwrap is safe since any transaction with this many inputs would violate the maximum
// standard transaction size, which this library errors on at creation
transcript.append_message(b"signing_input", u32::try_from(i).unwrap().to_le_bytes());
let offset = keys.clone().offset(self.offsets[i]);
if address_payload(offset.group_key())?.script_pubkey() != self.prevouts[i].script_pubkey {
None?;
}
sigs.push(AlgorithmMachine::new(
Schnorr::new(transcript),
keys.clone().offset(self.offsets[i]),
));
}
Some(TransactionMachine { tx: self, sigs })
}
}
/// A FROST signing machine to produce a Bitcoin transaction.
///
/// This does not support caching its preprocess. When sign is called, the message must be empty.
/// This will panic if either `cache` is called or the message isn't empty.
pub struct TransactionMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl PreprocessMachine for TransactionMachine {
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type Signature = Transaction;
type SignMachine = TransactionSignMachine;
fn preprocess<R: RngCore + CryptoRng>(
mut self,
rng: &mut R,
) -> (Self::SignMachine, Self::Preprocess) {
let mut preprocesses = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.map(|sig| {
let (sig, preprocess) = sig.preprocess(rng);
preprocesses.push(preprocess);
sig
})
.collect();
(TransactionSignMachine { tx: self.tx, sigs }, preprocesses)
}
}
pub struct TransactionSignMachine {
tx: SignableTransaction,
sigs: Vec<AlgorithmSignMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignMachine<Transaction> for TransactionSignMachine {
type Params = ();
type Keys = ThresholdKeys<Secp256k1>;
type Preprocess = Vec<Preprocess<Secp256k1, ()>>;
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
type SignatureMachine = TransactionSignatureMachine;
fn cache(self) -> CachedPreprocess {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn from_cache(
(): (),
_: ThresholdKeys<Secp256k1>,
_: CachedPreprocess,
) -> (Self, Self::Preprocess) {
unimplemented!(
"Bitcoin transactions don't support caching their preprocesses due to {}",
"being already bound to a specific transaction"
);
}
fn read_preprocess<R: Read>(&self, reader: &mut R) -> io::Result<Self::Preprocess> {
self.sigs.iter().map(|sig| sig.read_preprocess(reader)).collect()
}
fn sign(
mut self,
commitments: HashMap<Participant, Self::Preprocess>,
msg: &[u8],
) -> Result<(TransactionSignatureMachine, Self::SignatureShare), FrostError> {
if !msg.is_empty() {
panic!("message was passed to the TransactionMachine when it generates its own");
}
let commitments = (0 .. self.sigs.len())
.map(|c| {
commitments
.iter()
.map(|(l, commitments)| (*l, commitments[c].clone()))
.collect::<HashMap<_, _>>()
})
.collect::<Vec<_>>();
let mut cache = SighashCache::new(&self.tx.tx);
// Sign committing to all inputs
let prevouts = Prevouts::All(&self.tx.prevouts);
let mut shares = Vec::with_capacity(self.sigs.len());
let sigs = self
.sigs
.drain(..)
.enumerate()
.map(|(i, sig)| {
let (sig, share) = sig.sign(
commitments[i].clone(),
cache
.taproot_key_spend_signature_hash(i, &prevouts, TapSighashType::Default)
// This should never happen since the inputs align with the TX the cache was
// constructed with, and because i is always < prevouts.len()
.expect("taproot_key_spend_signature_hash failed to return a hash")
.as_ref(),
)?;
shares.push(share);
Ok(sig)
})
.collect::<Result<_, _>>()?;
Ok((TransactionSignatureMachine { tx: self.tx.tx, sigs }, shares))
}
}
pub struct TransactionSignatureMachine {
tx: Transaction,
sigs: Vec<AlgorithmSignatureMachine<Secp256k1, Schnorr<RecommendedTranscript>>>,
}
impl SignatureMachine<Transaction> for TransactionSignatureMachine {
type SignatureShare = Vec<SignatureShare<Secp256k1>>;
fn read_share<R: Read>(&self, reader: &mut R) -> io::Result<Self::SignatureShare> {
self.sigs.iter().map(|sig| sig.read_share(reader)).collect()
}
fn complete(
mut self,
mut shares: HashMap<Participant, Self::SignatureShare>,
) -> Result<Transaction, FrostError> {
for (input, schnorr) in self.tx.input.iter_mut().zip(self.sigs.drain(..)) {
let sig = schnorr.complete(
shares.iter_mut().map(|(l, shares)| (*l, shares.remove(0))).collect::<HashMap<_, _>>(),
)?;
let mut witness = Witness::new();
witness.push(sig);
input.witness = witness;
}
Ok(self.tx)
}
}
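As a worked example of the fee arithmetic above (a sketch with illustrative numbers, not measurements from a real transaction):
fn fee_arithmetic_sketch() {
  // A hypothetical transaction weighing 600 weight units, at 2 satoshis per weight unit
  let weight: u64 = 600;
  let fee_per_weight: u64 = 2;
  let needed_fee = fee_per_weight * weight; // 1200 satoshis
  // Virtual size is the weight divided by 4, rounded up
  let vsize = weight.div_ceil(4); // 150 vbytes
  // The default minimum relay fee is 1000 satoshis per kilo-vbyte, so 150 satoshis here
  let min_fee = (1000 * vsize) / 1000;
  // This fee rate passes the TooLowFee check
  assert!(needed_fee >= min_fee);
}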

View File

@@ -0,0 +1,25 @@
use bitcoin_serai::{bitcoin::hashes::Hash as HashTrait, rpc::RpcError};
mod runner;
use runner::rpc;
async_sequential! {
async fn test_rpc() {
let rpc = rpc().await;
// Test get_latest_block_number and get_block_hash by round tripping them
let latest = rpc.get_latest_block_number().await.unwrap();
let hash = rpc.get_block_hash(latest).await.unwrap();
assert_eq!(rpc.get_block_number(&hash).await.unwrap(), latest);
// Test this actually is the latest block number by checking that asking for the next block errors
assert!(matches!(rpc.get_block_hash(latest + 1).await, Err(RpcError::RequestError(_))));
// Test get_block by checking the received block's hash matches the request
let block = rpc.get_block(&hash).await.unwrap();
// Hashes are stored in reverse, a quirk inherited from Satoshi's original client
let mut block_hash = *block.block_hash().as_raw_hash().as_byte_array();
block_hash.reverse();
assert_eq!(hash, block_hash);
}
}

View File

@@ -0,0 +1,48 @@
use std::sync::OnceLock;
use bitcoin_serai::rpc::Rpc;
use tokio::sync::Mutex;
static SEQUENTIAL_CELL: OnceLock<Mutex<()>> = OnceLock::new();
#[allow(non_snake_case)]
pub fn SEQUENTIAL() -> &'static Mutex<()> {
SEQUENTIAL_CELL.get_or_init(|| Mutex::new(()))
}
#[allow(dead_code)]
pub(crate) async fn rpc() -> Rpc {
let rpc = Rpc::new("http://serai:seraidex@127.0.0.1:8332".to_string()).await.unwrap();
// If this node has already been interacted with, clear its chain
if rpc.get_latest_block_number().await.unwrap() > 0 {
rpc
.rpc_call(
"invalidateblock",
serde_json::json!([hex::encode(rpc.get_block_hash(1).await.unwrap())]),
)
.await
.unwrap()
}
rpc
}
#[macro_export]
macro_rules! async_sequential {
($(async fn $name: ident() $body: block)*) => {
$(
#[tokio::test]
async fn $name() {
let guard = runner::SEQUENTIAL().lock().await;
let local = tokio::task::LocalSet::new();
local.run_until(async move {
if let Err(err) = tokio::task::spawn_local(async move { $body }).await {
drop(guard);
Err(err).unwrap()
}
}).await;
}
)*
}
}

View File

@@ -0,0 +1,365 @@
use std::collections::HashMap;
use rand_core::{RngCore, OsRng};
use transcript::{Transcript, RecommendedTranscript};
use k256::{
elliptic_curve::{
group::{ff::Field, Group},
sec1::{Tag, ToEncodedPoint},
},
Scalar, ProjectivePoint,
};
use frost::{
curve::Secp256k1,
Participant, ThresholdKeys,
tests::{THRESHOLD, key_gen, sign_without_caching},
};
use bitcoin_serai::{
bitcoin::{
hashes::Hash as HashTrait,
blockdata::opcodes::all::OP_RETURN,
script::{PushBytesBuf, Instruction, Instructions, Script},
address::NetworkChecked,
OutPoint, Amount, TxOut, Transaction, Network, Address,
},
wallet::{
tweak_keys, address_payload, ReceivedOutput, Scanner, TransactionError, SignableTransaction,
},
rpc::Rpc,
};
mod runner;
use runner::rpc;
const FEE: u64 = 20;
fn is_even(key: ProjectivePoint) -> bool {
key.to_encoded_point(true).tag() == Tag::CompressedEvenY
}
async fn send_and_get_output(rpc: &Rpc, scanner: &Scanner, key: ProjectivePoint) -> ReceivedOutput {
let block_number = rpc.get_latest_block_number().await.unwrap() + 1;
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([
1,
Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap())
]),
)
.await
.unwrap();
// Mine until maturity
rpc
.rpc_call::<Vec<String>>(
"generatetoaddress",
serde_json::json!([100, Address::p2sh(Script::new(), Network::Regtest).unwrap()]),
)
.await
.unwrap();
let block = rpc.get_block(&rpc.get_block_hash(block_number).await.unwrap()).await.unwrap();
let mut outputs = scanner.scan_block(&block);
assert_eq!(outputs, scanner.scan_transaction(&block.txdata[0]));
assert_eq!(outputs.len(), 1);
assert_eq!(outputs[0].outpoint(), &OutPoint::new(block.txdata[0].txid(), 0));
assert_eq!(outputs[0].value(), block.txdata[0].output[0].value.to_sat());
assert_eq!(
ReceivedOutput::read::<&[u8]>(&mut outputs[0].serialize().as_ref()).unwrap(),
outputs[0]
);
outputs.swap_remove(0)
}
fn keys() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, ProjectivePoint) {
let mut keys = key_gen(&mut OsRng);
for keys in keys.values_mut() {
*keys = tweak_keys(keys);
}
let key = keys.values().next().unwrap().group_key();
(keys, key)
}
fn sign(
keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
tx: &SignableTransaction,
) -> Transaction {
let mut machines = HashMap::new();
for i in (1 ..= THRESHOLD).map(|i| Participant::new(i).unwrap()) {
machines.insert(
i,
tx.clone()
.multisig(&keys[&i].clone(), RecommendedTranscript::new(b"bitcoin-serai Test Transaction"))
.unwrap(),
);
}
sign_without_caching(&mut OsRng, machines, &[])
}
#[test]
fn test_tweak_keys() {
let mut even = false;
let mut odd = false;
// Generate keys until we get an even set and an odd set
while !(even && odd) {
let mut keys = key_gen(&mut OsRng).drain().next().unwrap().1;
if is_even(keys.group_key()) {
// Tweaking should do nothing
assert_eq!(tweak_keys(&keys).group_key(), keys.group_key());
even = true;
} else {
let tweaked = tweak_keys(&keys).group_key();
assert_ne!(tweaked, keys.group_key());
// Tweaking should produce an even key
assert!(is_even(tweaked));
// Verify it uses the smallest possible offset
while keys.group_key().to_encoded_point(true).tag() == Tag::CompressedOddY {
keys = keys.offset(Scalar::ONE);
}
assert_eq!(tweaked, keys.group_key());
odd = true;
}
}
}
async_sequential! {
async fn test_scanner() {
// Test Scanners are creatable for even keys.
for _ in 0 .. 128 {
let key = ProjectivePoint::random(&mut OsRng);
assert_eq!(Scanner::new(key).is_some(), is_even(key));
}
let mut key = ProjectivePoint::random(&mut OsRng);
while !is_even(key) {
key += ProjectivePoint::GENERATOR;
}
{
let mut scanner = Scanner::new(key).unwrap();
for _ in 0 .. 128 {
let mut offset = Scalar::random(&mut OsRng);
let registered = scanner.register_offset(offset).unwrap();
// Registering this again should return None
assert!(scanner.register_offset(offset).is_none());
// We can only register offsets resulting in even keys
// Make this even
while !is_even(key + (ProjectivePoint::GENERATOR * offset)) {
offset += Scalar::ONE;
}
// Ensure it matches the registered offset
assert_eq!(registered, offset);
// Assert registering this again fails
assert!(scanner.register_offset(offset).is_none());
}
}
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
assert_eq!(send_and_get_output(&rpc, &scanner, key).await.offset(), Scalar::ZERO);
// Register an offset and test receiving to it
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
assert_eq!(
send_and_get_output(&rpc, &scanner, key + (ProjectivePoint::GENERATOR * offset))
.await
.offset(),
offset
);
}
async fn test_transaction_errors() {
let (_, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let inputs = vec![output];
let addr = || Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap());
let payments = vec![(addr(), 1000)];
assert!(SignableTransaction::new(inputs.clone(), &payments, None, None, FEE).is_ok());
assert_eq!(
SignableTransaction::new(vec![], &payments, None, None, FEE),
Err(TransactionError::NoInputs)
);
// No change
assert!(SignableTransaction::new(inputs.clone(), &[(addr(), 1000)], None, None, FEE).is_ok());
// Consolidation TX
assert!(SignableTransaction::new(inputs.clone(), &[], Some(&addr()), None, FEE).is_ok());
// Data
assert!(SignableTransaction::new(inputs.clone(), &[], None, Some(vec![]), FEE).is_ok());
// No outputs
assert_eq!(
SignableTransaction::new(inputs.clone(), &[], None, None, FEE),
Err(TransactionError::NoOutputs),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), 1)], None, None, FEE),
Err(TransactionError::DustPayment),
);
assert!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 80]), FEE).is_ok()
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &payments, None, Some(vec![0; 81]), FEE),
Err(TransactionError::TooMuchData),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[], Some(&addr()), None, 0),
Err(TransactionError::TooLowFee),
);
assert_eq!(
SignableTransaction::new(inputs.clone(), &[(addr(), inputs[0].value() * 2)], None, None, FEE),
Err(TransactionError::NotEnoughFunds),
);
assert_eq!(
SignableTransaction::new(inputs, &vec![(addr(), 1000); 10000], None, None, FEE),
Err(TransactionError::TooLargeTransaction),
);
}
async fn test_send() {
let (keys, key) = keys();
let rpc = rpc().await;
let mut scanner = Scanner::new(key).unwrap();
// Get inputs, one not offset and one offset
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let offset_key = key + (ProjectivePoint::GENERATOR * offset);
let offset_output = send_and_get_output(&rpc, &scanner, offset_key).await;
assert_eq!(offset_output.offset(), offset);
// Declare payments, change, fee
let payments = [
(Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap()), 1005),
(Address::<NetworkChecked>::new(Network::Regtest, address_payload(offset_key).unwrap()), 1007)
];
let change_offset = scanner.register_offset(Scalar::random(&mut OsRng)).unwrap();
let change_key = key + (ProjectivePoint::GENERATOR * change_offset);
let change_addr =
Address::<NetworkChecked>::new(Network::Regtest, address_payload(change_key).unwrap());
// Create and sign the TX
let tx = SignableTransaction::new(
vec![output.clone(), offset_output.clone()],
&payments,
Some(&change_addr),
None,
FEE
).unwrap();
let needed_fee = tx.needed_fee();
let expected_id = tx.txid();
let tx = sign(&keys, &tx);
assert_eq!(tx.output.len(), 3);
// Ensure we can scan it
let outputs = scanner.scan_transaction(&tx);
for (o, output) in outputs.iter().enumerate() {
assert_eq!(output.outpoint(), &OutPoint::new(tx.txid(), u32::try_from(o).unwrap()));
assert_eq!(&ReceivedOutput::read::<&[u8]>(&mut output.serialize().as_ref()).unwrap(), output);
}
assert_eq!(outputs[0].offset(), Scalar::ZERO);
assert_eq!(outputs[1].offset(), offset);
assert_eq!(outputs[2].offset(), change_offset);
// Make sure the payments were properly created
for ((output, scanned), payment) in tx.output.iter().zip(outputs.iter()).zip(payments.iter()) {
assert_eq!(
output,
&TxOut { script_pubkey: payment.0.script_pubkey(), value: Amount::from_sat(payment.1) },
);
assert_eq!(scanned.value(), payment.1);
}
// Make sure the change is correct
assert_eq!(needed_fee, u64::from(tx.weight()) * FEE);
let input_value = output.value() + offset_output.value();
let output_value = tx.output.iter().map(|output| output.value.to_sat()).sum::<u64>();
assert_eq!(input_value - output_value, needed_fee);
let change_amount =
input_value - payments.iter().map(|payment| payment.1).sum::<u64>() - needed_fee;
assert_eq!(
tx.output[2],
TxOut { script_pubkey: change_addr.script_pubkey(), value: Amount::from_sat(change_amount) },
);
// This also tests send_raw_transaction and get_transaction, which the RPC test can't
// effectively test
rpc.send_raw_transaction(&tx).await.unwrap();
let mut hash = *tx.txid().as_raw_hash().as_byte_array();
hash.reverse();
assert_eq!(tx, rpc.get_transaction(&hash).await.unwrap());
assert_eq!(expected_id, hash);
}
async fn test_data() {
let (keys, key) = keys();
let rpc = rpc().await;
let scanner = Scanner::new(key).unwrap();
let output = send_and_get_output(&rpc, &scanner, key).await;
assert_eq!(output.offset(), Scalar::ZERO);
let data_len = 60 + usize::try_from(OsRng.next_u64() % 21).unwrap();
let mut data = vec![0; data_len];
OsRng.fill_bytes(&mut data);
let tx = sign(
&keys,
&SignableTransaction::new(
vec![output],
&[],
Some(&Address::<NetworkChecked>::new(Network::Regtest, address_payload(key).unwrap())),
Some(data.clone()),
FEE
).unwrap()
);
assert!(tx.output[0].script_pubkey.is_op_return());
let check = |mut instructions: Instructions| {
assert_eq!(instructions.next().unwrap().unwrap(), Instruction::Op(OP_RETURN));
assert_eq!(
instructions.next().unwrap().unwrap(),
Instruction::PushBytes(&PushBytesBuf::try_from(data.clone()).unwrap()),
);
assert!(instructions.next().is_none());
};
check(tx.output[0].script_pubkey.instructions());
check(tx.output[0].script_pubkey.instructions_minimal());
}
}

7
coins/ethereum/.gitignore vendored Normal file
View File

@@ -0,0 +1,7 @@
# Solidity build outputs
cache
artifacts
# Auto-generated ABI files
src/abi/schnorr.rs
src/abi/router.rs

45
coins/ethereum/Cargo.toml Normal file
View File

@@ -0,0 +1,45 @@
[package]
name = "ethereum-serai"
version = "0.1.0"
description = "An Ethereum library supporting Schnorr signing and on-chain verification"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/ethereum"
authors = ["Luke Parker <lukeparker5132@gmail.com>", "Elizabeth Binks <elizabethjbinks@gmail.com>"]
edition = "2021"
publish = false
rust-version = "1.74"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
thiserror = { version = "1", default-features = false }
eyre = { version = "0.6", default-features = false }
sha3 = { version = "0.10", default-features = false, features = ["std"] }
group = { version = "0.13", default-features = false }
k256 = { version = "^0.13.1", default-features = false, features = ["std", "ecdsa"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["secp256k1", "tests"] }
ethers-core = { version = "2", default-features = false }
ethers-providers = { version = "2", default-features = false }
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[build-dependencies]
ethers-contract = { version = "2", default-features = false, features = ["abigen", "providers"] }
[dev-dependencies]
rand_core = { version = "0.6", default-features = false, features = ["std"] }
hex = { version = "0.4", default-features = false, features = ["std"] }
serde = { version = "1", default-features = false, features = ["std"] }
serde_json = { version = "1", default-features = false, features = ["std"] }
sha2 = { version = "0.10", default-features = false, features = ["std"] }
tokio = { version = "1", features = ["macros"] }

15
coins/ethereum/LICENSE Normal file
View File

@@ -0,0 +1,15 @@
AGPL-3.0-only license
Copyright (c) 2022-2023 Luke Parker
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as
published by the Free Software Foundation.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.

9
coins/ethereum/README.md Normal file
View File

@@ -0,0 +1,9 @@
# Ethereum
This package contains Ethereum-related functionality, specifically deploying and
interacting with Serai contracts.
### Dependencies
- solc
- [Foundry](https://github.com/foundry-rs/foundry)

42
coins/ethereum/build.rs Normal file
View File

@@ -0,0 +1,42 @@
use std::process::Command;
use ethers_contract::Abigen;
fn main() {
println!("cargo:rerun-if-changed=contracts/*");
println!("cargo:rerun-if-changed=artifacts/*");
for line in String::from_utf8(Command::new("solc").args(["--version"]).output().unwrap().stdout)
.unwrap()
.lines()
{
if let Some(version) = line.strip_prefix("Version: ") {
let version = version.split('+').next().unwrap();
assert_eq!(version, "0.8.25");
}
}
#[rustfmt::skip]
let args = [
"--base-path", ".",
"-o", "./artifacts", "--overwrite",
"--bin", "--abi",
"--optimize",
"./contracts/Schnorr.sol", "./contracts/Router.sol",
];
assert!(Command::new("solc").args(args).status().unwrap().success());
Abigen::new("Schnorr", "./artifacts/Schnorr.abi")
.unwrap()
.generate()
.unwrap()
.write_to_file("./src/abi/schnorr.rs")
.unwrap();
Abigen::new("Router", "./artifacts/Router.abi")
.unwrap()
.generate()
.unwrap()
.write_to_file("./src/abi/router.rs")
.unwrap();
}

View File

@@ -0,0 +1,90 @@
// SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
import "./Schnorr.sol";
contract Router is Schnorr {
// Contract initializer
// TODO: Replace with a MuSig of the genesis validators
address public initializer;
// Nonce is incremented for each batch of transactions executed
uint256 public nonce;
// fixed parity for the public keys used in this contract
uint8 constant public KEY_PARITY = 27;
// current public key's x-coordinate
// note: this key must always use the fixed parity defined above
bytes32 public seraiKey;
struct OutInstruction {
address to;
uint256 value;
bytes data;
}
struct Signature {
bytes32 c;
bytes32 s;
}
// success is a uint256 representing a bitfield of transaction successes
event Executed(uint256 nonce, bytes32 batch, uint256 success);
// error types
error NotInitializer();
error AlreadyInitialized();
error InvalidKey();
error TooManyTransactions();
constructor() {
initializer = msg.sender;
}
// initSeraiKey can be called by the contract initializer to set the first
// public key, only if the public key has yet to be set.
function initSeraiKey(bytes32 _seraiKey) external {
if (msg.sender != initializer) revert NotInitializer();
if (seraiKey != 0) revert AlreadyInitialized();
if (_seraiKey == bytes32(0)) revert InvalidKey();
seraiKey = _seraiKey;
}
// updateSeraiKey validates the given Schnorr signature against the current public key,
// and if successful, updates the contract's public key to the given one.
function updateSeraiKey(
bytes32 _seraiKey,
Signature memory sig
) public {
if (_seraiKey == bytes32(0)) revert InvalidKey();
bytes32 message = keccak256(abi.encodePacked("updateSeraiKey", _seraiKey));
if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
seraiKey = _seraiKey;
}
// execute accepts a list of transactions to execute as well as a Schnorr signature.
// if signature verification passes, the given transactions are executed.
// if signature verification fails, this function will revert.
function execute(
OutInstruction[] calldata transactions,
Signature memory sig
) public {
if (transactions.length > 256) revert TooManyTransactions();
bytes32 message = keccak256(abi.encode("execute", nonce, transactions));
// This prevents re-entrancy from causing double spends yet does allow
// out-of-order execution via re-entrancy
nonce++;
if (!verify(KEY_PARITY, seraiKey, message, sig.c, sig.s)) revert InvalidSignature();
uint256 successes;
for(uint256 i = 0; i < transactions.length; i++) {
(bool success, ) = transactions[i].to.call{value: transactions[i].value, gas: 200_000}(transactions[i].data);
assembly {
successes := or(successes, shl(i, success))
}
}
emit Executed(nonce, message, successes);
}
}
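The success bitfield in the Executed event packs the i-th call's success into bit i. A hypothetical off-chain helper (Rust, not part of this repository) could read it as follows, assuming the uint256 is held as ethers' four little-endian u64 limbs:
fn instruction_succeeded(successes: [u64; 4], i: usize) -> bool {
  assert!(i < 256);
  // Bit i of the uint256, mirroring `successes := or(successes, shl(i, success))` above
  ((successes[i / 64] >> (i % 64)) & 1) == 1
}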

View File

@@ -0,0 +1,39 @@
// SPDX-License-Identifier: AGPLv3
pragma solidity ^0.8.0;
// see https://github.com/noot/schnorr-verify for implementation details
contract Schnorr {
// secp256k1 group order
uint256 constant public Q =
0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141;
error InvalidSOrA();
error InvalidSignature();
// parity := public key y-coord parity (27 or 28)
// px := public key x-coord
// message := 32-byte hash of the message
// c := schnorr signature challenge
// s := schnorr signature
function verify(
uint8 parity,
bytes32 px,
bytes32 message,
bytes32 c,
bytes32 s
) public view returns (bool) {
// ecrecover = (m, v, r, s);
bytes32 sa = bytes32(Q - mulmod(uint256(s), uint256(px), Q));
bytes32 ca = bytes32(Q - mulmod(uint256(c), uint256(px), Q));
if (sa == 0) revert InvalidSOrA();
// the ecrecover precompile implementation checks that the `r` and `s`
// inputs are non-zero (in this case, `px` and `ca`), thus we don't need to
// check if they're zero.
address R = ecrecover(sa, parity, px, ca);
if (R == address(0)) revert InvalidSignature();
return c == keccak256(
abi.encodePacked(R, uint8(parity), px, block.chainid, message)
);
}
}
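A sketch of the algebra behind the ecrecover usage above (it matches the Rust ecrecover test later in this diff): for inputs (m, v, r, s), ecrecover returns the address of r^-1 * (s*R - m*G), where R is the curve point with x-coordinate r and parity v. Substituting m = -s*px (sa), s = -c*px (ca), and r = px, so R is the signer's key A, yields the address of px^-1 * ((-c*px)*A - (-s*px)*G) = s*G - c*A, which for a valid Schnorr signature is the nonce commitment R. The final keccak256 comparison then checks the challenge c actually binds that recovered nonce address, the key (parity and x-coordinate), the chain ID, and the message.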

View File

@@ -0,0 +1,6 @@
#[rustfmt::skip]
#[allow(clippy::all)]
pub(crate) mod schnorr;
#[rustfmt::skip]
#[allow(clippy::all)]
pub(crate) mod router;

View File

@@ -0,0 +1,91 @@
use sha3::{Digest, Keccak256};
use group::ff::PrimeField;
use k256::{
elliptic_curve::{
bigint::ArrayEncoding, ops::Reduce, point::AffineCoordinates, sec1::ToEncodedPoint,
},
ProjectivePoint, Scalar, U256,
};
use frost::{
algorithm::{Hram, SchnorrSignature},
curve::Secp256k1,
};
pub(crate) fn keccak256(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
pub(crate) fn address(point: &ProjectivePoint) -> [u8; 20] {
let encoded_point = point.to_encoded_point(false);
// Last 20 bytes of the hash of the concatenated x and y coordinates
// We obtain the concatenated x and y coordinates via the uncompressed encoding of the point
keccak256(&encoded_point.as_ref()[1 .. 65])[12 ..].try_into().unwrap()
}
#[allow(non_snake_case)]
pub struct PublicKey {
pub A: ProjectivePoint,
pub px: Scalar,
pub parity: u8,
}
impl PublicKey {
#[allow(non_snake_case)]
pub fn new(A: ProjectivePoint) -> Option<PublicKey> {
let affine = A.to_affine();
let parity = u8::from(bool::from(affine.y_is_odd())) + 27;
if parity != 27 {
None?;
}
let x_coord = affine.x();
let x_coord_scalar = <Scalar as Reduce<U256>>::reduce_bytes(&x_coord);
// Return None if a reduction would occur
if x_coord_scalar.to_repr() != x_coord {
None?;
}
Some(PublicKey { A, px: x_coord_scalar, parity })
}
}
#[derive(Clone, Default)]
pub struct EthereumHram {}
impl Hram<Secp256k1> for EthereumHram {
#[allow(non_snake_case)]
fn hram(R: &ProjectivePoint, A: &ProjectivePoint, m: &[u8]) -> Scalar {
let a_encoded_point = A.to_encoded_point(true);
let mut a_encoded = a_encoded_point.as_ref().to_owned();
a_encoded[0] += 25; // Ethereum uses 27/28 for point parity
assert!((a_encoded[0] == 27) || (a_encoded[0] == 28));
let mut data = address(R).to_vec();
data.append(&mut a_encoded);
data.extend(m);
Scalar::reduce(U256::from_be_slice(&keccak256(&data)))
}
}
pub struct Signature {
pub(crate) c: Scalar,
pub(crate) s: Scalar,
}
impl Signature {
pub fn new(
public_key: &PublicKey,
chain_id: U256,
m: &[u8],
signature: SchnorrSignature<Secp256k1>,
) -> Option<Signature> {
let c = EthereumHram::hram(
&signature.R,
&public_key.A,
&[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
);
if !signature.verify(public_key.A, c) {
None?;
}
Some(Signature { c, s: signature.s })
}
}

16
coins/ethereum/src/lib.rs Normal file
View File

@@ -0,0 +1,16 @@
use thiserror::Error;
pub mod crypto;
pub(crate) mod abi;
pub mod schnorr;
pub mod router;
#[cfg(test)]
mod tests;
#[derive(Error, Debug)]
pub enum Error {
#[error("failed to verify Schnorr signature")]
InvalidSignature,
}

View File

@@ -0,0 +1,30 @@
pub use crate::abi::router::*;
/*
use crate::crypto::{ProcessedSignature, PublicKey};
use ethers::{contract::ContractFactory, prelude::*, solc::artifacts::contract::ContractBytecode};
use eyre::Result;
use std::{convert::From, fs::File, sync::Arc};
pub async fn router_update_public_key<M: Middleware + 'static>(
contract: &Router<M>,
public_key: &PublicKey,
signature: &ProcessedSignature,
) -> std::result::Result<Option<TransactionReceipt>, eyre::ErrReport> {
let tx = contract.update_public_key(public_key.px.to_bytes().into(), signature.into());
let pending_tx = tx.send().await?;
let receipt = pending_tx.await?;
Ok(receipt)
}
pub async fn router_execute<M: Middleware + 'static>(
contract: &Router<M>,
txs: Vec<Rtransaction>,
signature: &ProcessedSignature,
) -> std::result::Result<Option<TransactionReceipt>, eyre::ErrReport> {
let tx = contract.execute(txs, signature.into()).send();
let pending_tx = tx.send().await?;
let receipt = pending_tx.await?;
Ok(receipt)
}
*/

View File

@@ -0,0 +1,34 @@
use eyre::{eyre, Result};
use group::ff::PrimeField;
use ethers_providers::{Provider, Http};
use crate::{
Error,
crypto::{keccak256, PublicKey, Signature},
};
pub use crate::abi::schnorr::*;
pub async fn call_verify(
contract: &Schnorr<Provider<Http>>,
public_key: &PublicKey,
message: &[u8],
signature: &Signature,
) -> Result<()> {
if contract
.verify(
public_key.parity,
public_key.px.to_repr().into(),
keccak256(message),
signature.c.to_repr().into(),
signature.s.to_repr().into(),
)
.call()
.await?
{
Ok(())
} else {
Err(eyre!(Error::InvalidSignature))
}
}

View File

@@ -0,0 +1,132 @@
use rand_core::OsRng;
use sha2::Sha256;
use sha3::{Digest, Keccak256};
use group::Group;
use k256::{
ecdsa::{hazmat::SignPrimitive, signature::DigestVerifier, SigningKey, VerifyingKey},
elliptic_curve::{bigint::ArrayEncoding, ops::Reduce, point::DecompressPoint},
U256, Scalar, AffinePoint, ProjectivePoint,
};
use frost::{
curve::Secp256k1,
algorithm::{Hram, IetfSchnorr},
tests::{algorithm_machines, sign},
};
use crate::{crypto::*, tests::key_gen};
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::reduce(U256::from_be_slice(&keccak256(data)))
}
pub(crate) fn ecrecover(message: Scalar, v: u8, r: Scalar, s: Scalar) -> Option<[u8; 20]> {
if r.is_zero().into() || s.is_zero().into() || !((v == 27) || (v == 28)) {
return None;
}
#[allow(non_snake_case)]
let R = AffinePoint::decompress(&r.to_bytes(), (v - 27).into());
#[allow(non_snake_case)]
if let Some(R) = Option::<AffinePoint>::from(R) {
#[allow(non_snake_case)]
let R = ProjectivePoint::from(R);
let r = r.invert().unwrap();
let u1 = ProjectivePoint::GENERATOR * (-message * r);
let u2 = R * (s * r);
let key: ProjectivePoint = u1 + u2;
if !bool::from(key.is_identity()) {
return Some(address(&key));
}
}
None
}
#[test]
fn test_ecrecover() {
let private = SigningKey::random(&mut OsRng);
let public = VerifyingKey::from(&private);
// Sign the message
const MESSAGE: &[u8] = b"Hello, World!";
let (sig, recovery_id) = private
.as_nonzero_scalar()
.try_sign_prehashed_rfc6979::<Sha256>(&Keccak256::digest(MESSAGE), b"")
.unwrap();
// Sanity check the signature verifies
#[allow(clippy::unit_cmp)] // Intended to assert this wasn't changed to Result<bool>
{
assert_eq!(public.verify_digest(Keccak256::new_with_prefix(MESSAGE), &sig).unwrap(), ());
}
// Perform the ecrecover
assert_eq!(
ecrecover(
hash_to_scalar(MESSAGE),
u8::from(recovery_id.unwrap().is_y_odd()) + 27,
*sig.r(),
*sig.s()
)
.unwrap(),
address(&ProjectivePoint::from(public.as_affine()))
);
}
// Run the sign test with the EthereumHram
#[test]
fn test_signing() {
let (keys, _) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let _sig =
sign(&mut OsRng, &algo, keys.clone(), algorithm_machines(&mut OsRng, &algo, &keys), MESSAGE);
}
#[allow(non_snake_case)]
pub fn preprocess_signature_for_ecrecover(
R: ProjectivePoint,
public_key: &PublicKey,
chain_id: U256,
m: &[u8],
s: Scalar,
) -> (u8, Scalar, Scalar) {
let c = EthereumHram::hram(
&R,
&public_key.A,
&[chain_id.to_be_byte_array().as_slice(), &keccak256(m)].concat(),
);
let sa = -(s * public_key.px);
let ca = -(c * public_key.px);
(public_key.parity, sa, ca)
}
#[test]
fn test_ecrecover_hack() {
let (keys, public_key) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let chain_id = U256::ONE;
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let (parity, sa, ca) =
preprocess_signature_for_ecrecover(sig.R, &public_key, chain_id, MESSAGE, sig.s);
let q = ecrecover(sa, parity, public_key.px, ca).unwrap();
assert_eq!(q, address(&sig.R));
}

View File

@@ -0,0 +1,92 @@
use std::{sync::Arc, time::Duration, fs::File, collections::HashMap};
use rand_core::OsRng;
use group::ff::PrimeField;
use k256::{Scalar, ProjectivePoint};
use frost::{curve::Secp256k1, Participant, ThresholdKeys, tests::key_gen as frost_key_gen};
use ethers_core::{
types::{H160, Signature as EthersSignature},
abi::Abi,
};
use ethers_contract::ContractFactory;
use ethers_providers::{Middleware, Provider, Http};
use crate::crypto::PublicKey;
mod crypto;
mod schnorr;
mod router;
pub fn key_gen() -> (HashMap<Participant, ThresholdKeys<Secp256k1>>, PublicKey) {
let mut keys = frost_key_gen::<_, Secp256k1>(&mut OsRng);
let mut group_key = keys[&Participant::new(1).unwrap()].group_key();
let mut offset = Scalar::ZERO;
while PublicKey::new(group_key).is_none() {
offset += Scalar::ONE;
group_key += ProjectivePoint::GENERATOR;
}
for keys in keys.values_mut() {
*keys = keys.offset(offset);
}
let public_key = PublicKey::new(group_key).unwrap();
(keys, public_key)
}
// TODO: Replace with a contract deployment from an unknown account, so the environment solely has
// to fund the deployer, not create/pass a wallet
// TODO: Deterministic deployments across chains
pub async fn deploy_contract(
chain_id: u32,
client: Arc<Provider<Http>>,
wallet: &k256::ecdsa::SigningKey,
name: &str,
) -> eyre::Result<H160> {
let abi: Abi =
serde_json::from_reader(File::open(format!("./artifacts/{name}.abi")).unwrap()).unwrap();
let hex_bin_buf = std::fs::read_to_string(format!("./artifacts/{name}.bin")).unwrap();
let hex_bin =
if let Some(stripped) = hex_bin_buf.strip_prefix("0x") { stripped } else { &hex_bin_buf };
let bin = hex::decode(hex_bin).unwrap();
let factory = ContractFactory::new(abi, bin.into(), client.clone());
let mut deployment_tx = factory.deploy(())?.tx;
deployment_tx.set_chain_id(chain_id);
deployment_tx.set_gas(1_000_000);
let (max_fee_per_gas, max_priority_fee_per_gas) = client.estimate_eip1559_fees(None).await?;
deployment_tx.as_eip1559_mut().unwrap().max_fee_per_gas = Some(max_fee_per_gas);
deployment_tx.as_eip1559_mut().unwrap().max_priority_fee_per_gas = Some(max_priority_fee_per_gas);
let sig_hash = deployment_tx.sighash();
let (sig, rid) = wallet.sign_prehash_recoverable(sig_hash.as_ref()).unwrap();
// EIP-155 v
let mut v = u64::from(rid.to_byte());
assert!((v == 0) || (v == 1));
v += u64::from((chain_id * 2) + 35);
let r = sig.r().to_repr();
let r_ref: &[u8] = r.as_ref();
let s = sig.s().to_repr();
let s_ref: &[u8] = s.as_ref();
let deployment_tx =
deployment_tx.rlp_signed(&EthersSignature { r: r_ref.into(), s: s_ref.into(), v });
let pending_tx = client.send_raw_transaction(deployment_tx).await?;
let mut receipt;
while {
receipt = client.get_transaction_receipt(pending_tx.tx_hash()).await?;
receipt.is_none()
} {
tokio::time::sleep(Duration::from_secs(6)).await;
}
let receipt = receipt.unwrap();
assert!(receipt.status == Some(1.into()));
Ok(receipt.contract_address.unwrap())
}

View File

@@ -0,0 +1,109 @@
use std::{convert::TryFrom, sync::Arc, collections::HashMap};
use rand_core::OsRng;
use group::ff::PrimeField;
use frost::{
curve::Secp256k1,
Participant, ThresholdKeys,
algorithm::IetfSchnorr,
tests::{algorithm_machines, sign},
};
use ethers_core::{
types::{H160, U256, Bytes},
abi::AbiEncode,
utils::{Anvil, AnvilInstance},
};
use ethers_providers::{Middleware, Provider, Http};
use crate::{
crypto::{keccak256, PublicKey, EthereumHram, Signature},
router::{self, *},
tests::{key_gen, deploy_contract},
};
async fn setup_test() -> (
u32,
AnvilInstance,
Router<Provider<Http>>,
HashMap<Participant, ThresholdKeys<Secp256k1>>,
PublicKey,
) {
let anvil = Anvil::new().spawn();
let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
let contract_address =
deploy_contract(chain_id, client.clone(), &wallet, "Router").await.unwrap();
let contract = Router::new(contract_address, client.clone());
let (keys, public_key) = key_gen();
// Set the key to the threshold keys
let tx = contract.init_serai_key(public_key.px.to_repr().into()).gas(100_000);
let pending_tx = tx.send().await.unwrap();
let receipt = pending_tx.await.unwrap().unwrap();
assert!(receipt.status == Some(1.into()));
(chain_id, anvil, contract, keys, public_key)
}
#[tokio::test]
async fn test_deploy_contract() {
setup_test().await;
}
pub fn hash_and_sign(
keys: &HashMap<Participant, ThresholdKeys<Secp256k1>>,
public_key: &PublicKey,
chain_id: U256,
message: &[u8],
) -> Signature {
let hashed_message = keccak256(message);
let mut chain_id_bytes = [0; 32];
chain_id.to_big_endian(&mut chain_id_bytes);
let full_message = &[chain_id_bytes.as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, keys),
full_message,
);
Signature::new(public_key, k256::U256::from_words(chain_id.0), message, sig).unwrap()
}
#[tokio::test]
async fn test_router_execute() {
let (chain_id, _anvil, contract, keys, public_key) = setup_test().await;
let to = H160([0u8; 20]);
let value = U256([0u64; 4]);
let data = Bytes::from([0]);
let tx = OutInstruction { to, value, data: data.clone() };
let nonce_call = contract.nonce();
let nonce = nonce_call.call().await.unwrap();
let encoded =
("execute".to_string(), nonce, vec![router::OutInstruction { to, value, data }]).encode();
let sig = hash_and_sign(&keys, &public_key, chain_id.into(), &encoded);
let tx = contract
.execute(vec![tx], router::Signature { c: sig.c.to_repr().into(), s: sig.s.to_repr().into() })
.gas(300_000);
let pending_tx = tx.send().await.unwrap();
let receipt = dbg!(pending_tx.await.unwrap().unwrap());
assert!(receipt.status == Some(1.into()));
println!("gas used: {:?}", receipt.cumulative_gas_used);
println!("logs: {:?}", receipt.logs);
}

View File

@@ -0,0 +1,67 @@
use std::{convert::TryFrom, sync::Arc};
use rand_core::OsRng;
use ::k256::{elliptic_curve::bigint::ArrayEncoding, U256, Scalar};
use ethers_core::utils::{keccak256, Anvil, AnvilInstance};
use ethers_providers::{Middleware, Provider, Http};
use frost::{
curve::Secp256k1,
algorithm::IetfSchnorr,
tests::{algorithm_machines, sign},
};
use crate::{
crypto::*,
schnorr::*,
tests::{key_gen, deploy_contract},
};
async fn setup_test() -> (u32, AnvilInstance, Schnorr<Provider<Http>>) {
let anvil = Anvil::new().spawn();
let provider = Provider::<Http>::try_from(anvil.endpoint()).unwrap();
let chain_id = provider.get_chainid().await.unwrap().as_u32();
let wallet = anvil.keys()[0].clone().into();
let client = Arc::new(provider);
let contract_address =
deploy_contract(chain_id, client.clone(), &wallet, "Schnorr").await.unwrap();
let contract = Schnorr::new(contract_address, client.clone());
(chain_id, anvil, contract)
}
#[tokio::test]
async fn test_deploy_contract() {
setup_test().await;
}
#[tokio::test]
async fn test_ecrecover_hack() {
let (chain_id, _anvil, contract) = setup_test().await;
let chain_id = U256::from(chain_id);
let (keys, public_key) = key_gen();
const MESSAGE: &[u8] = b"Hello, World!";
let hashed_message = keccak256(MESSAGE);
let full_message = &[chain_id.to_be_byte_array().as_slice(), &hashed_message].concat();
let algo = IetfSchnorr::<Secp256k1, EthereumHram>::ietf();
let sig = sign(
&mut OsRng,
&algo,
keys.clone(),
algorithm_machines(&mut OsRng, &algo, &keys),
full_message,
);
let sig = Signature::new(&public_key, chain_id, MESSAGE, sig).unwrap();
call_verify(&contract, &public_key, MESSAGE, &sig).await.unwrap();
// Test an invalid signature fails
let mut sig = sig;
sig.s += Scalar::ONE;
assert!(call_verify(&contract, &public_key, MESSAGE, &sig).await.is_err());
}


@@ -1 +0,0 @@
c/.build


@@ -1,52 +1,113 @@
[package]
name = "monero-serai"
version = "0.1.0"
description = "A modern Monero wallet library"
version = "0.1.4-alpha"
description = "A modern Monero transaction library"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
rust-version = "1.74"
[build-dependencies]
cc = "1.0"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
hex-literal = "0.3"
lazy_static = "1"
thiserror = "1"
std-shims = { path = "../../common/std-shims", version = "^0.1.1", default-features = false }
rand_core = "0.6"
rand_chacha = { version = "0.3", optional = true }
rand = "0.8"
rand_distr = "0.4"
async-trait = { version = "0.1", default-features = false }
thiserror = { version = "1", default-features = false, optional = true }
subtle = "2.4"
zeroize = { version = "^1.5", default-features = false, features = ["zeroize_derive"] }
subtle = { version = "^2.4", default-features = false }
tiny-keccak = { version = "2", features = ["keccak"] }
blake2 = { version = "0.10", optional = true }
rand_core = { version = "0.6", default-features = false }
# Used to send transactions
rand = { version = "0.8", default-features = false }
rand_chacha = { version = "0.3", default-features = false }
# Used to select decoys
rand_distr = { version = "0.4", default-features = false }
curve25519-dalek = { version = "3", features = ["std"] }
sha3 = { version = "0.10", default-features = false }
pbkdf2 = { version = "0.12", features = ["simple"], default-features = false }
group = { version = "0.12" }
dalek-ff-group = { path = "../../crypto/dalek-ff-group" }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", features = ["recommended"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["ed25519"], optional = true }
dleq = { package = "dleq-serai", path = "../../crypto/dleq", features = ["serialize"], optional = true }
# Used for the hash to curve, along with the more complicated proofs
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
multiexp = { path = "../../crypto/multiexp", version = "0.4", default-features = false, features = ["batch"] }
hex = "0.4"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
# Needed for multisig
transcript = { package = "flexible-transcript", path = "../../crypto/transcript", version = "0.3", default-features = false, features = ["recommended"], optional = true }
dleq = { path = "../../crypto/dleq", version = "0.4", default-features = false, features = ["serialize"], optional = true }
frost = { package = "modular-frost", path = "../../crypto/frost", version = "0.8", default-features = false, features = ["ed25519"], optional = true }
base58-monero = "1"
monero-epee-bin-serde = "1.0"
monero = "0.16"
monero-generators = { path = "generators", version = "0.4", default-features = false }
reqwest = { version = "0.11", features = ["json"] }
async-lock = { version = "3", default-features = false, optional = true }
[features]
experimental = []
multisig = ["rand_chacha", "blake2", "transcript", "frost", "dleq"]
hex-literal = "0.4"
hex = { version = "0.4", default-features = false, features = ["alloc"] }
serde = { version = "1", default-features = false, features = ["derive", "alloc"] }
serde_json = { version = "1", default-features = false, features = ["alloc"] }
base58-monero = { version = "2", default-features = false, features = ["check"] }
# Used for the provided HTTP RPC
digest_auth = { version = "0.3", default-features = false, optional = true }
simple-request = { path = "../../common/request", version = "0.1", default-features = false, features = ["tls"], optional = true }
tokio = { version = "1", default-features = false, optional = true }
[build-dependencies]
dalek-ff-group = { path = "../../crypto/dalek-ff-group", version = "0.4", default-features = false }
monero-generators = { path = "generators", version = "0.4", default-features = false }
[dev-dependencies]
sha2 = "0.10"
tokio = { version = "1", features = ["full"] }
tokio = { version = "1", features = ["sync", "macros"] }
frost = { package = "modular-frost", path = "../../crypto/frost", features = ["tests"] }
[features]
std = [
"std-shims/std",
"thiserror",
"zeroize/std",
"subtle/std",
"rand_core/std",
"rand/std",
"rand_chacha/std",
"rand_distr/std",
"sha3/std",
"pbkdf2/std",
"multiexp/std",
"transcript/std",
"dleq/std",
"monero-generators/std",
"async-lock?/std",
"hex/std",
"serde/std",
"serde_json/std",
"base58-monero/std",
]
cache-distribution = ["async-lock"]
http-rpc = ["digest_auth", "simple-request", "tokio"]
multisig = ["transcript", "frost", "dleq", "std"]
binaries = ["tokio/rt-multi-thread", "tokio/macros", "http-rpc"]
experimental = []
default = ["std", "http-rpc"]


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2022 Luke Parker
Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal


@@ -4,4 +4,46 @@ A modern Monero transaction library intended for usage in wallets. It prides
itself on accuracy, correctness, and removing common pitfalls developers may
face.
Threshold multisignature support is available via the `multisig` feature.
monero-serai also offers the following features:
- Featured Addresses
- A FROST-based multisig that is orders of magnitude more performant than Monero's
### Purpose and support
monero-serai was written for Serai, a decentralized exchange aiming to support
Monero. Despite this, monero-serai is intended to be a widely usable library,
accurate to Monero. monero-serai guarantees the functionality needed for Serai,
yet does not withhold functionality from other users.
Various legacy transaction formats are not currently implemented, yet we are
willing to add support for them. There are, however, no active development efforts
around them.
### Caveats
This library DOES attempt to do the following:
- Create on-chain transactions identical to how wallet2 would (unless told not
to)
- Not be detectable as monero-serai when scanning outputs
- Not reveal spent outputs to the connected RPC node
This library DOES NOT attempt to do the following:
- Have identical RPC behavior when creating transactions
- Be a wallet
This means that monero-serai shouldn't be fingerprintable on-chain. It also
shouldn't be fingerprintable under a targeted attack attempting to detect whether the
receiving wallet is monero-serai or wallet2. It should also be generally safe
for usage with remote nodes.
It won't hide the fact it's monero-serai from remote nodes, however, potentially
allowing a remote node to profile you. The implications of this are left to the
user to consider.
It also won't act as a wallet, just as a transaction library. wallet2 has
several *non-transaction-level* policies, such as always attempting to use two
inputs to create transactions. These are considered out of scope for
monero-serai.


@@ -1,72 +1,67 @@
use std::{env, path::Path, process::Command};
use std::{
io::Write,
env,
path::Path,
fs::{File, remove_file},
};
use dalek_ff_group::EdwardsPoint;
use monero_generators::bulletproofs_generators;
fn serialize(generators_string: &mut String, points: &[EdwardsPoint]) {
for generator in points {
generators_string.extend(
format!(
"
dalek_ff_group::EdwardsPoint(
curve25519_dalek::edwards::CompressedEdwardsY({:?}).decompress().unwrap()
),
",
generator.compress().to_bytes()
)
.chars(),
);
}
}
fn generators(prefix: &'static str, path: &str) {
let generators = bulletproofs_generators(prefix.as_bytes());
#[allow(non_snake_case)]
let mut G_str = String::new();
serialize(&mut G_str, &generators.G);
#[allow(non_snake_case)]
let mut H_str = String::new();
serialize(&mut H_str, &generators.H);
let path = Path::new(&env::var("OUT_DIR").unwrap()).join(path);
let _ = remove_file(&path);
File::create(&path)
.unwrap()
.write_all(
format!(
"
pub(crate) static GENERATORS_CELL: OnceLock<Generators> = OnceLock::new();
pub fn GENERATORS() -> &'static Generators {{
GENERATORS_CELL.get_or_init(|| Generators {{
G: vec![
{G_str}
],
H: vec![
{H_str}
],
}})
}}
",
)
.as_bytes(),
)
.unwrap();
}
fn main() {
if !Command::new("git").args(&["submodule", "update", "--init", "--recursive"]).status().unwrap().success() {
panic!("git failed to init submodules");
}
println!("cargo:rerun-if-changed=build.rs");
if !Command::new("mkdir").args(&["-p", ".build"])
.current_dir(&Path::new("c")).status().unwrap().success() {
panic!("failed to create a directory to track build progress");
}
let out_dir = &env::var("OUT_DIR").unwrap();
// Use a file to signal if Monero was already built, as that should never be rebuilt
// If the signaling file was deleted, run this script again to rebuild Monero though
println!("cargo:rerun-if-changed=c/.build/monero");
if !Path::new("c/.build/monero").exists() {
if !Command::new("make").arg(format!("-j{}", &env::var("THREADS").unwrap_or("2".to_string())))
.current_dir(&Path::new("c/monero")).status().unwrap().success() {
panic!("make failed to build Monero. Please check your dependencies");
}
if !Command::new("touch").arg("monero")
.current_dir(&Path::new("c/.build")).status().unwrap().success() {
panic!("failed to create a file to label Monero as built");
}
}
println!("cargo:rerun-if-changed=c/wrapper.cpp");
cc::Build::new()
.static_flag(true)
.warnings(false)
.extra_warnings(false)
.flag("-Wno-deprecated-declarations")
.include("c/monero/external/supercop/include")
.include("c/monero/contrib/epee/include")
.include("c/monero/src")
.include("c/monero/build/release/generated_include")
.define("AUTO_INITIALIZE_EASYLOGGINGPP", None)
.include("c/monero/external/easylogging++")
.file("c/monero/external/easylogging++/easylogging++.cc")
.file("c/monero/src/common/aligned.c")
.file("c/monero/src/common/perf_timer.cpp")
.include("c/monero/src/crypto")
.file("c/monero/src/crypto/crypto-ops-data.c")
.file("c/monero/src/crypto/crypto-ops.c")
.file("c/monero/src/crypto/keccak.c")
.file("c/monero/src/crypto/hash.c")
.include("c/monero/src/device")
.file("c/monero/src/device/device_default.cpp")
.include("c/monero/src/ringct")
.file("c/monero/src/ringct/rctCryptoOps.c")
.file("c/monero/src/ringct/rctTypes.cpp")
.file("c/monero/src/ringct/rctOps.cpp")
.file("c/monero/src/ringct/multiexp.cc")
.file("c/monero/src/ringct/bulletproofs.cc")
.file("c/monero/src/ringct/rctSigs.cpp")
.file("c/wrapper.cpp")
.compile("wrapper");
println!("cargo:rustc-link-search={}", out_dir);
println!("cargo:rustc-link-lib=wrapper");
println!("cargo:rustc-link-lib=stdc++");
generators("bulletproof", "generators.rs");
generators("bulletproof_plus", "generators_plus.rs");
}
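The `generators` helper above writes plain Rust source into `OUT_DIR`, which the library side would then pull in with `include!`. A minimal sketch of such a consumer, assuming the include site has `OnceLock` and the `Generators` struct in scope (the exact module isn't shown in this diff):

```rust
// Hypothetical consumer of the build script's output; only GENERATORS_CELL and
// GENERATORS() come from the generated file, everything else here is assumed.
use std_shims::sync::OnceLock;
use monero_generators::Generators;

// Expands to the static cell and the GENERATORS() accessor emitted by build.rs
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
```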


@@ -1,158 +0,0 @@
#include <mutex>
#include "device/device_default.hpp"
#include "ringct/bulletproofs.h"
#include "ringct/rctSigs.h"
typedef std::lock_guard<std::mutex> lock;
std::mutex rng_mutex;
uint8_t rng_entropy[64];
extern "C" {
void rng(uint8_t* seed) {
// Set the first half to the seed
memcpy(rng_entropy, seed, 32);
// Set the second half to the hash of a DST to ensure a lack of collisions
crypto::cn_fast_hash("RNG_entropy_seed", 16, (char*) &rng_entropy[32]);
}
}
extern "C" void monero_wide_reduce(uint8_t* value);
namespace crypto {
void generate_random_bytes_not_thread_safe(size_t n, void* value) {
size_t written = 0;
while (written != n) {
uint8_t hash[32];
crypto::cn_fast_hash(rng_entropy, 64, (char*) hash);
// Step the RNG by setting the latter half to the most recent result
// Does not leak the RNG, even if the values are leaked (which they are
// expected to be) due to the first half remaining constant and
// undisclosed
memcpy(&rng_entropy[32], hash, 32);
size_t next = n - written;
if (next > 32) {
next = 32;
}
memcpy(&((uint8_t*) value)[written], hash, next);
written += next;
}
}
void random32_unbiased(unsigned char *bytes) {
uint8_t value[64];
generate_random_bytes_not_thread_safe(64, value);
monero_wide_reduce(value);
memcpy(bytes, value, 32);
}
}
extern "C" {
void c_hash_to_point(uint8_t* point) {
rct::key key_point;
ge_p3 e_p3;
memcpy(key_point.bytes, point, 32);
rct::hash_to_p3(e_p3, key_point);
ge_p3_tobytes(point, &e_p3);
}
uint8_t* c_generate_bp(uint8_t* seed, uint8_t len, uint64_t* a, uint8_t* m) {
lock guard(rng_mutex);
rng(seed);
rct::keyV masks;
std::vector<uint64_t> amounts;
masks.resize(len);
amounts.resize(len);
for (uint8_t i = 0; i < len; i++) {
memcpy(masks[i].bytes, m + (i * 32), 32);
amounts[i] = a[i];
}
rct::Bulletproof bp = rct::bulletproof_PROVE(amounts, masks);
std::stringstream ss;
binary_archive<true> ba(ss);
::serialization::serialize(ba, bp);
uint8_t* res = (uint8_t*) calloc(ss.str().size(), 1);
memcpy(res, ss.str().data(), ss.str().size());
return res;
}
bool c_verify_bp(
uint8_t* seed,
uint s_len,
uint8_t* s,
uint8_t c_len,
uint8_t* c
) {
// BPs are batch verified which use RNG based weights to ensure individual
// integrity
// That's why this must also have control over RNG, to prevent interrupting
// multisig signing while not using known seeds. Considering this doesn't
// actually define a batch, and it's only verifying a single BP,
// it'd probably be fine, but...
lock guard(rng_mutex);
rng(seed);
rct::Bulletproof bp;
std::stringstream ss;
std::string str;
str.assign((char*) s, (size_t) s_len);
ss << str;
binary_archive<false> ba(ss);
::serialization::serialize(ba, bp);
if (!ss.good()) {
return false;
}
bp.V.resize(c_len);
for (uint8_t i = 0; i < c_len; i++) {
memcpy(bp.V[i].bytes, &c[i * 32], 32);
}
try { return rct::bulletproof_VERIFY(bp); } catch(...) { return false; }
}
bool c_verify_clsag(
uint s_len,
uint8_t* s,
uint8_t k_len,
uint8_t* k,
uint8_t* I,
uint8_t* p,
uint8_t* m
) {
rct::clsag clsag;
std::stringstream ss;
std::string str;
str.assign((char*) s, (size_t) s_len);
ss << str;
binary_archive<false> ba(ss);
::serialization::serialize(ba, clsag);
if (!ss.good()) {
return false;
}
rct::ctkeyV keys;
keys.resize(k_len);
for (uint8_t i = 0; i < k_len; i++) {
memcpy(keys[i].dest.bytes, &k[(i * 2) * 32], 32);
memcpy(keys[i].mask.bytes, &k[((i * 2) + 1) * 32], 32);
}
memcpy(clsag.I.bytes, I, 32);
rct::key pseudo_out;
memcpy(pseudo_out.bytes, p, 32);
rct::key msg;
memcpy(msg.bytes, m, 32);
try {
return verRctCLSAGSimple(msg, clsag, keys, pseudo_out);
} catch(...) { return false; }
}
}


@@ -0,0 +1,34 @@
[package]
name = "monero-generators"
version = "0.4.0"
description = "Monero's hash_to_point and generators"
license = "MIT"
repository = "https://github.com/serai-dex/serai/tree/develop/coins/monero/generators"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
[lints]
workspace = true
[dependencies]
std-shims = { path = "../../../common/std-shims", version = "^0.1.1", default-features = false }
subtle = { version = "^2.4", default-features = false }
sha3 = { version = "0.10", default-features = false }
curve25519-dalek = { version = "4", default-features = false, features = ["alloc", "zeroize", "precomputed-tables"] }
group = { version = "0.13", default-features = false }
dalek-ff-group = { path = "../../../crypto/dalek-ff-group", version = "0.4", default-features = false }
[dev-dependencies]
hex = "0.4"
[features]
std = ["std-shims/std", "subtle/std", "sha3/std", "dalek-ff-group/std"]
default = ["std"]


@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022-2023 Luke Parker
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -0,0 +1,7 @@
# Monero Generators
Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
`hash_to_point` here, is included, as needed to generate generators.
This library is usable under no-std when the `std` feature is disabled.
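A minimal usage sketch, based on the functions this crate exports (`hash_to_point`, `decompress_point`, `H`, and `bulletproofs_generators`); the inputs are arbitrary example values:

```rust
use monero_generators::{hash_to_point, decompress_point, H, bulletproofs_generators};

fn main() {
  // Map 32 bytes to a torsion-free point, as Monero's hash_to_ec does
  let point = hash_to_point([0xde; 32]);

  // Compressing always yields a canonical encoding, so it decompresses back
  let bytes = point.compress().to_bytes();
  assert!(decompress_point(bytes).is_some());

  // The amount generator H and the Bulletproofs(+) generators are also exposed
  let _h = H();
  let gens = bulletproofs_generators(b"bulletproof");
  assert_eq!(gens.G.len(), gens.H.len());
}
```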


@@ -0,0 +1,60 @@
use subtle::ConditionallySelectable;
use curve25519_dalek::edwards::{EdwardsPoint, CompressedEdwardsY};
use group::ff::{Field, PrimeField};
use dalek_ff_group::FieldElement;
use crate::hash;
/// Decompress a canonically encoded ed25519 point.
/// This does not check if the point is within the prime-order subgroup.
pub fn decompress_point(bytes: [u8; 32]) -> Option<EdwardsPoint> {
CompressedEdwardsY(bytes)
.decompress()
// Ban points which are either unreduced or -0
.filter(|point| point.compress().to_bytes() == bytes)
}
/// Monero's hash to point function, named `hash_to_ec` within Monero.
pub fn hash_to_point(bytes: [u8; 32]) -> EdwardsPoint {
#[allow(non_snake_case)]
let A = FieldElement::from(486662u64);
let v = FieldElement::from_square(hash(&bytes)).double();
let w = v + FieldElement::ONE;
let x = w.square() + (-A.square() * v);
// This isn't the complete X, merely its initial value
// We don't calculate the full X, and instead solely calculate Y, letting dalek reconstruct X
// While inefficient, this respects API boundaries and reduces the amount of work done here
#[allow(non_snake_case)]
let X = {
let u = w;
let v = x;
let v3 = v * v * v;
let uv3 = u * v3;
let v7 = v3 * v3 * v;
let uv7 = u * v7;
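// Candidate for sqrt(u / v): u * v^3 * (u * v^7)^((p - 5) / 8), the standard
// square-root-of-a-ratio formula for Ed25519's field
// The exponent passed to pow below is (p - 5) / 8, written as -5 * 8^{-1}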
uv3 * uv7.pow((-FieldElement::from(5u8)) * FieldElement::from(8u8).invert().unwrap())
};
let x = X.square() * x;
let y = w - x;
let non_zero_0 = !y.is_zero();
let y_if_non_zero_0 = w + x;
let sign = non_zero_0 & (!y_if_non_zero_0.is_zero());
let mut z = -A;
z *= FieldElement::conditional_select(&v, &FieldElement::from(1u8), sign);
#[allow(non_snake_case)]
let Z = z + w;
#[allow(non_snake_case)]
let mut Y = z - w;
Y *= Z.invert().unwrap();
let mut bytes = Y.to_repr();
bytes[31] |= sign.unwrap_u8() << 7;
decompress_point(bytes).unwrap().mul_by_cofactor()
}


@@ -0,0 +1,79 @@
//! Generators used by Monero in both its Pedersen commitments and Bulletproofs(+).
//!
//! An implementation of Monero's `ge_fromfe_frombytes_vartime`, simply called
//! `hash_to_point` here, is included, as needed to generate generators.
#![cfg_attr(not(feature = "std"), no_std)]
use std_shims::{sync::OnceLock, vec::Vec};
use sha3::{Digest, Keccak256};
use curve25519_dalek::edwards::{EdwardsPoint as DalekPoint};
use group::{Group, GroupEncoding};
use dalek_ff_group::EdwardsPoint;
mod varint;
use varint::write_varint;
mod hash_to_point;
pub use hash_to_point::{hash_to_point, decompress_point};
#[cfg(test)]
mod tests;
fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
static H_CELL: OnceLock<DalekPoint> = OnceLock::new();
/// Monero's alternate generator `H`, used for amounts in Pedersen commitments.
#[allow(non_snake_case)]
pub fn H() -> DalekPoint {
*H_CELL.get_or_init(|| {
decompress_point(hash(&EdwardsPoint::generator().to_bytes())).unwrap().mul_by_cofactor()
})
}
static H_POW_2_CELL: OnceLock<[DalekPoint; 64]> = OnceLock::new();
/// Monero's alternate generator `H`, multiplied by 2**i for i in 0 .. 64.
#[allow(non_snake_case)]
pub fn H_pow_2() -> &'static [DalekPoint; 64] {
H_POW_2_CELL.get_or_init(|| {
let mut res = [H(); 64];
for i in 1 .. 64 {
res[i] = res[i - 1] + res[i - 1];
}
res
})
}
const MAX_M: usize = 16;
const N: usize = 64;
const MAX_MN: usize = MAX_M * N;
/// Container struct for Bulletproofs(+) generators.
#[allow(non_snake_case)]
pub struct Generators {
pub G: Vec<EdwardsPoint>,
pub H: Vec<EdwardsPoint>,
}
/// Generate generators as needed for Bulletproofs(+), as Monero does.
pub fn bulletproofs_generators(dst: &'static [u8]) -> Generators {
let mut res = Generators { G: Vec::with_capacity(MAX_MN), H: Vec::with_capacity(MAX_MN) };
for i in 0 .. MAX_MN {
let i = 2 * i;
let mut even = H().compress().to_bytes().to_vec();
even.extend(dst);
let mut odd = even.clone();
write_varint(&i.try_into().unwrap(), &mut even).unwrap();
write_varint(&(i + 1).try_into().unwrap(), &mut odd).unwrap();
res.H.push(EdwardsPoint(hash_to_point(hash(&even))));
res.G.push(EdwardsPoint(hash_to_point(hash(&odd))));
}
res
}


@@ -0,0 +1,38 @@
use crate::{decompress_point, hash_to_point};
#[test]
fn crypto_tests() {
// tests.txt file copied from monero repo
// https://github.com/monero-project/monero/
// blob/ac02af92867590ca80b2779a7bbeafa99ff94dcb/tests/crypto/tests.txt
let reader = include_str!("./tests.txt");
for line in reader.lines() {
let mut words = line.split_whitespace();
let command = words.next().unwrap();
match command {
"check_key" => {
let key = words.next().unwrap();
let expected = match words.next().unwrap() {
"true" => true,
"false" => false,
_ => unreachable!("invalid result"),
};
let actual = decompress_point(hex::decode(key).unwrap().try_into().unwrap());
assert_eq!(actual.is_some(), expected);
}
"hash_to_ec" => {
let bytes = words.next().unwrap();
let expected = words.next().unwrap();
let actual = hash_to_point(hex::decode(bytes).unwrap().try_into().unwrap());
assert_eq!(hex::encode(actual.compress().to_bytes()), expected);
}
_ => unreachable!("unknown command"),
}
}
}


@@ -0,0 +1 @@
mod hash_to_point;


@@ -0,0 +1,628 @@
check_key c2cb3cf3840aa9893e00ec77093d3d44dba7da840b51c48462072d58d8efd183 false
check_key bd85a61bae0c101d826cbed54b1290f941d26e70607a07fc6f0ad611eb8f70a6 true
check_key 328f81cad4eba24ab2bad7c0e56b1e2e7346e625bcb06ae649aef3ffa0b8bef3 false
check_key 6016a5463b9e5a58c3410d3f892b76278883473c3f0b69459172d3de49e85abe true
check_key 4c71282b2add07cdc6898a2622553f1ca4eb851e5cb121181628be5f3814c5b1 false
check_key 69393c25c3b50e177f81f20f852dd604e768eb30052e23108b3cfa1a73f2736e true
check_key 3d5a89b676cb84c2be3428d20a660dc6a37cae13912e127888a5132e8bac2163 true
check_key 78cd665deb28cebc6208f307734c56fccdf5fa7e2933fadfcdd2b6246e9ae95c false
check_key e03b2414e260580f86ee294cd4c636a5b153e617f704e81dad248fbf715b2ee4 true
check_key 28c3503ce82d7cdc8e0d96c4553bcf0352bbcfc73925495dbe541e7e1df105fc false
check_key 06855c3c3e0d03fec354059bda319b39916bdc10b6581e3f41b335ee7b014fd5 false
check_key 556381485df0d7d5a268ab5ecfb2984b060acc63471183fcf538bf273b0c0cb5 true
check_key c7f76d82ac64b1e7fdc32761ff00d6f0f7ada4cf223aa5a11187e3a02e1d5319 true
check_key cfa85d8bdb6f633fcf031adee3a299ac42eeb6bd707744049f652f6322f5aa47 true
check_key 91e9b63ced2b08979fee713365464cc3417c4f238f9bdd3396efbb3c58e195ee true
check_key 7b56e76fe94bd30b3b2f2c4ba5fe4c504821753a8965eb1cbcf8896e2d6aba19 true
check_key 7338df494bc416cf5edcc02069e067f39cb269ce67bd9faba956021ce3b3de3a false
check_key f9a1f27b1618342a558379f4815fa5039a8fe9d98a09f45c1af857ba99231dc1 false
check_key b2a1f37718180d4448a7fcb5f788048b1a7132dde1cfd25f0b9b01776a21c687 true
check_key 0d3a0f9443a8b24510ad1e76a8117cca03bce416edfe35e3c2a2c2712454f8dc false
check_key d8d3d806a76f120c4027dc9c9d741ad32e06861b9cfbc4ce39289c04e251bb3c false
check_key 1e9e3ba7bc536cd113606842835d1f05b4b9e65875742f3a35bfb2d63164b5d5 true
check_key 5c52d0087997a2cdf1d01ed0560d94b4bfd328cb741cb9a8d46ff50374b35a57 true
check_key bb669d4d7ffc4b91a14defedcdbd96b330108b01adc63aa685e2165284c0033b false
check_key d2709ae751a0a6fd796c98456fa95a7b64b75a3434f1caa3496eeaf5c14109b4 true
check_key e0c238cba781684e655b10a7d4af04ab7ff2e7022182d7ed2279d6adf36b3e7a false
check_key 34ebb4bf871572cee5c6935716fab8c8ec28feef4f039763d8f039b84a50bf4c false
check_key 4730d4f38ec3f3b83e32e6335d2506df4ee39858848842c5a0184417fcc639e4 true
check_key d42cf7fdf5e17e0a8a7f88505a2b7a3d297113bd93d3c20fa87e11509ec905a2 true
check_key b757c95059cefabb0080d3a8ebca82e46efecfd29881be3121857f9d915e388c false
check_key bbe777aaf04d02b96c0632f4b1c6f35f1c7bcbc5f22af192f92c077709a2b50b false
check_key 73518522aabd28566f858c33fccb34b7a4de0e283f6f783f625604ee647afad9 true
check_key f230622c4a8f6e516590466bd10f86b64fbef61695f6a054d37604e0b024d5af false
check_key bc6b9a8379fd6c369f7c3bd9ddce58db6b78f27a41d798bb865c3920824d0943 false
check_key 45a4f87c25898cd6be105fa1602b85c4d862782adaac8b85c996c4a2bcd8af47 true
check_key eb4ad3561d21c4311affbd7cc2c7ff5fd509f72f88ba67dc097a75c31fdbd990 false
check_key 2f34f4630c09a23b7ecc19f02b4190a26df69e07e13de8069ae5ff80d23762fc true
check_key 2ea4e4fb5085eb5c8adee0d5ab7d35c67d74d343bd816cd13924536cffc2527c true
check_key 5d35467ee6705a0d35818aa9ae94e4603c3e5500bfc4cf4c4f77a7160a597aa6 true
check_key 8ff42bc76796e20c99b6e879369bd4b46a256db1366416291de9166e39d5a093 true
check_key 0262ba718850df6c621e8a24cd9e4831c047e38818a89e15c7a06a489a4558e1 false
check_key 58b29b2ba238b534b08fb46f05f430e61cb77dc251b0bb50afec1b6061fd9247 false
check_key 153170e3dc2b0e1b368fc0d0e31053e872f094cdace9a2846367f0d9245a109b false
check_key 40419d309d07522d493bb047ca9b5fb6c401aae226eefae6fd395f5bb9114200 true
check_key 713068818d256ef69c78cd6082492013fbd48de3c9e7e076415dd0a692994504 true
check_key a7218ee08e50781b0c87312d5e0031467e863c10081668e3792d96cbcee4e474 true
check_key 356ce516b00e674ef1729c75b0a68090e7265cef675bbf32bf809495b67e9342 false
check_key 52a5c053293675e3efd2c585047002ea6d77931cbf38f541b9070d319dc0d237 false
check_key 77c0080bf157e069b18c4c604cc9505c5ec6f0f9930e087592d70507ca1b5534 false
check_key e733bc41f880a4cfb1ca6f397916504130807289cacfca10b15f5b8d058ed1bf false
check_key c4f1d3c884908a574ecea8be10e02277de35ef84a1d10f105f2be996f285161f true
check_key aed677f7f69e146aa0863606ac580fc0bbdc22a88c4b4386abaa4bdfff66bcc9 false
check_key 6ad0edf59769599af8caa986f502afc67aecbebb8107aaf5e7d3ae51d5cf8dd8 false
check_key 64a0a70e99be1f775c222ee9cd6f1bee6f632cb9417899af398ff9aff70661c6 true
check_key c63afaa03bb5c4ed7bc77aac175dbfb73f904440b2e3056a65850ac1bd261332 false
check_key a4e89cd2471c26951513b1cfbdcf053a86575e095af52495276aa56ede8ce344 false
check_key 2ce935d97f7c3ddb973de685d20f58ee39938fe557216328045ec2b83f3132be true
check_key 3e3d38b1fca93c1559ac030d586616354c668aa76245a09e3fa6de55ac730973 true
check_key 8b81b9681f76a4254007fd07ed1ded25fc675973ccb23afd06074805194733a4 false
check_key 26d1c15dfc371489439e29bcef2afcf7ed01fac24960fdc2e7c20847a8067588 true
check_key 85c1199b5a4591fc4cc36d23660648c1b9cfbb0e9c47199fa3eea33299a3dcec false
check_key 60830ba5449c1f04ac54675dfc7cac7510106c4b7549852551f8fe65971123e2 false
check_key 3e43c28c024597b3b836e4bc16905047cbf6e841b80e0b8cd6a325049070c2a5 false
check_key 474792c16a0032343a6f28f4cb564747c3b1ea0b6a6b9a42f7c71d7cc3dd3b44 true
check_key c8ec5e67cb5786673085191881950a3ca20dde88f46851b01dd91c695cfbad16 true
check_key 861c4b24b24a87b8559e0bb665f84dcc506c147a909f335ae4573b92299f042f false
check_key 2c9e0fe3e4983d79f86c8c36928528f1bc90d94352ce427032cdef6906d84d0b true
check_key 9293742822c2dff63fdc1bf6645c864fd527cea2ddba6d4f3048d202fc340c9a true
check_key 3956422ad380ef19cb9fe360ef09cc7aaec7163eea4114392a7a0b2e2671914e true
check_key 5ae8e72cadda85e525922fec11bd53a261cf26ee230fe85a1187f831b1b2c258 false
check_key 973feca43a0baf450c30ace5dc19015e19400f0898316e28d9f3c631da31f99a true
check_key dd946c91a2077f45c5c16939e53859d9beabaf065e7b1b993d5e5cd385f8716e true
check_key b3928f2d67e47f6bd6da81f72e64908d8ff391af5689f0202c4c6fec7666ffe8 true
check_key 313382e82083697d7f9d256c3b3800b099b56c3ef33cacdccbd40a65622e25fc false
check_key 7d65380c12144802d39ed9306eed79fe165854273700437c0b4b50559800c058 true
check_key 4db5c20a49422fd27739c9ca80e2271a8a125dfcead22cb8f035d0e1b7b163be true
check_key dd76a9f565ef0e44d1531349ec4c5f7c3c387c2f5823e693b4952f4b0b70808c true
check_key 66430bf628eae23918c3ed17b42138db1f98c24819e55fc4a07452d0c85603eb true
check_key 9f0b677830c3f089c27daf724bb10be848537f8285de83ab0292d35afb617f77 false
check_key cbf98287391fb00b1e68ad64e9fb10198025864c099b8b9334d840457e673874 true
check_key a42552e9446e49a83aed9e3370506671216b2d1471392293b8fc2b81c81a73ee false
check_key fb3de55ac81a923d506a514602d65d004ec9d13e8b47e82d73af06da73006673 false
check_key e17abb78e58a4b72ff4ad7387b290f2811be880b394b8bcaae7748ac09930169 false
check_key 9ffbda7ace69753761cdb5eb01f75433efa5cdb6a4f1b664874182c6a95adcba true
check_key 507123c979179ea0a3f7f67fb485f71c8636ec4ec70aa47b92f3c707e7541a54 false
check_key f1d0b156571994ef578c61cb6545d34f834eb30e4357539a5633c862d4dffa91 false
check_key 3de62311ec14f9ee95828c190b2dc3f03059d6119e8dfccb7323efc640e07c75 false
check_key 5e50bb48bc9f6dd11d52c1f0d10d8ae5674d7a4af89cbbce178dafc8a562e5fe false
check_key 20b2c16497be101995391ceefb979814b0ea76f1ed5b6987985bcdcd17b36a81 false
check_key d63bff73b914ce791c840e99bfae0d47afdb99c2375e33c8f149d0df03d97873 false
check_key 3f24b3d94b5ddd244e4c4e67a6d9f533f0396ca30454aa0ca799f21328b81d47 true
check_key 6a44c016f09225a6d2e830290719d33eb29b53b553eea7737ed3a6e297b2e7d2 true
check_key ff0f34df0c76c207b8340be2009db72f730c69c2bbfeea2013105eaccf1d1f8e true
check_key 4baf559869fe4e915e219c3c8d9a2330fc91e542a5a2a7311d4d59fee996f807 true
check_key 1632207dfef26e97d13b0d0035ea9468fc5a8a89b0990fce77bb143c9d7f3b67 true
check_key fcb3dee3993d1a47630f29410903dd03706bd5e81c5802e6f1b9095cbdb404d3 true
check_key fb527092b9809e3d27d7588c7ef89915a769b99c1e03e7f72bbead9ed837daae false
check_key 902b118d27d40ab9cbd55edd375801ce302cdb59e09c8659a3ea1401918d8bba false
check_key 4d6fbf25ca51e263a700f1abf84f758dde3d11b632e908b3093d64fe2e70ea0a true
check_key f4c3211ec70affc1c9a94a6589460ee8360dad5f8c679152f16994038532e3fc true
check_key c2b3d73ac14956d7fdf12fa92235af1bb09e1566a6a6ffd0025682c750abdd69 false
check_key b7e68c12207d2e2104fb2ca224829b6fccc1c0e2154e8a931e3c837a945f4430 false
check_key 56ca0ca227708f1099bda1463db9559541c8c11ffad7b3d95c717471f25a01bf true
check_key 3eef3a46833e4d851671182a682e344e36bea7211a001f3b8af1093a9c83f1b2 true
check_key bd1f4a4f26cab7c1cbc0e17049b90854d6d28d2d55181e1b5f7a8045fcdfa06e true
check_key 8537b01c87e7c184d9555e8d93363dcd9b60a8acc94cd3e41eb7525fd3e1d35a false
check_key 68ace49179d549bad391d98ab2cc8afee65f98ce14955c3c1b16e850fabec231 true
check_key f9922f8a660e7c3e4f3735a817d18b72f59166a0be2d99795f953cf233a27e24 true
check_key 036b6be3da26e80508d5a5a6a5999a1fe0db1ac4e9ade8f1ea2eaf2ea9b1a70e true
check_key 5e595e886ce16b5ea31f53bcb619f16c8437276618c595739fece6339731feb0 false
check_key 4ee2cebae3476ed2eeb7efef9d20958538b3642f938403302682a04115c0f8ed false
check_key 519eedbd0da8676063ce7d5a605b3fc27afeecded857afa24b894ad248c87b5d false
check_key ce2b627c0accf4a3105796680c37792b30c6337d2d4fea11678282455ff82ff7 false
check_key aa26ed99071a8416215e8e7ded784aa7c2b303aab67e66f7539905d7e922eb4d false
check_key 435ae49c9ca26758aa103bdcca8d51393b1906fe27a61c5245361e554f335ec2 true
check_key 42568af395bd30024f6ccc95205c0e11a6ad1a7ee100f0ec46fcdf0af88e91fb false
check_key 0b4a78d1fde56181445f04ca4780f0725daa9c375b496fab6c037d6b2c2275db true
check_key 2f82d2a3c8ce801e1ad334f9e074a4fbf76ffac4080a7331dc1359c2b4f674a4 false
check_key 24297d8832d733ed052dd102d4c40e813f702006f325644ccf0cb2c31f77953f false
check_key 5231a53f6bea7c75b273bde4a9f673044ed87796f20e0909978f29d98fc8d4f0 true
check_key 94b5affcf78be5cf62765c32a0794bc06b4900e8a47ddba0e166ec20cec05935 true
check_key c14b4d846ea52ffbbb36aa62f059453af3cfae306280dada185d2d385ef8f317 true
check_key cceb34fddf01a6182deb79c6000a998742d4800d23d1d8472e3f43cd61f94508 true
check_key 1faffa33407fba1634d4136cf9447896776c16293b033c6794f06774b514744c true
check_key faaac98f644a2b77fb09ba0ebf5fcddf3ff55f6604c0e9e77f0278063e25113a true
check_key 09e8525b00bea395978279ca979247a76f38f86dce4465eb76c140a7f904c109 true
check_key 2d797fc725e7fb6d3b412694e7386040effe4823cdf01f6ec7edea4bc0e77e20 false
check_key bbb74dabee651a65f46bca472df6a8a749cc4ba5ca35078df5f6d27a772f922a false
check_key 77513ca00f3866607c3eff5c2c011beffa775c0022c5a4e7de1120a27e6687fd true
check_key 10064c14ace2a998fc2843eeeb62884fe3f7ab331ca70613d6a978f44d9868eb false
check_key 026ae84beb5e54c62629a7b63702e85044e38cadfc9a1fcabee6099ba185005c false
check_key aef91536292b7ba34a3e787fb019523c2fa7a0d56fca069cc82ccb6b02a45b14 false
check_key 147bb1a82c623c722540feaad82b7adf4b85c6ec0cbcef3ca52906f3e85617ac true
check_key fc9fb281a0847d58dc9340ef35ef02f7d20671142f12bdd1bfb324ab61d03911 false
check_key b739801b9455ac617ca4a7190e2806669f638d4b2f9288171afb55e1542c8d71 false
check_key 494cc1e2ee997eb1eb051f83c4c89968116714ddf74e460d4fa1c6e7c72e3eb3 true
check_key ed2fbdf2b727ed9284db90ec900a942224787a880bc41d95c4bc4cf136260fd7 true
check_key 02843d3e6fc6835ad03983670a592361a26948eb3e31648d572416a944d4909e true
check_key c14fea556a7e1b6b6c3d4e2e38a4e7e95d834220ff0140d3f7f561a34e460801 true
check_key 5f8f82a35452d0b0d09ffb40a1154641916c31e161ad1a6ab8cfddc2004efdf6 false
check_key 7b93d72429fab07b49956007eba335bb8c5629fbf9e7a601eaa030f196934a56 true
check_key 6a63ed96d2e46c2874beaf82344065d94b1e5c04406997f94caf4ccd97cfbab9 false
check_key c915f409e1e0f776d1f440aa6969cfec97559ef864b07d8c0d7c1163871b4603 true
check_key d06bc33630fc94303c2c369481308f805f5ce53c40141160aa4a1f072967617e false
check_key 1aafb14ca15043c2589bcd32c7c5f29479216a1980e127e9536729faf1c40266 true
check_key 58c115624a20f4b0c152ccd048c54a28a938556863ab8521b154d3165d3649cd false
check_key 9001ba086e8aa8a67e128f36d700cc641071556306db7ec9b8ac12a6256b27b7 false
check_key 898c468541634fb0def11f82c781341fce0def7b15695af4e642e397218c730c true
check_key 47ea6539e65b7b611b0e1ae9ee170adf7c31581ca9f78796d8ebbcc5cd74b712 false
check_key 0c60952a64eeac446652f5d3c136fd36966cf66310c15ee6ab2ecbf981461257 false
check_key 682264c4686dc7736b6e46bdc8ab231239bc5dac3f5cb9681a1e97a527945e8e true
check_key 276006845ca0ea4238b231434e20ad8b8b2a36876effbe1d1e3ffb1f14973397 true
check_key eecd3a49e55e32446f86c045dce123ef6fe2e5c57db1d850644b3c56ec689fce true
check_key a4dced63589118db3d5aebf6b5670e71250f07485ca4bb6dddf9cce3e4c227a1 false
check_key b8ade608ba43d55db7ab481da88b74a9be513fca651c03e04d30cc79f50e0276 false
check_key 0d91de88d007a03fe782f904808b036ff63dec6b73ce080c55231afd4ed261c3 true
check_key 87c59becb52dd16501edadbb0e06b0406d69541c4d46115351e79951a8dd9c28 true
check_key 9aee723be2265171fe10a86d1d3e9cf5a4e46178e859db83f86d1c6db104a247 false
check_key 509d34ae5bf56db011845b8cdf0cc7729ed602fce765e9564cb433b4d4421a43 false
check_key 06e766d9a6640558767c2aab29f73199130bfdc07fd858a73e6ae8e7b7ba23ba false
check_key 801c4fe5ab3e7cf13f7aa2ca3bc57cc8eba587d21f8bc4cd40b1e98db7aec8d9 false
check_key d85ad63aeb7d2faa22e5c9b87cd27f45b01e6d0fdc4c3ddf105584ac0a021465 false
check_key a7ca13051eb2baeb5befa5e236e482e0bb71803ad06a6eae3ae48742393329d2 true
check_key 5a9ba3ec20f116173d933bf5cf35c320ed3751432f3ab453e4a6c51c1d243257 false
check_key a4091add8a6710c03285a422d6e67863a48b818f61c62e989b1e9b2ace240a87 false
check_key bdee0c6442e6808f25bb18e21b19032cf93a55a5f5c6426fba2227a41c748684 true
check_key d4aeb6cdad9667ec3b65c7fbc5bfd1b82bba1939c6bb448a86e40aec42be5f25 false
check_key 73525b30a77f1212f7e339ec11f48c453e476f3669e6e70bebabc2fe9e37c160 true
check_key 45501f2dc4d0a3131f9e0fe37a51c14869ab610abd8bf0158111617924953629 false
check_key 07d0e4c592aa3676adf81cca31a95d50c8c269d995a78cde27b2a9a7a93083a6 false
check_key a1797d6178c18add443d22fdbf45ca5e49ead2f78b70bdf1500f570ee90adca5 true
check_key 0961e82e6e7855d7b7bf96777e14ae729f91c5bbd20f805bd7daac5ccbec4bab false
check_key 57f5ba0ad36e997a4fb585cd2fc81b9cc5418db702c4d1e366639bb432d37c73 true
check_key 82b005be61580856841e042ee8be74ae4ca66bb6733478e81ca1e56213de5c05 false
check_key d7733dcae1874c93e9a2bd46385f720801f913744d60479930dad7d56c767cdc false
check_key b8b8b698609ac3f1bd8f4965151b43b362e6c5e3d1c1feae312c1d43976d59ab true
check_key 4bba7815a9a1b86a5b80b17ac0b514e2faa7a24024f269b330e5b7032ae8c04e true
check_key 0f70da8f8266b58acda259935ef1a947c923f8698622c5503520ff31162e877b false
check_key 233eaa3db80f314c6c895d1328a658a9175158fa2483ed216670c288a04b27bc false
check_key a889f124fabfd7a1e2d176f485be0cbd8b3eeaafeee4f40e99e2a56befb665be true
check_key 2b7b8abc198b11cf7efa21bc63ec436f790fe1f9b8c044440f183ab291af61d6 true
check_key 2491804714f7938cf501fb2adf07597b4899b919cabbaab49518b8f8767fdc6a true
check_key 52744a54fcb00dc930a5d7c2bc866cbfc1e75dd38b38021fd792bb0ca9f43164 true
check_key e42cbf70b81ba318419104dffbb0cdc3b7e7d4698e422206b753a4e2e6fc69bb false
check_key 2faff73e4fed62965f3dbf2e6446b5fea0364666cc8c9450b6ed63bbb6f5f0e7 true
check_key 8b963928d75be661c3c18ddd4f4d1f37ebc095ce1edc13fe8b23784c8f416dfd false
check_key b1162f952808434e4d2562ffda98bd311613d655d8cf85dc86e0a6c59f7158bc true
check_key 5a69adcd9e4f5b0020467e968d85877cb3aa04fa86088d4499b57ca65a665836 true
check_key 61ab47da432c829d0bc9d4fdb59520b135428eec665ad509678188b81c7adf49 false
check_key 154bb547f22f65a87c0c3f56294f5791d04a3c14c8125d256aeed8ec54c4a06e true
check_key 0a78197861c30fd3547b5f2eabd96d3ac22ac0632f03b7afd9d5d2bfc2db352f true
check_key 8bdeadcca1f1f8a4a67b01ed2f10ef31aba7b034e8d1df3a69fe9aebf32454e0 false
check_key f4b17dfca559be7d5cea500ac01e834624fed9befae3af746b39073d5f63190d true
check_key 622c52821e16ddc63b58f3ec2b959fe8c6ea6b1a596d9a58fd81178963f41c01 true
check_key 07bedd5d55c937ef5e23a56c6e58f31adb91224d985285d7fef39ede3a9efb17 false
check_key 5179bf3b7458648e57dc20f003c6bbfd55e8cd7c0a6e90df6ef8e8183b46f99d true
check_key 683c80c3f304f10fdd53a84813b5c25b1627ebd14eb29b258b41cd14396ef41f true
check_key c266244ed597c438170875fe7874f81258a830105ca1108131e6b8fea95eb8ba true
check_key 0c1cdc693df29c2d1e66b2ce3747e34a30287d5eb6c302495634ec856593fe8e true
check_key 28950f508f6a0d4c20ab5e4d55b80565a6a539092e72b7eb0ed9fa5017ecef88 false
check_key 8328a2a5fcfc4433b1c283539a8943e6eb8cc16c59f29dedc3af2c77cfd56f25 true
check_key 5d0f82319676d4d3636ff5dc2a38ea5ec8aeaac4835fdcab983ab35d76b7967b false
check_key cafcc75e94a014115f25c23aaae86e67352f928f468d4312b92240ff0f3a4481 false
check_key 3e5fdd8072574218f389d018e959669e8ca4ef20b114ea7dce7bfb32339f9f42 true
check_key 591763e3390a78ccb529ceea3d3a97165878b179ad2edaa166fd3c78ec69d391 true
check_key 7a0a196935bf79dc2b1c3050e8f2bf0665f7773fc07511b828ec1c4b1451d317 false
check_key 9cf0c034162131fbaa94a608f58546d0acbcc2e67b62a0b2be2ce75fc8c25b9a false
check_key e3840846e3d32644d45654b96def09a5d6968caca9048c13fcaab7ae8851c316 false
check_key a4e330253739af588d70fbda23543f6df7d76d894a486d169e5fedf7ed32d2e2 false
check_key cfb41db7091223865f7ecbdda92b9a6fb08887827831451de5bcb3165395d95d true
check_key 3d10bd023cef8ae30229fdbfa7446a3c218423d00f330857ff6adde080749015 false
check_key 4403b53b8d4112bb1727bb8b5fd63d1f79f107705ffe17867704e70a61875328 false
check_key 121ef0813a9f76b7a9c045058557c5072de6a102f06a9b103ead6af079420c29 true
check_key 386204cf473caf3854351dda55844a41162eb9ce4740e1e31cfef037b41bc56e false
check_key eb5872300dc658161df469364283e4658f37f6a1349976f8973bd6b5d1d57a39 true
check_key b8f32188f0fc62eeb38a561ff7b7f3c94440e6d366a05ef7636958bc97834d02 false
check_key a817f129a8292df79eef8531736fdebb2e985304653e7ef286574d0703b40fb4 false
check_key 2c06595bc103447b9c20a71cd358c704cb43b0b34c23fb768e6730ac9494f39e true
check_key dd84bc4c366ced4f65c50c26beb8a9bc26c88b7d4a77effbb0f7af1b28e25734 false
check_key 76b4d33810eed637f90d49a530ac5415df97cafdac6f17eda1ba7eb9a14e5886 true
check_key 926ce5161c4c92d90ec4efc58e5f449a2c385766c42d2e60af16b7362097aef5 false
check_key 20c661f1e95e94a745eb9ec7a4fa719eff2f64052968e448d4734f90952aefee false
check_key 671b50abbd119c756010416e15fcdcc9a8e92eed0f67cbca240c3a9154db55c0 false
check_key df7aeee8458433e5c68253b8ef006a1c74ce3aef8951056f1fa918a8eb855213 false
check_key 70c81a38b92849cf547e3d5a6570d78e5228d4eaf9c8fdd15959edc9eb750daf false
check_key 55a512100b72d4ae0cfc16c75566fcaa3a7bb9116840db1559c71fd0e961cc36 false
check_key dbfbec4d0d2433a794ad40dc0aea965b6582875805c9a7351b47377403296acd true
check_key 0a7fe09eb9342214f98b38964f72ae3c787c19e5d7e256af9216f108f88b00a3 true
check_key a82e54681475f53ced9730ee9e3a607e341014d9403f5a42f3dbdbe8fc52e842 true
check_key 4d1f90059f7895a3f89abf16162e8d69b399c417f515ccb43b83144bbe8105f6 true
check_key 94e5c5b8486b1f2ff4e98ddf3b9295787eb252ba9b408ca4d7724595861da834 false
check_key d16e3e8dfa6d33d1d2db21c651006ccddbf4ce2e556594de5a22ae433e774ae6 false
check_key a1b203ec5e36098a3af08d6077068fec57eab3a754cbb5f8192983f37191c2df false
check_key 5378bb3ec8b4e49849bd7477356ed86f40757dd1ea3cee1e5183c7e7be4c3406 false
check_key 541a4162edeb57130295441dc1cb604072d7323b6c7dffa02ea5e4fed1d2ee9e true
check_key d8e86e189edcc4b5c262c26004691edd7bd909090997f886b00ed4b6af64d547 false
check_key 18a8731d1983d1df2ce2703b4c85e7357b6356634ac1412e6c2ac33ad35f8364 false
check_key b21212eac1eb11e811022514c5041233c4a07083a5b20acd7d632a938dc627de true
check_key 50efcfac1a55e9829d89334513d6d921abeb237594174015d154512054e4f9d1 true
check_key 9c44e8bcba31ddb4e67808422e42062540742ebd73439da0ba7837bf26649ec4 true
check_key b068a4f90d5bd78fd350daa129de35e5297b0ad6be9c85c7a6f129e3760a1482 false
check_key e9df93932f0096fcf2055564457c6dc685051673a4a6cd87779924be5c4abead true
check_key eddab2fc52dac8ed12914d1eb5b0da9978662c4d35b388d64ddf8f065606acaf true
check_key 54d3e6b3f2143d9083b4c98e4c22d98f99d274228050b2dc11695bf86631e89f true
check_key 6da1d5ef1827de8bbf886623561b058032e196d17f983cbc52199b31b2acc75b true
check_key e2a2df18e2235ebd743c9714e334f415d4ca4baf7ad1b335fb45021353d5117f true
check_key f34cb7d6e861c8bfe6e15ac19de68e74ccc9b345a7b751a10a5c7f85a99dfeb6 false
check_key f36e2f5967eb56244f9e4981a831f4d19c805e31983662641fe384e68176604a true
check_key c7e2dc9e8aa6f9c23d379e0f5e3057a69b931b886bbb74ded9f660c06d457463 true
check_key b97324364941e06f2ab4f5153a368f9b07c524a89e246720099042ad9e8c1c5b false
check_key eff75c70d425f5bba0eef426e116a4697e54feefac870660d9cf24c685078d75 false
check_key 161f3cd1a5873788755437e399136bcbf51ff5534700b3a8064f822995a15d24 false
check_key 63d6d3d2c21e88b06c9ff856809572024d86c85d85d6d62a52105c0672d92e66 false
check_key 1dc19b610b293de602f43dca6c204ce304702e6dc15d2a9337da55961bd26834 false
check_key 28a16d02405f509e1cfef5236c0c5f73c3bcadcd23c8eff377253941f82769db true
check_key 682d9cc3b65d149b8c2e54d6e20101e12b7cf96be90c9458e7a69699ec0c8ed7 false
check_key 0000000000000000000000000000000000000000000000000000000000000000 true
check_key 0000000000000000000000000000000000000000000000000000000000000080 true
check_key 0100000000000000000000000000000000000000000000000000000000000000 true
check_key 0100000000000000000000000000000000000000000000000000000000000080 false
check_key 0200000000000000000000000000000000000000000000000000000000000000 false
check_key 0200000000000000000000000000000000000000000000000000000000000080 false
check_key 0300000000000000000000000000000000000000000000000000000000000000 true
check_key 0300000000000000000000000000000000000000000000000000000000000080 true
check_key 0400000000000000000000000000000000000000000000000000000000000000 true
check_key 0400000000000000000000000000000000000000000000000000000000000080 true
check_key 0500000000000000000000000000000000000000000000000000000000000000 true
check_key 0500000000000000000000000000000000000000000000000000000000000080 true
check_key 0600000000000000000000000000000000000000000000000000000000000000 true
check_key 0600000000000000000000000000000000000000000000000000000000000080 true
check_key 0700000000000000000000000000000000000000000000000000000000000000 false
check_key 0700000000000000000000000000000000000000000000000000000000000080 false
check_key 0800000000000000000000000000000000000000000000000000000000000000 false
check_key 0800000000000000000000000000000000000000000000000000000000000080 false
check_key 0900000000000000000000000000000000000000000000000000000000000000 true
check_key 0900000000000000000000000000000000000000000000000000000000000080 true
check_key 0a00000000000000000000000000000000000000000000000000000000000000 true
check_key 0a00000000000000000000000000000000000000000000000000000000000080 true
check_key 0b00000000000000000000000000000000000000000000000000000000000000 false
check_key 0b00000000000000000000000000000000000000000000000000000000000080 false
check_key 0c00000000000000000000000000000000000000000000000000000000000000 false
check_key 0c00000000000000000000000000000000000000000000000000000000000080 false
check_key 0d00000000000000000000000000000000000000000000000000000000000000 false
check_key 0d00000000000000000000000000000000000000000000000000000000000080 false
check_key 0e00000000000000000000000000000000000000000000000000000000000000 true
check_key 0e00000000000000000000000000000000000000000000000000000000000080 true
check_key 0f00000000000000000000000000000000000000000000000000000000000000 true
check_key 0f00000000000000000000000000000000000000000000000000000000000080 true
check_key 1000000000000000000000000000000000000000000000000000000000000000 true
check_key 1000000000000000000000000000000000000000000000000000000000000080 true
check_key 1100000000000000000000000000000000000000000000000000000000000000 false
check_key 1100000000000000000000000000000000000000000000000000000000000080 false
check_key 1200000000000000000000000000000000000000000000000000000000000000 true
check_key 1200000000000000000000000000000000000000000000000000000000000080 true
check_key 1300000000000000000000000000000000000000000000000000000000000000 true
check_key 1300000000000000000000000000000000000000000000000000000000000080 true
check_key daffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key daffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key dbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key dcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ddffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key ddffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key deffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key deffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key dfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key dfffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key e6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key e7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key e9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key e9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key eaffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key eaffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff true
check_key ebffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key ebffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ecffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f true
check_key ecffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key edffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key edffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key eeffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key eeffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key efffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key efffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f0ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f1ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f2ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f3ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f4ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f5ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f6ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f7ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f8ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key f9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key f9ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key faffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key faffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fbffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fcffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key fdffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key fdffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key feffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
check_key ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff7f false
check_key ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff false
hash_to_ec da66e9ba613919dec28ef367a125bb310d6d83fb9052e71034164b6dc4f392d0 52b3f38753b4e13b74624862e253072cf12f745d43fcfafbe8c217701a6e5875
hash_to_ec a7fbdeeccb597c2d5fdaf2ea2e10cbfcd26b5740903e7f6d46bcbf9a90384fc6 f055ba2d0d9828ce2e203d9896bfda494d7830e7e3a27fa27d5eaa825a79a19c
hash_to_ec ed6e6579368caba2cc4851672972e949c0ee586fee4d6d6a9476d4a908f64070 da3ceda9a2ef6316bf9272566e6dffd785ac71f57855c0202f422bbb86af4ec0
hash_to_ec 9ae78e5620f1c4e6b29d03da006869465b3b16dae87ab0a51f4e1b74bc8aa48b 72d8720da66f797f55fbb7fa538af0b4a4f5930c8289c991472c37dc5ec16853
hash_to_ec ab49eb4834d24db7f479753217b763f70604ecb79ed37e6c788528720f424e5b 45914ba926a1a22c8146459c7f050a51ef5f560f5b74bae436b93a379866e6b8
hash_to_ec 5b79158ef2341180b8327b976efddbf364620b7e88d2e0707fa56f3b902c34b3 eac991dcbba39cb3bd166906ab48e2c3c3f4cd289a05e1c188486d348ede7c2e
hash_to_ec f21daa7896c81d3a7a2e9df721035d3c3902fe546c9d739d0c334ed894fb1d21 a6bedc5ffcc867d0c13a88a03360c8c83a9e4ddf339851bd3768c53a124378ec
hash_to_ec 3dae79aaca1abe6aecea7b0d38646c6b013d40053c7cdde2bed094497d925d2b 1a442546a35860a4ab697a36b158ded8e001bbfe20aef1c63e2840e87485c613
hash_to_ec 3d219463a55c24ac6f55706a6e46ade3fcd1edc87bade7b967129372036aca63 b252922ab64e32968735b8ade861445aa8dc02b763bd249bff121d10829f7c52
hash_to_ec bc5db69aced2b3197398eaf7cf60fd782379874b5ca27cb21bd23692c3c885cc ae072a43f78a0f29dc9822ae5e70865bbd151236a6d7fe4ae3e8f8961e19b0e5
hash_to_ec 98a6ed760b225976f8ada0579540e35da643089656695b5d0b8c7265a37e2342 6a99dbfa8ead6228910498cc3ff3fb18cb8627c5735e4b8657da846c16d2dcad
hash_to_ec e9cdc9fd9425a4a2389a5d60f76a2d839f0afbf66330f079a88fe23d73eae930 8aa518d091928668f3ca40e71e14b2698f6cae097b8120d7f6ae9afba8fd3d60
hash_to_ec a50c026c0af2f9f9884c2e9b8464724ac83bef546fec2c86b7de0880980d24fb b07433f8df39da2453a1e13fd413123a158feae602d822b724d42ef6c8e443bf
hash_to_ec bf180e20d160fa23ccfa6993febe22b920160efc5a9614245f1a3a360076e87a 9d6454ff69779ce978ea5fb3be88576dc8feaedf151e93b70065f92505f2e800
hash_to_ec b2b64dfeb1d58c6afbf5a56d8c0c42012175ebb4b7df30f26a67b66be8c34614 0523b22e7f220c939b604a15780abc5816709b91b81d9ee1541d44bd2586bbd8
hash_to_ec 463fc877f4279740020d10652c950f088ebdebeae34aa7a366c92c9c8773f63a daa5fa72e70c4d3af407b8f2f3364708029b2d4863bbdde54bd67bd08db0fcad
hash_to_ec 721842f3809982e7b96a806ae1f162d98ae6911d476307ad1e4f24522fd26f55 4397c300a8cfcb42e7cc310bc975dc975ec2d191eaa7e0462998eb2830c34126
hash_to_ec 384da8d9b83972af8cbefc2da5efc744037c8ef40efa4b3bacc3238a6232963d 3c80f107e6868f73ef600ab9229a3f4bbe24f4adce52e6ab3a66d5d510e0670d
hash_to_ec e26f8adef5b6fe5bb01466bff0455ca23fda07e200133697b3b6430ca3332bde e262a58bcc1f8baf1980e00d5d40ba00803690174d14fb4c0f608429ce3df773
hash_to_ec 6e275b4ea4f085a5d3151aa08cf16a8c60b078e70be7ce5dac75b5d7b0eebe7c cb21b5a7744b4fcdc92ead4be0b04bcb9145e7bb4b06eff3bb2f0fe429b85108
hash_to_ec a0dde4561ad9daa796d9cd8a3c34fd41687cee76d128bf2e2252466e3ef3b068 79a2eb06bb7647f5d0aae5da7cf2e2b2d2ce890f25f2b1f81bfc5fef8c87a7d3
hash_to_ec dbaf63830e037b4c329969d1d85e58cb6c4f56014fd08eb38219bd20031ae27c 079c93ae27cd98075a487fd3f7457ad2fb57cdf12ec8651fedd944d765d07549
hash_to_ec 1e87ba8a9acf96948bc199ae55c83ab3277be152c6d0b1d68a07955768d81171 5c6339f834116791f9ea22fcc3970346aaeddacf13fbd0a7d4005fbd469492ca
hash_to_ec 5a544088e63ddf5b9f444ed75a75bc9315c4c50439522f06b4823ecaf5e8a08d e95ca0730d57c6469be3a0f3c94382f8490257e2e546de86c650bdbc6482eaee
hash_to_ec e4e06d92ebb036a5e4bb547dbaa43fd70db3929eef2702649455c86d7e59aa46 e26210ff8ee28e24ef2613df40aa8a874b5e3c1d07ae14acc59220615aa334dc
hash_to_ec 5793b8b32dcc0f204501647f2976493c4f8f1fa5132315226f99f29a5a6fdfce 656e390086906d99852c9696e831f62cb56fc8f85f9a5c936c327f23c7faf4fe
hash_to_ec 84f56fa4d7f12e0efd48b1f7c81c15d6e3843ebb419f4a27ec97028d4f9da19e 0cbd4f0cd288e1e071cce800877de6aef97b63fff867424a4f2b2bab25602608
hash_to_ec 242683ddf0a9fc55f6585de3aa64ea17c9c544896ff7677cd82c98f833bdf2ca 38c36d52314549213df7c7201ab7749a4724cbea92812f583bb48cabc20816ad
hash_to_ec a93ee320dc030aa382168c2eb6d75fce6e5a63a81f15632d514c6de8a7cfa5ee bd0a2facaa95bc95215a94be21996e46f789ee8beb38e75a1173b75fc686c505
hash_to_ec e36136601d84475d25c3f14efe030363d646658937a8a8a19a812d5e6deb5944 2fb93d78fae299c9f6b22346acfb829796ee7a47ec71db5456d8201bec6c35a3
hash_to_ec ba4b67d3d387c66baa4a32ec8b1db7681087e85076e71bab10036388c3aeb011 cc01329ce56f963bf444a124751c45b2c779ccb6dea16ca05251baca246b5401
hash_to_ec 3fbc91896a2585154d6f7094c5ab9c487e29a27951c226eec1235f618e44946b 7d983acbb901bf5497d0708392e5e742ec8c8036cbb0d03403e9929da8cc85a7
hash_to_ec a2da289fed650e9901f69a5f33535eb47c6bd07798633cbf6c00ce3172df76ac dca8a4d30ec2d657fefd0dba9c1c5fd45a79f665048b3cf72ac2c3b7363da1ac
hash_to_ec 99025d2d493f768e273ed66cacd3a5b392761e6bd158ca09c8fba84631ea1534 7ef5af79ab155ab7e1770a47fcd7f194aca43d79ec6e303c7ce18c6a20279b04
hash_to_ec 3cf1d01d0b70fb31f2a2f979c1bae812381430f474247d0b018167f2a2cd9a9f 7c53d799ec938a21bb305a6b5ca0a7a355fa9a68b01d289c4f22b36ce3738f95
hash_to_ec 639c421b49636b2a1f8416c5d6e64425fe51e3b52584c265502379189895668e 0b47216ae5e6e03667143a6cf8894d9d73e3152c64fb455631d81a424410e871
hash_to_ec 4ccf2c973348b7cc4b14f846f9bfcdcb959b7429accf6dede96248946841d990 7fd41f5b97ba42ed03947dd953f8e69770c92cc34b16236edad7ab3c78cbbb2e
hash_to_ec f76ae09fff537f8919fd1a43ff9b8922b6a77e9e30791c82cf2c4b8acb51363e 8e2c6bf86461ad2c230c496ee3896da33c11cc020fd4c70faa3645b329049234
hash_to_ec 98932da7450f15db6c1eef78359904915c31c2aa7572366ec8855180edb81e3a 86180adddfac0b4d1fb41d58e98445dde1da605b380d392e9386bd445f1d821c
hash_to_ec ab26a1660988ec7aba91fc01f7aa9a157bbc12927f5b197062b922a5c0c7f8dd 2c44a43eda0d0aad055f18333e761f2f2ec11c585ec7339081c19266af918e4f
hash_to_ec 4465d0c1b4930cc718252efd87d11d04162d2a321b9b850c4a19a6acdfca24f4 b03806287d804188a4d679a0ecee66f399d7bdc3bd1494f9b2b0772bbb5a034f
hash_to_ec 0f2a7867864ed00e5c40082df0a0b031c89fa5f978d9beb2fde75153f51cfb75 5c471e1b118ef9d76c93aec70e0578f46e8db1d55affd447c1f64c0ad9a5caa5
hash_to_ec 5c2808c07d8175f332cae050ce13bec4254870d76abff68faf34b0b8d3ad5000 eeff1d9a5aa428b7aecc575e63dde17294072eb246568493e1ed88ce5c95b779
hash_to_ec 36300a21601fad00d00da45e27b36c11923b857f97e50303bd01f21998eaef95 b33b077871e6f5dad8ff6bc621c1b6dedcf700777d996c8c02d73f7297108b7e
hash_to_ec 9e1afb76d6c480816d2cedd7f2ab08a36c309efaa3764dcdb51bad6049683805 4cd96ba7b543b1a224b8670bf20b3733e3910711d32456d3e58e920215788adf
hash_to_ec 685f152704664495459b76c81567a4b571e8b307dd0e3c9b08ee95651a006047 80dd6b637580cb3be76025867f1525852b65a7a66066993fda3af7eb187dc1a5
hash_to_ec 0b216444391a1163c14f7b27f9135e9747978c0e426dce1fa65c657f3e9146be 021259695a6854a4a03e8c74d09ab9630a401bfca06172a733fe122f01af90b4
hash_to_ec cfcb35e98f71226c3558eaa9cf620db5ae207ece081ab13ddea4b1f122850a5a 46763d2742e2cdffe80bb3d056f4d3a1565aa83f19aab0a1f89e54ad81ae0814
hash_to_ec 07e7292da8cdcdb58ee30c3fa16f1d609e9b3b1110dd6fa9b2cc18f4103a1c12 fe949ca251ac66f13a8925ae624a09cdbf6696d3c110442338d37700536e8ec7
hash_to_ec 813bc7e3749e658190cf2a4e358bc07a6671f262e2c4eef9f44c66066a72e6a7 6b92fbda984bd0e6f4af7a5e04c2b66b6f0f9d197a9694362a8556e5b7439f8a
hash_to_ec 89c50a1e5497156e0fae20d99f5e33e330362b962c9ca00eaf084fe91aaec71d ef36cb75eb95fb761a8fa8c376e9c4447bcd61421250f7a711bd289e6ed78a9b
hash_to_ec d9bd9ff2dd807eb25de7c5de865dbc43cce2466389cedbc92b90aab0eb014f81 30104771ff961cd1861cd053689feab888c57b8a4a2e3989646ea7dea40f3c04
hash_to_ec b8c837501b6ca3e118db9848717c847c062bf0ebeca5a7c211726c1426878af5 19a1e204b4a32ce9cccf5d96a541eb76a78789dceaf4fe69964e58ff96c29b63
hash_to_ec 84376c5350a42c07ac9f96e8d5c35a8c7f62c639a1834b09e4331b5962ecace8 ba1e4437d5048bd1294eadc502092eafc470b99fde82649e84a52225e68e88f2
hash_to_ec a3345e4a4cfc369bf0e7d11f49aed0d2a6ded00e3ff8c7605db9a919cf730640 0d318705c16e943c0fdcde134aaf6e4ccce9f3d9161d001861656fc7ea77a0b1
hash_to_ec 3c994dfb9c71e4f401e65fd552dc9f49885f88b8b3588e24e1d2e9b8870ffab1 984157de5d7c2c4b43b2bffea171809165d7bb442baea88e83b27f839ebdb939
hash_to_ec 153674c1c1b18a646f564af77c5bd7de452dc3f3e1e2326bfe9c57745b69ec5c e9a4a1e225ae472d1b3168c99f8ba1943ad2ed84ef29598f3f96314f22db9ef2
hash_to_ec 2d46a705d4fe5d8b5a1f4e9ef46d9e06467450eb357b6d39faa000995314e871 b9d1aec540bf6a9c0e1b325ab87d4fbe66b1df48986dde3cb62e66e136eba107
hash_to_ec 6764c3767f16ec8faecc62f9f76735f76b11d7556aeb61066aeaeaad4fc9042f 3a5c68fb94b023488fb5940e07d1005e7c18328e7a84f673ccd536c07560a57b
hash_to_ec c99c6ee5804d4b13a445bc03eaa07a6ef5bcb2fff0f71678dd3bd66b822f8be8 a9e1ce91deed4136e6e53e143d1c0af106abde9d77c066c78ebbf5d227f9dde0
hash_to_ec 3009182e1efac085c7eba24a7d9ef28ace98ebafa72211e73a41c935c37e6768 e55431a4c89d38bd95f8092cdf6e44d164ad5855677aba17ec262abc8c217c86
hash_to_ec e7153acd114a7636a207be0b67fa86fee56dd318f2808a81e35dd13d4251b2d0 ff2b98d257e4d4ff7379e8871441ca7d26e73f78f3f5afcf421d78c9799ba677
hash_to_ec 6378586744b721c5003976e3e18351c49cd28154c821bc45338892e5efedd197 3d765fb7bb4e165a3fa6ea00b5b5e22250f3861f0db0099626d9a9020443dda2
hash_to_ec 5be49aba389b7e3ad6def3ba3c7dbec0a11a3c36fc9d441130ef370b8a8d29c2 2d61faf38062dc98ae1aaafec05e90a925c9769df5b8b8f7090d9e91b2a11151
hash_to_ec f7bc382178d38e1b9a1a995bd8347c1283d8a2e8d150379faa53fd125e903d2b 544c815da65c3c5994b0ac7d6455578d03a2bc7cf558b788bcdb3430e231635a
hash_to_ec c28b5c4b6662eebb3ec358600644849ebeb59d827ed589c161d900ca18715fa8 a2d64db3c0e0353c257aadf9abc12ac779654d364f348b9f8e429aa7571203db
hash_to_ec 3a4792e5df9b2416a785739b9cf4e0d68aef600fa756a399cc949dd1fff5033a 4b54591bd79c30640b700dfb7f20158f692f467b6af70bd8a4e739c14a66c86a
hash_to_ec 002e70f25e1ceaf35cc14b2c6975a4c777b284a695550541e6f5424b962c19f5 73987e9342e338eb57a7a9e03bd33144db37c1091e952a10bd243c5bb295c18a
hash_to_ec 7eb671319f212c9cae0975571b6af109124724ba182937a9066546c92bdeff0c 49b46da3be0df1d141d2a323d5af82202afa2947a95b9f3df47722337f0d5798
hash_to_ec ca093712559c8edd5c51689e2ddcb8641c2960e5d9c8b03a44926bb798a0c8dc b9ef9cf0f8e4a3d123db565afafb1102338bfb75498444ac0a25c5ed70d615da
hash_to_ec cfea0a08a72777ff3aa7be0d8934587fa4127cd49a1a938232815dc3fd8b23ac b4de604b3d712f1ef578195fb0e53c865d41e2dfe425202c6cfe6f10e4404eb5
hash_to_ec aa0122ae258d6db21a26a31c0c92d8a0e3fdb46594aed41d561e069687dedcd6 5247eaec346de1c6cddf0ab04c12cd1d85cdb6d3a2fba2a5f9a5fe461abef5eb
hash_to_ec b3941734f4d3ba34ccaf03c4c737ac5a1e036eb74309300ce44d73aca24fef08 535938985c936e3780c61fe29a4121d6cb89a05080b6c2147031ea0c2b5b9829
hash_to_ec 8c2ee1041a2743b30dcbf413cc9232099b9268f82a5a21a09b63e7aff750882f 6ad0d4b3a65b522dfad0e9ac814b1fb939bc4910bd780943c72f57f362754cca
hash_to_ec 4b6829a2a2d46c8f0d0c23db0f735fcf976524bf39ccb623b919dd3b28ad5193 2e0097d7f92993bc45ba06baf4ca63d64899d86760adc4eb5eeefb4a78561050
hash_to_ec 9c1407cb6bba11e7b4c1d274d772f074f410d6fe9a1ee7a22cddf379257877d9 692261c7d6a9a7031c67d033f6d82a68ef3c27bd51a5666e55972238769821cd
hash_to_ec 638c42e4997abf8a4a9bffd040e31bd695d590cde8afbd7efd16ffdbae63bf66 793024c8ce196a2419f761dde8734734af6bd9eb772b30cc78f2cb89598dce97
hash_to_ec 1fb60d79600de151a1cf8a2334deb5828632cbd91cb5b3d45ae06e08187ae23d ff2542cde5bc2562e69471a31cfc3d0c26e2f6ccc1891a633b07a3968e42521c
hash_to_ec d2fdbbae4e38a1b734151c3df52540feb2d3ff74edfef2f740e49a5c363406ee 344c83ba6ff4e38b257077623d298d2f2b52002645021241bc9389f81b29ad12
hash_to_ec 836c27a6ddfe1a24aba3d6022dff6dfe970f142d8b4ac6afb8efcba5a051942f b8af481d33726b3f875268282d621e4c63f891a09f920b8f2f49080f3a507387
hash_to_ec 46281153ddcdf2e79d459693b6fe318c1969538dd59a750b790bfff6e9481abf 8eaf534919ab6573ba4e0fbde0e370ae01eae0763335177aa429f61c4295e9d4
hash_to_ec d57b789e050bf3db462b79a997dac76aa048d4be05f133c66edee56afd3dbe66 0c5a294cb2cbb6d9d1c0a1d57d938278f674867f612ed89dcbe4533449f1a131
hash_to_ec 548d524d03ac22da18ff4201ce8dbee83ad9af54ee4e26791d26ed2ab8f9bfc7 c6609d9e7d9fd982dec8a166ff4fb6f7d195b413aad2df85f73d555349134f3b
hash_to_ec cc920690422e307357f573b87a6e0e65f432c6ec12a604eb718b66ba18897a56 6f11c466d1c72fccd81e51d9bda03b6e8d6a395e1d931b2a84e392dc9a3efa18
hash_to_ec c7fb8a51f5fcd8824fc0875d4eb57ab4917cb97090a6e2288f852f2bb449edd9 45543fea6eed461016e48598b521f18ff70178afea18032b188deea3e56052fc
hash_to_ec c681bb1b829e24b1c52cb890036b89f0029d261c6a15e5b2c684ee7dfe91e746 263006fe2c6b08f1ab29cdf442472c298e2faf225bbf5c32399d3745cd3904bd
hash_to_ec e06411c542312fdd305e17e46be14c63bab5836dc8751da06164b1ae22d4e20f 901871be7a7ff5aecade2acff869846f3c50de69307ac155f2aa3a74d5472ef2
hash_to_ec 9c725a2acb80fa712f9781da510e5163b1b30f4e1c064c26b5185e537f0614ea 02420d49257846eb39fddd196d3171679f6be21d9adac667786b65a6e90f57b1
hash_to_ec 22792772820feafa85c5cb3fa8f876105251bef08617d389619697f47dff54f2 a3ad444e7811693687f3925e7c315ae55d08d9f4b0a29876bc2a891ab941c1c3
hash_to_ec 0587b790121395d0f4f39093d10b4817f58a1e80621a24eea22b3c127d6ac5a2 86c417c695c64c7becaad0d59ddbb2bca4cb2b409a21253d680aac1a08617095
hash_to_ec fa0b5f28399bef0cd87bfe6b8a2b69e9c5506fb4bacd22deba8049615a5db526 ede0ea240036ff75d075258a053f3ce5d6f77925d358dbe33c06509fc9b12111
hash_to_ec 62a3274fc0bed109d5057b865c2ba6b6a5a417cb90a3425674102fcd457ede2d ff7e46751bb4dcd1e800a8feab7cf6771f42dc0cfed7084c23b8a5d255a6f34e
hash_to_ec a6fcd4aecaaaf281563b9b7cd6fbc7b1829654f644f4165942669a2ef632b2bf 28f136be0eb957a5b36f8ec294399c9f73ad3a3c9bb953ad191758ced554a233
hash_to_ec 01baa4c06d6676c9b286cda76ed949fd80a408b3309500ba84a5bb7e3dce58e2 a943d1afa2efce284740e7db21ea02db70b124808be2ff80cbf9b9cb96c7b73e
hash_to_ec dd9aff9c006ba514cef8fae665657bc9813fe2715467cf479643ea4c4e365d6d 68de2f7d49de4004286ce0989a06a686b15d0f463a02ffd448a18914e1ddf713
hash_to_ec 3df3513d5e539161761ce7992ab9935f649bc934bed0da3c5e1095344b733bb9 e9c2dd747d7b2482474325943cd850102b8093164678362c7621993a790e2a8a
hash_to_ec 7680cfb244dc8ef37c671fff176be1a3dad00e5d283f93145d0cbee74cca2df4 a0fd8c3cca16a130eaa5864cbe8152b7adfbf09e8cf72244b2fc8364c3b20bf4
hash_to_ec 8a547c38bd6b219ea0d612d4a155eba9c56034a1405dcf4b608de787f37e0fd8 76bf0dc40fd0a5508c5e091d8bb7eccfa28b331e72c6a0d4ac0e05a3d651850b
hash_to_ec dd93901621f58465e9791012afa76908f1e80ad80e52b809dc7fc32bb004f0a8 09a0b7ecfe8058b1e9ee01c9b523826867ca97a32efad29ac8ceebca67a4ea00
hash_to_ec b643010220f1f4ee6c7565f6e1b3dc84c18274ede363ac36b6af3707e69a1542 233c9ff8de59e5f96c2f91892a71d9d93fa7316319f30d1615f10ac1e01f9285
hash_to_ec c2637b2299dfc1fd7e953e39a582bafd19e6e7fff3642978eb092b900dbfea80 339587ba1c05e2cba44196a4be1fd218b772199e2c61c3c0ff21dcd54b570c43
hash_to_ec 1f36d3a7e7c468eb000937de138809e381ad2e23414cbbaac49b7f33533ed486 7e5b0a96051c77237a027a79764c2763487af88121c7774645e97827fb744888
hash_to_ec 8c142a55f60b2edbe03335b7f90aa2bd63e567048a65d61c70cb28779c5200af d3d6d5563b3d81c8c91cf9806bb13b2850fb7c162c610fd2f5b83c464add8182
hash_to_ec 99e7b98293c9de1f81aff1376485a990014b8b176521b2a68cdbde6300190398 119cbc01a1d9b9fb4759031d3a70685aebea0f01bc5ee082ce824265fd21b3b4
hash_to_ec 9753bd38be072b51490290be6207ca4545e3541bdf194e0850ae0a9f9e64b8ba 1ad3aa759863153606fa6570f0e1290baded4c8c1f2ba0f67c1911bfc8ccd7a0
hash_to_ec 322703864ceee19b7f17cec2a822f310f0c4da3ff98b0be61a6fd30ac4db649c 89d9e7a5947e1cde874e4030de278070aae363063cd3592ce5411821474f0816
hash_to_ec c1acd01e1e535fad273a8b757d981470f43dd7d95af732901fbba16b6e245761 57e80445248111150da5e63c706b4abbf3eef2cc508bd0347ff6b81e8c59f5bc
hash_to_ec 492473559f181bbe78f60215bc6d3a5168435ea2fc0a508372d6f5ca126e9767 df3965f137cf6f60c56ebd7c8f246281fd6dc92ce23a37e9f846f8452c884e01
hash_to_ec afa9d6e0e2fb972ee806beb450c2c0165e58234b0676a4ec0ca19b6e710d7c35 669a57e69dd2845a5e50ed8e5d8423ac9ae792a43c7738554d6c5e765a7b088a
hash_to_ec 094de050bdadef3b7dbaeeca29381c667e63e71220970149d97b95db8f4db61b 0cf5d03530c5e97850d0964c6a394de9cde1e8e498f8c0e173c518242c07f99a
hash_to_ec 2ce583724bc699ad800b33176a1d983512fe3cb3afa65d99224b23dae223efb7 e1548fd563c75ae5b5366dbab4cb73c54e7d5e087c9e5453125ff8fbe6c83a5c
hash_to_ec 8064974b976ff5ef6adaade6196ab69cda6970cd74f7f5899181805f691ad970 98ae63c47331a4ac433cb2f17230c525982d89d21e2838515a36ec5744ec2d15
hash_to_ec 384911047de609c6ae8438c745897357989363885cef2381a8a00a090cf04a58 4692ec3a0a03263620841c108538d584322fdd24d221a74bf1e1f407f83828af
hash_to_ec 0e1b1ced5ae997ef9c10b72cfc6d8c36d7433c01fc04f4083447f87243282528 6ee443ab0637702b7340bd4a908b9e2e63df0cc423c409fb320eb3f383118b80
hash_to_ec 5a7aea70c85c040af6ff3384bcaa63ec45c015b55b44fffa37ab982a00dc57c5 2df2e20137cefd166c767646ecd2e386d28f405aebe43d739aa55beba04ed407
hash_to_ec 3e878a3567487f20f7c98ea0488a40b87f1ba99e50bbfe9f00a423f927cbd898 697c7e60e4bf8c429ba7ac22b11a4b248d7465fc6abe597ec6d1e1c973330688
hash_to_ec c0bb08350d8a4bb6bf8745f6440e9bd254653102a81c79d6528da2810da758e4 396a872ac9147a69b27223bf4ec4198345b26576b3690f233b832395f2598235
hash_to_ec 6c3026a9284053a4ddb754818f9ae306ffa96eb7003bd03826eeccc9a0cf656e bef73da51d3ba9972a33d1afb7d263094b66ab6dbe3988161b08c17f8c69c2d5
hash_to_ec f80b7d8f5a80d321af3a42130db199d9edcb8f5a82507d8bfca6d002d65458b6 aa59c167ea60ee024421bfbd00adbb3cbfc20e16bd3c9b172a6bef4d47ca7f57
hash_to_ec bc0ffc24615aa02fafef447f17e7b776489cd2cc909f71e8344e01cad9f1610d 5c4195cc8dc3518143f06a9c228ae59ec9a6425a8fab89bfc638ad997cf35220
hash_to_ec b15fad558737229f8816fcba8fbef805bd420c03e392d118c69bdf01890c4924 f5810477e37554728837f097e1b170d1d8c95351c7fff8abbbfc624e1a50c1b9
hash_to_ec ec8c1f10d8e9da9cf0d57c4a1f2c402771bed7970109f3cf21ad32111f1f198f a697e0a3f09827b0cf3a4ffb6386388feda80d30ffffcbd54443dafcba162b28
hash_to_ec a989647bf0d70fdb7533b8c303a2a07f5e42e26a45ffc4e48cff5ba88643a201 450fd73e636f94d0d232600dd39031386b0e2ecde4105124fc451341da9803db
hash_to_ec 7159971b03c365480d91d625a0fadc8e3a632c518acf0dbec87dd659da70e168 377bc43c038ac46cf6565aa0a6d6bf39968c0c1142755dba3141eeebf0acdf5d
hash_to_ec e39089a64fedac4b2c25e36312b33f79d02bf75a883f450f910915b8560a3b06 77efa7db1be020e77596f550de45626824a8268095d56a0991696b211cb329cc
hash_to_ec 2056b3c6347611bb0929dad00ec932a4d9bec0f06b2d57f17e01ffa1528a719e b6072c2be2ce928e8cbbb87e8eb7e06975c0f93b309dd3b6a29edaad2b56f99b
hash_to_ec 2c026793146e81b889fc741d62e06c341ce263560d57cd46d0376f5b29174489 8f1f64b67762aa784969e954c196a2c6610addc3604aa3291eb0b80304dfe9ef
hash_to_ec be6026d6704379c489fa7749832b58bdb1a9685a5ffb68c438537f2f76e0011f 0072569a4090a9ad383a205bb092196c9de871c22506e3bb63d6b9d1b2357c96
hash_to_ec f4db802d5c6b7d7b53663b03d988b4cd0c7cad6c26612c5307754a93ebdc9710 f21bc9be4cb28761f6fe1d0a555ad5e9748375a2e9faea25a1df75cc8d273e18
hash_to_ec c27d79a564c56b00956a55090481e85fbc837fd5fb5e8311ecb436e300c07e3a 1b1891e6abec74621501450cd68bb1eeaa5b2fffff4ec441a55d1235ff3a0842
hash_to_ec a1e2f93c717cad32af386efa624198973df5a710963dd19d4c3ac40032a3a286 69c60571e3f9f63d2bfb359386ae3b8cd9e49a2e9127753002866e85c0443573
hash_to_ec 76920d7b1763474bc94a16433c3c28241a9acdee3ff2b2cb0e6757ba415310aa c1b409169f102b696fc7fa1aa9c48631e58e08b5132b6aadf43407627bb1b499
hash_to_ec 57ac654b29fa227c181fff2121491fcb283af6cbe932c8199c946862c0e90cb2 a204e8d327ea93b0b1bd74a78ffc370b20cea6455e209f2bc258114baa16d728
hash_to_ec 88e66cfaef6432b759c50efce885097d1752252b479dac5ed822fa6c85d56427 6fb84790d3749a5c1088209ee3823848d9c19bf1524215c44031143dd8080d70
hash_to_ec c1e55da929c4f8f793696fc77ff4e1c317c34852d98403bfd15dd388ee7df0df 2f41e76f15c5b480665bd84067e3b543b85ce6de02be9da7a550b5e1ead94d34
hash_to_ec 29e9ace5aa3c5a572b13f4b62b738a764d90c8c293ccb062ad798acbab7c5ef4 bce791aba1edc2a66079628fd838799489ab16b0a475ce7fe62e24cc56fe131c
hash_to_ec f25b2340689dadacaa9a0ef08aee8447d80b982e8a1ea42cf0500a1b9d85b37d f7f53aa117e6772a9abc452b3931b0a99405ac45147e7c550ac9fcf7ffe377b5
hash_to_ec 0cb6c47fc8478063b33f5aed615a05bcc84d782c497b6cc8e76ec1fa11edbfdb 7a0b58b03147e7c9be1d98de49ead2ce738d0071b0af8ca03cc92ceb26fc2246
hash_to_ec 7bd7287d7c4b596fe46fe57a6982c959653487bea843a77dd47d40986200d576 343084618c58284c64a5ff076f891be64885dc2ac73fa1567f7b39fde6b91542
hash_to_ec e4984bf330708152254fb18ecef12d546afd24898a3cf00fba866957b6ee1b82 c70e88b061656181fbd6ff12aca578fb66de5553c756ea4698a248b177185bc6
hash_to_ec cefd6c3cb9754ea632d6aea140af017de5ea12e5184f868936b74d9aa349d603 4b476502a8a483aadd50667f262f95351901628dd3a2aac1a5a41c4ea03f1647
hash_to_ec da5d0f33344ee7f3345204badf183491b9452b84bccc907602c7bad43e5cf43e 9561b9e61241625e028361494d4fa5cd78df4c7219fa64c8fede6d8421b8904a
hash_to_ec d6f0a4f8c770a1274a76fd7ae4e5faf7779249263e1aaecc6f815cf376f5c302 cd5c55820be10f0d38feb81363ede3716a9168601a0dd1ce3109aab81367d698
hash_to_ec b6bf32491d12a41c275d8518fc534d9a0d17aade509e7e8b8409a95c86167307 4aae534abbd67a9a8f2974154606c0e9be8932e920c7a5e931b46a92859acf82
hash_to_ec 0f930beaad041f9cefd867bc194027dd651fb3c9bda5944ececdba8a7136b6d3 521708f8149891b418d0920369569a9d578029c78f8e41c68a0bb68d3ad5df60
hash_to_ec 49b1fe0f97be74b81e0b047027b3e9f726fa5e90a67dafa877309397291c06c5 0852e59dfae5ec32cce606c119376597bce5cd4d04879d329f74e3ec66414cd3
hash_to_ec 4d57647d03f2cfbd4782fcc933e0683b52d35fc8d37283e6c7de522ddfa7e698 cbeb9ebfbbc49ec81fac3b7b063fecac1bb40ea686d3ffb08f82b291715cd87f
hash_to_ec 4ea3238c06fc9346c7421ff85bc0244b893860b94bc437378472814d09b2e99f a1fbae941adc344031bbdf53385dfdc012311490a4eb5e9a2749a21b27ce917a
hash_to_ec 0cd3609f5c78b318cb853d189b73b1ee2d00edd4e5fce2812027daa3fcb1fed1 0c7a7241b16e3c47d41f5abbf205797bd4b63fc425a7120cb2a4bf324e08ae74
hash_to_ec d74ab71428e36943c9868f70d3243469babd27988a1666a06f499a5741a52e3e 65b7c259f3b4547c082b2a7669b2b363668c4d87ac14e80471317b03b34e5216
hash_to_ec f6b151998365e7d69bcbce383dd2e8b5bf93b8b72f029ff942588208c1619591 6ce840ce5dfbca238665c1e6eddb8b045aa85c69b5976fc55ab57e66d3d0a791
hash_to_ec 207751de234b2bd7ec20bdd8326210c23aa68f04875c94ad7e256a96520f25d6 fc8f79ab3af317c38bfb88f40fb84422995a0479cfa6b03fa6df7f4e5f2813fb
hash_to_ec 62291e2873f38c0a234b77d1964205f3f91905c261d3c06f81051a9b0cb787cb 076d1d767457518e6777cb3bd4df22c8a19eb617e4bbccd1b0bd37522d6597a5
hash_to_ec 4b060df2d2854036751d00190ee821cb0066d256d4172539fdfa6fbd1cdfe1f9 59866e927c69e7de5df00dc46c0d2a1ddf799d901128ff040cebb8fd61b95da4
hash_to_ec ac8daf73f9c609bb36bce4fdeec1e50be5f22de38c3904fabcf758f0fc180bc7 7d8dc4e956363b652468a5fecafd7c08d48a2297e93b8edcb38e595fdd5a1fde
hash_to_ec fef7b6563fd27f3aab1d659806b26b8f2ec38bc8feefad50288383c001d1c20f e6e42547f12df431439d45103d2c5a583248f44554a98a3a433cf8c38b11805d
hash_to_ec 40a3d6871c76ecc6bb7b28324478733e196cc11d062dd4c9265cf31be5cf5a97 8c55a3811c241a020b1be202a58d5defbc4c8945d73b132570b47dd7c019ccf0
hash_to_ec 0cd71e7e562b2b47f4bc8640caf20e69d3a62f10231b4c7a372c9691cff9ac3c fb8e4e3de479b3bf1f4f13b4ed5507df1e80bd9250567b9d021b03339d6e7197
hash_to_ec 40a4e62800a99b7a26e0b507ffb29592e5bdba25284dc473048f24b27d25b40a 90ae131d29ee4a71cd764ab26f1ca4e6d09a40db98f8692b345c3a0e130dc860
hash_to_ec 1ddf35193cf52860bfe3e41060a7f44281241c6ae49cd541d24c1aca679b7501 3b4f50013895c522776ced456329c4e727de03575f6b99ae7d238a9f70862121
hash_to_ec 014e0fa8ce9d5df262b9a1765725fde354a855de8aef3fc23684e05dd1ba8d34 3857f57776a3cb68721bcb7f1533a5f9fb416a1dc8824d719399b63a142d24de
hash_to_ec 09987979b0e98d1d5355df8a8698b8f54d3a037d12745c0a4317fe519c3df9cc 32a181e2b754aeced214c73ac459c97d99e63317be3eb923344c64a396173bca
hash_to_ec 51e9e8ec4413e92dbaaba067824c32b018487a8d16412ed310507b4741e18eed 0356b209156b4993fd5d5630308298429a1b0021c19bedecb7719ac607cfa644
hash_to_ec 14d91313dfe46e353310e6a4a23ee15d7a4e1f431700a444be8520e6043d08d9 6f345f4018b5d178d9f61894d9f46ac09ff639483727b0d113943507cee88cfd
hash_to_ec 0d5af9ace87382acfffb9ab1a34b6e921881aa015d4f6d9c73171b2b0a97600d a8dbf36c85bebe6a7b3733e70cd3cd9ed0eb282ca470f344e5fcf9fe959f2e6e
hash_to_ec 996690caac7328b19d20ed28eb0003d675b1a9ff79055ab530e3bf170eb22a94 14340d7d935cffce74b8b2f325c9d92ce0238b51807ef2c1512935bb843194ce
hash_to_ec ad839c4b4c278c8ebe16ff137a558255a1f74646aa87c6cd99e994c7bb97ce8a d4f2da327ffded913b50577be0e583db2b237b5ca74da648e9b985c247073b76
hash_to_ec 26fc2eeeee983e1300d72362fdff42edf08038e4eee277a6e2dbd1bd8c9d6560 3468b8269728c2c0bfc2e53b1575415124798bc0f59b60ea2f14967fc0ca19ce
hash_to_ec db33cecaf4ee6f0ceba338cc5fabfb7462cd952a9c9007357ff3f0ca8336f8bc 0bab38f58686d0ff770f770a297971510bc83e2ff2dfead34823d1c4d67f11af
hash_to_ec a0ee84b3c646526fb8787d26dcd9b7fe9dc713c8a6c1a4ea640465a9f36a64df 4d7a638f6759d3ec45339cd1300e1239cca5f0f658ca3cd29bc9bdb32f44faf0
hash_to_ec 6a702e7899fcf3988e2b6b55654c22e54f43d3fa29de19177bdff5b2295fe27f 145d5748d6054fb586568e276f6925aef593a5b9c8249ad3dbef510af99b4307
hash_to_ec 30ce0fd4f1fac8b62d613b8ee4a66deef6eb7094bd8466531050b837460f6971 f3aa850d593ba7cef01389f7e1916e57617f1d75cd42f64ce8f5f272384b148c
hash_to_ec 3aa31d4ad7046ad13d83eb11c9a6e90eb8483a374a77a9a7b2a7cc0978fefa76 2fe0827dc080d9c1e7ec475a78aa7ae3c86d1a35f4c3f25f4a1f7299cacf018a
hash_to_ec 8562a5a91e763b98014523ebb6e49120979098f89c31df1fde9eb3a49a15b20f ae223bf85e2009a9daf5fd8a14685e2e1e625fc88818b2fd437dd7e109a48f59
hash_to_ec ccf9c313a47b8dbf7ce42c94b785818bc24134d95b6d22acc53c1ec2be29cf27 3e79fce6fe5aa14251b6560df4b76e811d7739eec097f27052c4403a283be71d
hash_to_ec d1e33cd6f8918618d5fb6d67ad8de939db8beaec4f115551eac64479b739b773 613fffcbe1bf48bb2d7bfd64fd97790a06025f8f2429edddb9ac145707847ecf
hash_to_ec 81eaeced34dd44e448d5dafa5715225e4956c90911c964a96ff7aa5b86b969bc 8f81177495d120a1357380164d677509b167f2958eb8b962b616c3951d426d8c
hash_to_ec 2bc001a29f8eab1c7377de69957ba365fb5bdaf9c2c220889709af920dfe27d3 9bcb3010038f366fa4c280eed6e914a23bfc402594d0b83d0e66730a465a565b
hash_to_ec 6feeb703c05e86c58d9fc5623f1af8657ecd1e75a14d18c4eedb642a8a393d16 6544628ba67ed0e14854961739c4d467fcf49d6361e39d32ea73dabeae51e6c3
hash_to_ec e8ff145a7c26897f2c1639edd333a5412f87752f110079f581ccdc87fcce208c d4b5a6e06069c7e012e32119f8eda08ff04a8dfa784e1cf1bced455a4d41d905
hash_to_ec 80488131dcb2018527908dbf8cdf4b823ef0806dc1d360f4da671004ef7ff74d 9984a79d9fd4f317768b442161116eef84e2ca49e938642b268fd64312d59a27
hash_to_ec d8c4ca60446849a784d1462aa26a3b93073ff6841cb2da3ef52ab9785b00b1fd da5ec1562e7de2382d35728312f4eea3608d4dba775c1c108de510e1ce97d059
hash_to_ec 68645728dfc6b9358dfb426493238ba38f24a2f46a3e89edb47d212549939cb7 d3253aa7235113dcc1b577d3bb80be34f528398815a653dbdbacbcbdfd5887a1
hash_to_ec 4e8eb97ba2d1046e1b42e67530a61441e31c84e5e5e448d8e8dbe75d104eaccb de94f73e83222aa0e39b559d4fef70387b0815b9b2f6beff5da67262d8f0eb3e
hash_to_ec 104ff03122ffdf59b22b8c0fe3d8f2ef67d02328e4d5181916d3d2a92f9a0bb7 1517ccf69c0328327e1cf581f16944ff66bc91c37e1cd68a99525415e00b7c9f
hash_to_ec 80f23aae7356ae9a2f9f7504495a731214d26f870fb7df68fdc00b233494156f 7aef046b0a70f84e8d239aa95e192b5a3fffa0fae5090c91273e8996beca9e38
hash_to_ec 2424b33235955a737ebddbf1c6c59cd8778af74da3bd3e658447666a2ab2f557 d19e2be8d482950fbdae429618da7a9daedb8c5944dea19cd1b6b274e792231b
hash_to_ec 0adc839d2b8f099e4341a4763b074c06318d6bcbd1ec558d20a9820c4a426463 cea5da12a84e5c20011726d9224a9930bec30f9571762dd7ca857b86bd37d056
hash_to_ec 46c84d53951f1ba23c46a23d5d96bf019c559aa5d2d79e4535cfcdb36f38ce25 2a913a01a6f7dd78a43cdd5354d1160d9a5f0d824c489a892c80eba798a77567
hash_to_ec 99bdaaf68555ccdc93d97c3a0fb4c126a1aa8b1202194a1a753401a6cae21055 1f645efe173577a092f2d847cc966e28ba3b36397fe84c96dfa4724ed4fcfdf9
hash_to_ec c540ff78f1e063ad26ffa69febb8818c9f2a325072c566091ad816e40fe39af4 de7a762262c91ab4beccc0713233cb91163aec43e34de0dbcfad0c431e8a9722
hash_to_ec de8b1ff8978cd5e02681521542b7b6c3c2f8f4602065059f83594809d04e3dda 290601e75207085bff3e016746e55a80310a76dea9ef566c24181079c76da11c
hash_to_ec d555994c8a022e52602d2a8bdd01fc1bfa6b9ab6734ff72a1bd5f937de4627f8 5f6794e874f48c4b362d0a24207374c2d274e28de86351afc6ddb95d8cc2fd62
hash_to_ec 19db72f703fe6f1b73f21b6ba133ae6b111ae8cc496d3aa32e02411e34c0d8d7 42f159f43d2d62b8cf8a47d5f1340c5cf070e9860fc60de647c55d50fe9f5607
hash_to_ec 23a87a258c2a5d1353aa2d5946f9e5749b92f85e3c58e1d177c3b6c3dcac809c e5685016f79d5e87d1fecb3e2a0fe64e4875f7accd2f6649d7f6b16317549cb1
hash_to_ec 43e1738d7d1b5b565f5fc78e81480f7edf9a4dc18f104fc4be95135b98931b17 650f5b682e45f2d0c5d5e8bcfd9e0cda7d9071b55ecbfaf5e3b59941cd7479f2
hash_to_ec a9d644de0804edf62dee613efa2547e510990a9b7a987ebe55ec74c23873a878 52ad329f88499a4f110e6a6cba1f820012d8db6ccb8f6495ab1e3eb5a24786e1
hash_to_ec 11f2b5d89a0350d7c8727becf0f4dd19bd90f8c94ff207132ab13282dd9b94e6 b798a47bb98dc2a8f99deaf64d27638e33a0d504c5d2fbee477a2bc9b89e2838
hash_to_ec 5e206e3190b3b715d125f1a11fff424fb33e36e534c99ddde2a3517068b7dcc4 2738e9571c96b2ddf93cb5f4a72b1ea78d3731d9555b830494513c0683c950ca
hash_to_ec efc3d65a43d4f10795c7265a76671348f80173e0f507c812f7ae76793b99c529 cf4434d18ce8167b51f117fe930860143c46e1739a8db1fba73b6b0de830d707
hash_to_ec 81f00469788aad6631cf75b585ae06d43ec81c20479925a2009afac9687dff60 c335b5889b36ba4b4175bb0d986807e8eedb6f6b7329b70b922e2ab729c4202a
hash_to_ec 9ef5ff329b525ee8f5c3ac38e1dba7cb19985617341d356707c67ff273aed02d bef9f9e051ba0e24d1fdf72099cf43ecdd250d047fb329855b5372d5c422db9e
hash_to_ec 3fa1401bd63132cf8b385c0fa65f0715ba1fe6161e41d59f8033ae2b22f63fa1 8289a1cb3c2dae48879bb8913fafe2d196cc2fdab5f2a77607910efd33eae6df
hash_to_ec 6559836fd0081fa38a3f8d8408b564e5698b9797cf5e15f7f12a7d2c84511989 28d405a6687d2ecc90c1c66bf0454d58f3fa38835743075e1db58c658e15a104
hash_to_ec 8e0882d45f0e4c2fb2839d3be86ff699d4b2242f5b25ac5a3c2f65297c7d2032 2771fdcf9135a62007adb5f0004d8222f0e42f819c81710aa4dc3ab2042bebf3
hash_to_ec 1d91dc4dd9bd82646029d13aca1af96830c1d8a0400ddebeb14b00c93501c039 7792c62e897f32cbc9c4229f0d28f7882ceeae120329a1cd35f76a75ac704e93
hash_to_ec 09527f9052acbbdd7676cbbd9534780865f04a27aaadad2b7d4f1dac68883cf0 b934220cde1327f2dc6af67bcb4124bf424d5084ef4da945e4daad1717cd0bb8
hash_to_ec 2362e1abe73e64cdd2ca7f6c5ea9f467213747dd3f2b7c6e5df9cb21e03307d7 676b7122b96564358bbaaf77e3a5a4db1767e4f9a50f6ddd1c69df4566755af9
hash_to_ec 26c2dd2356e9b6c68a415b25f91d18614dc8500c66f346d28489da543ee75a94 0f4fd7086acd68eb7c9fa2410e2ecf18e34654eb44e979bc03ce436e992d5feb
hash_to_ec 422dc0a09d6a45a8e0b563eeb6a5ee84b08abd3a8cb34ff93f77ba3b163f4042 631f1b412ff5a0fccbe53a02b4a3deaa93a0418ed9874df401eb698ef75d7441
hash_to_ec ceecdf46f57ef3f36ff30a1a3579b609340282d1b26ab5ddef2f53514e91bab1 9bc6f981fe98d14a2fc5b01a8134b6d35e123ec9ab8a3f303e0a5abb28150e2e
hash_to_ec 024a9e6e0d73f28aa6207fb1e02ce86d444d2d46f8211e8aaab54f459db91a5a 5fb0c1d2c3b30f399102104ea1874099fa83110b3d9c1fcfffb2981c98bf8cdf
hash_to_ec 5b8e45e269c9ccac4c68e532a72b29346d218f4606f37a14064826a62050e3a8 c7be46a871b77fc05ce891d24bd6bd54d9775b7ef573c6bc2d92b67f3604c1d1
hash_to_ec 9a6593a385c266389eef14237874b97bdcd1823c3199311667d4853c2d12aa81 9f55ee9d94102d2b9c5670f30586cf9823bf205b4d4fe088c323e87c4e10f26f
hash_to_ec 27377e2811598c3569b92990865d39b72c7a5533e1be30f77330863187c11875 abd82bc726f2710a8b87e4c1cf5a069f0ae800de614468d3ff35639983020197
hash_to_ec 7cacfaa135fb7d568b8dce8ea9136498b1b28c6d1020af45d376288d78d411f0 229fccd49744c0692508af329224553d21561ee6062b2b8a21f080f73da5bd97
hash_to_ec 52abd90a5542d6496b8dec9567b020f30058e29458d64f2d4f3ad6f3bfc1a5a0 874e82ced7cf77577b3374087fb08a2300b7f403de628310c26bdb3be869d309
hash_to_ec 5c8eebe9d12309187afa8d0d5191de3fdb84e5a05485d7cd62e8804ce7fdc0bc 12b7537643488aa8b9dcc4bae040cd491f8b466163b7988157b0502fb6c9177f
hash_to_ec 6ca3dd5c7a21a6bf65d6eefbe20a66e9b1d6b64196344be0c075f47aea48e3aa 5e1d0705ee24675238293b73ab1d98359119d4b328275be2460cc6ee4d19cc88
hash_to_ec d7e6cd0d39b4308c2a5ee547c4569c8bb3887e49cedece62d218d7c3c5277797 793dc4397112dfd9a8f4e061f457eb6d6fbb1d7a58c40bad5f16002c64914186
hash_to_ec 9cb6de8ba967cca0f0f861c6e20546f8958446595c01c28dae7ba6cfa09d6b14 ba1a2f7502b58fee3499c20e35fa01bb932e7a7c4a925dc04fbf5d90f33cfb5e
hash_to_ec 8ef9c7366733a1edcd116238cdbd177d61222d5c3e05b30ef6b85014cbcb6b79 8fc89664722947164ac9b77086aed319897612068f56ecd57f47029f14671603
hash_to_ec 7f317a34e4fb7de9f69cb107ffc0e57fd9f5c85b85ccb5319d05cebfc169924a 4b71c42339c73db7d710cd63f374d478a6c13bdc352cff40e967282268965ba7
hash_to_ec 15beef8d9687b92918a903b01d594859db4e7128263c8db0cae9d423ff962c1e cd75e6323952f6ac88f138f391b69f38c46d70b7eda61f9e431725b6f1d514a5
hash_to_ec 7a1c04c9af8fc6649833fe81e96f0199fcfe94959256cbe1490075fc5be0904e 0368270cd979439ae0a9552a5d6c9f959e4247fcf920d9e071464582e79c04b1
hash_to_ec c854c583d338615f85f69061e0fa9c9d7c5bbbfe562e8774fef3be556fe8bb63 061620171d7320f64bee98414ff7200a1f481521d202fb281cab06be73b80402
hash_to_ec 0fb8af5aba05ad2503edf1cfad5a451da088e7e974772057cd991a4e0601a3eb d3cbc20384a4420143fcce2cb763b0c15bec4f3267d1bdad3c34c1ee6b790f5e
hash_to_ec 9a251cf59e84a9da5630642f9671c732440caa8fcf4c92446a7e5f5ef99da46c 9b9679086a433f2077f40bcd4c7545fb5cc87e7dbb8bba468d53cb04a74361a0
hash_to_ec 8c632e357cef00e0911eb566f8cc809136b3f5ac1e82d183e4d645cef89fa155 5e06b0f4f278fa1ccb5431866e0b35171cdb814e2e82b9189ce01d8d8a1b2408
hash_to_ec 4aa4c31463475086a5d96b3ff550340567ab3b4a86fa3f01cfe9be18bc4dcb54 76a2916cfc093f27992e1f07b50f431d61d58e255507e208cd29ea4d3bc56623
hash_to_ec 1d33d9aadb949346e3c78d065a0f5262374524f4cb97a7390c8cdaede7ca6578 9ad2f757f499359903031adea6126c577469c4e834a2959e3ac08ee74b13783c
hash_to_ec d9217b9a070df20c4d2f0db42ff0bb36bfba9f51b0b6df8fdfe150405dce4934 65a843c522b4b8ec081a696a0d2dd8dfdfea45db201de7a5889a1446c6dff8c7
hash_to_ec b665b2ca8a285e44ba84e785533b56496a5319730dbb95bc14d3bdfece7544dc 8a804cd13457497b0a29eeca2cecfaa858766ec1d270a0e0c6785b43fd49b824
hash_to_ec 43b5cbcc21b3404bca97fa9a661940fe64d40f3ca569310e50b1bb0173c4d5ee 6c12fffb540d536060bb8b96cf635c1b2cbaa4d875a8d2fb0bf79a690363df19
hash_to_ec 11c58f20562c00dec5bb4456be07cd98186837e9af38d50d45f5e7b6f0f9000d cee76b567586f66dadd38c01213bfc1a17d38e96a495efb4c26063dc498ba209
hash_to_ec b069a980b51d8e030262db0b30069e660f4a3f6f8075d1790c153ba12b879f8b 262391b00bdee71d1d827b2cfe50b46c29e265934dc91959bd369aca0cc6444e
hash_to_ec 75274bfd79bf33eb2f9ab046d34528af9a71811e7e3d55c20eb049c81ac692d8 cb93c850e36896fe6626e97c53652af6736ec3ba0641c7765d0cca2bad2352de
hash_to_ec 5cdb6a24d9736a00f197d9707949fedc5405f367744fe8c83b7cff650302b589 8b4ac03123fab9275dcf340345a1b11fba48ef106d410ba2e0e6f6457037a419
hash_to_ec 07fdc85f809f95a07b59b084402bf91c512ebbe05c7657d6ba27a9e7e121e3e2 61182b3def063630e11de648a278032bcb75949f3a24ef5a133da87830ae5c4e
hash_to_ec a4188ca634cbb796f9927822e343d7b267e0a609c1a0ffa4dcf3726b9ffcc8a2 a911e4899fda28fd6337d708d34553ac5e810ee4938f6f7d9d6e521cab069edb
hash_to_ec 3c128ec5c955ea189a5789df2c892e94193a534a9d5801b8f75df870bc492a69 59eef5ee9df0f681df5b5c67ead1f06b059a8a843837b67f20cce15779608170
hash_to_ec 51a4cc7ec4a14a98c0731e9de7f3ce0779123222d95455e940f2014a23729ec8 105863ccda076af7290d1bf9ec828651dc5811159839044d23f1c3e31a11c5e2
hash_to_ec 1b901a31acbb7807c3309facdc7d04bc3b5a4aa714e6e346bd1c6ad4634e6534 01b3c0000b6c6b471c67c6ab3f9c7a500beaea5edb5c8f2b34df91b69ff67f21
hash_to_ec d2f2c8d79cfa2e7cb2db80568ba62ca0576741acfbe5e2baa0d9b3c424a7c84d 7df9d9088022bd1ce6814d6f8051eef27a650ee38e789b184da2691efd27139d
hash_to_ec 04dcb7644fdfc12d8e34d6e57d7769db939b4a149ed2b81aa51a74ee90babe19 6cff0ab2dd3b32ba1bd1a78e3661722f3f10003a01ce83e430970557decedb2c
hash_to_ec 222798c6841eeaa07e7b7e29686942d7c7f9afc38d09360c8e1f52f2b7debd12 133e3a04ec82aa9b8dbbec18cadbafff446d1270bf7c6f3f97ddd3906dae2468
hash_to_ec 4f7277c3ef247a0689b486ad965f969c433fc63e95d7310e789c4708418ccabc 7e0f2c984dd3cffb35458938c95fe92acf2e697aed060b0e3377c7a07e53c494
hash_to_ec 359b4d6709413243ae2c5409ea02714a9f8961bbbb64a91e81daf01e18c981bf eab69af2cb7f113ad6a27035c0399853d10bd0b99291fad37794d100f7530431
hash_to_ec 6cea3c6a9eb38f60329537170aa4db8dbb869af2040061e53b10c267daf6568c da9a97f4fa96bd05dade5e2704a6a633ba4dbe5080a1e831cda888e9d4f86615
hash_to_ec 3dddecb954ef0209bcf61fd5b46b6c94f2384ef281c48a20ffee74f90788172d af9899c31f944617af54712f93d1a2b4944e48867f480d0d1aec61f3b713e32d
hash_to_ec 9605247462f50bdf7ff57fe966abbefe8b6efa0b65b5116252f0ec723717013f fc8f10904d42a74e09310ccf63db31a90f1dab88b278f15e3364a2356810f7e9
hash_to_ec a005143c4d299933f866db41d0a0b8c67264f5d4ea840dd243cb10c3526bc077 928df1fe9404ffa9c1f4a1c8b2d43ab9b81c5615c8330d2dc2074ac66d4d5200
hash_to_ec f45ce88065c34a163f8e77b6fb583502ed0eb1f490f63f76065a9d97e214e3a9 41bd6784270af4154f2f24f118617e2d7f5b7771a409f08b0f2b7bbcb5e3d666
hash_to_ec 7b40ac30ed02b12ff592a5479c80cf5a7673abfdd4dd38810e40e63275bc2eed 6c6bf5961d83851c9728801093d9af04e5a693bc6cbad237b9ac4b0ed580a771
hash_to_ec 9f985005794d3052a63361413a9820d2ce903198d6d5195b3f20a68f146c6d5c 88bcac53ba5b1c5b44730a24b4cc2cd782298fc70dc9d777b577a2b33b256449
hash_to_ec 31b8e37d01fd5669de4ebf78889d749bc44ffe997186ace56f1fb3e60b8742d2 776366b44170efb130a5045597db5675c6c0b56f3def84863c6b6358aa8dcf40


@@ -0,0 +1,16 @@
use std_shims::io::{self, Write};
const VARINT_CONTINUATION_MASK: u8 = 0b1000_0000;
pub(crate) fn write_varint<W: Write>(varint: &u64, w: &mut W) -> io::Result<()> {
let mut varint = *varint;
while {
let mut b = u8::try_from(varint & u64::from(!VARINT_CONTINUATION_MASK)).unwrap();
varint >>= 7;
if varint != 0 {
b |= VARINT_CONTINUATION_MASK;
}
w.write_all(&[b])?;
varint != 0
} {}
Ok(())
}
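
For reference, a minimal crate-internal sketch of how the encoder above behaves (assuming a std environment where `Vec<u8>` implements `Write`): seven bits are emitted per byte, least significant first, with the continuation bit set on every byte except the last.

fn varint_example() {
  let mut buf = vec![];
  // 300 = 0b1_0010_1100: the low seven bits (0x2C) are written with the continuation bit
  // set (0xAC), then the remaining bits (0b10 = 0x02) form the final byte
  write_varint(&300u64, &mut buf).unwrap();
  assert_eq!(buf, [0xAC, 0x02]);
}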


@@ -0,0 +1,321 @@
#[cfg(feature = "binaries")]
mod binaries {
pub(crate) use std::sync::Arc;
pub(crate) use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
pub(crate) use multiexp::BatchVerifier;
pub(crate) use serde::Deserialize;
pub(crate) use serde_json::json;
pub(crate) use monero_serai::{
Commitment,
ringct::RctPrunable,
transaction::{Input, Transaction},
block::Block,
rpc::{RpcError, Rpc, HttpRpc},
};
pub(crate) use monero_generators::decompress_point;
pub(crate) use tokio::task::JoinHandle;
pub(crate) async fn check_block(rpc: Arc<Rpc<HttpRpc>>, block_i: usize) {
let hash = loop {
match rpc.get_block_hash(block_i).await {
Ok(hash) => break hash,
Err(RpcError::ConnectionError(e)) => {
println!("get_block_hash ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i}'s hash: {e:?}"),
}
};
// TODO: Grab the JSON to also check it was deserialized correctly
#[derive(Deserialize, Debug)]
struct BlockResponse {
blob: String,
}
let res: BlockResponse = loop {
match rpc.json_rpc_call("get_block", Some(json!({ "hash": hex::encode(hash) }))).await {
Ok(res) => break res,
Err(RpcError::ConnectionError(e)) => {
println!("get_block ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't get block {block_i} via block.hash(): {e:?}"),
}
};
let blob = hex::decode(res.blob).expect("node returned non-hex block");
let block = Block::read(&mut blob.as_slice())
.unwrap_or_else(|e| panic!("couldn't deserialize block {block_i}: {e}"));
assert_eq!(block.hash(), hash, "hash differs");
assert_eq!(block.serialize(), blob, "serialization differs");
let txs_len = 1 + block.txs.len();
if !block.txs.is_empty() {
#[derive(Deserialize, Debug)]
struct TransactionResponse {
tx_hash: String,
as_hex: String,
}
#[derive(Deserialize, Debug)]
struct TransactionsResponse {
#[serde(default)]
missed_tx: Vec<String>,
txs: Vec<TransactionResponse>,
}
let mut hashes_hex = block.txs.iter().map(hex::encode).collect::<Vec<_>>();
let mut all_txs = vec![];
while !hashes_hex.is_empty() {
let txs: TransactionsResponse = loop {
match rpc
.rpc_call(
"get_transactions",
Some(json!({
"txs_hashes": hashes_hex.drain(.. hashes_hex.len().min(100)).collect::<Vec<_>>(),
})),
)
.await
{
Ok(txs) => break txs,
Err(RpcError::ConnectionError(e)) => {
println!("get_transactions ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't call get_transactions: {e:?}"),
}
};
assert!(txs.missed_tx.is_empty());
all_txs.extend(txs.txs);
}
let mut batch = BatchVerifier::new(block.txs.len());
for (tx_hash, tx_res) in block.txs.into_iter().zip(all_txs) {
assert_eq!(
tx_res.tx_hash,
hex::encode(tx_hash),
"node returned a transaction with different hash"
);
let tx = Transaction::read(
&mut hex::decode(&tx_res.as_hex).expect("node returned non-hex transaction").as_slice(),
)
.expect("couldn't deserialize transaction");
assert_eq!(
hex::encode(tx.serialize()),
tx_res.as_hex,
"Transaction serialization was different"
);
assert_eq!(tx.hash(), tx_hash, "Transaction hash was different");
if matches!(tx.rct_signatures.prunable, RctPrunable::Null) {
assert_eq!(tx.prefix.version, 1);
assert!(!tx.signatures.is_empty());
continue;
}
let sig_hash = tx.signature_hash();
// Verify all proofs we support proving for
// This is because proving has debug_asserts which call verify, and CLSAG multisig
// explicitly calls verify as part of its signing process
// Accordingly, confirming our signature_hash algorithm is correct is valuable, and further
// confirming the verification functions are valid is appreciated
match tx.rct_signatures.prunable {
RctPrunable::Null |
RctPrunable::AggregateMlsagBorromean { .. } |
RctPrunable::MlsagBorromean { .. } => {}
RctPrunable::MlsagBulletproofs { bulletproofs, .. } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
}
RctPrunable::Clsag { bulletproofs, clsags, pseudo_outs } => {
assert!(bulletproofs.batch_verify(
&mut rand_core::OsRng,
&mut batch,
(),
&tx.rct_signatures.base.commitments
));
for (i, clsag) in clsags.into_iter().enumerate() {
let (amount, key_offsets, image) = match &tx.prefix.inputs[i] {
Input::Gen(_) => panic!("Input::Gen"),
Input::ToKey { amount, key_offsets, key_image } => (amount, key_offsets, key_image),
};
let mut running_sum = 0;
let mut actual_indexes = vec![];
for offset in key_offsets {
running_sum += offset;
actual_indexes.push(running_sum);
}
async fn get_outs(
rpc: &Rpc<HttpRpc>,
amount: u64,
indexes: &[u64],
) -> Vec<[EdwardsPoint; 2]> {
#[derive(Deserialize, Debug)]
struct Out {
key: String,
mask: String,
}
#[derive(Deserialize, Debug)]
struct Outs {
outs: Vec<Out>,
}
let outs: Outs = loop {
match rpc
.rpc_call(
"get_outs",
Some(json!({
"get_txid": true,
"outputs": indexes.iter().map(|o| json!({
"amount": amount,
"index": o
})).collect::<Vec<_>>()
})),
)
.await
{
Ok(outs) => break outs,
Err(RpcError::ConnectionError(e)) => {
println!("get_outs ConnectionError: {e}");
continue;
}
Err(e) => panic!("couldn't connect to RPC to get outs: {e:?}"),
}
};
let rpc_point = |point: &str| {
decompress_point(
hex::decode(point)
.expect("invalid hex for ring member")
.try_into()
.expect("invalid point len for ring member"),
)
.expect("invalid point for ring member")
};
outs
.outs
.iter()
.map(|out| {
let mask = rpc_point(&out.mask);
if amount != 0 {
assert_eq!(mask, Commitment::new(Scalar::from(1u8), amount).calculate());
}
[rpc_point(&out.key), mask]
})
.collect()
}
clsag
.verify(
&get_outs(&rpc, amount.unwrap_or(0), &actual_indexes).await,
image,
&pseudo_outs[i],
&sig_hash,
)
.unwrap();
}
}
}
}
assert!(batch.verify_vartime());
}
println!("Deserialized, hashed, and reserialized {block_i} with {txs_len} TXs");
}
}
#[cfg(feature = "binaries")]
#[tokio::main]
async fn main() {
use binaries::*;
let args = std::env::args().collect::<Vec<String>>();
// Read start block as the first arg
let mut block_i = args[1].parse::<usize>().expect("invalid start block");
// How many blocks to work on at once
let async_parallelism: usize =
args.get(2).unwrap_or(&"8".to_string()).parse::<usize>().expect("invalid parallelism argument");
// Read further args as RPC URLs
let default_nodes = vec![
"http://xmr-node.cakewallet.com:18081".to_string(),
"https://node.sethforprivacy.com".to_string(),
];
let mut specified_nodes = vec![];
{
let mut i = 0;
loop {
let Some(node) = args.get(3 + i) else { break };
specified_nodes.push(node.clone());
i += 1;
}
}
let nodes = if specified_nodes.is_empty() { default_nodes } else { specified_nodes };
let rpc = |url: String| async move {
HttpRpc::new(url.clone())
.await
.unwrap_or_else(|_| panic!("couldn't create HttpRpc connected to {url}"))
};
let main_rpc = rpc(nodes[0].clone()).await;
let mut rpcs = vec![];
for i in 0 .. async_parallelism {
rpcs.push(Arc::new(rpc(nodes[i % nodes.len()].clone()).await));
}
let mut rpc_i = 0;
let mut handles: Vec<JoinHandle<()>> = vec![];
let mut height = 0;
loop {
let new_height = main_rpc.get_height().await.expect("couldn't call get_height");
if new_height == height {
break;
}
height = new_height;
while block_i < height {
if handles.len() >= async_parallelism {
// Guarantee one handle is complete
handles.swap_remove(0).await.unwrap();
// Remove all of the finished handles
let mut i = 0;
while i < handles.len() {
if handles[i].is_finished() {
handles.swap_remove(i).await.unwrap();
continue;
}
i += 1;
}
}
handles.push(tokio::spawn(check_block(rpcs[rpc_i].clone(), block_i)));
rpc_i = (rpc_i + 1) % rpcs.len();
block_i += 1;
}
}
}
#[cfg(not(feature = "binaries"))]
fn main() {
panic!("To run binaries, please build with `--feature binaries`.");
}
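
The block-scanning loop above caps the number of in-flight `check_block` tasks at `async_parallelism`: once the cap is reached, it awaits the oldest handle, then reaps any other handles which have since finished. A standalone sketch of that reaping pattern follows; the function and its workload are illustrative, not from the codebase.

use tokio::task::JoinHandle;

async fn spawn_bounded(limit: usize) {
  let mut handles: Vec<JoinHandle<()>> = vec![];
  for i in 0 .. 100u64 {
    if handles.len() >= limit {
      // Await the oldest handle so at least one slot is guaranteed to free up
      handles.swap_remove(0).await.unwrap();
      // Reap any other handles which have already finished
      let mut j = 0;
      while j < handles.len() {
        if handles[j].is_finished() {
          handles.swap_remove(j).await.unwrap();
          continue;
        }
        j += 1;
      }
    }
    // Placeholder workload standing in for check_block
    handles.push(tokio::spawn(async move {
      let _ = i;
    }));
  }
  // Drain the remaining handles
  for handle in handles {
    handle.await.unwrap();
  }
}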


@@ -1,19 +1,31 @@
use crate::{
serialize::*,
transaction::Transaction
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
#[derive(Clone, PartialEq, Debug)]
use crate::{
hash,
merkle::merkle_root,
serialize::*,
transaction::{Input, Transaction},
};
const CORRECT_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("426d16cff04c71f8b16340b722dc4010a2dd3831c22041431f772547ba6e331a");
const EXISTING_BLOCK_HASH_202612: [u8; 32] =
hex_literal::hex!("bbd604d2ba11ba27935e006ed39c9bfdd99b76bf4a50654bc1e1e61217962698");
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BlockHeader {
pub major_version: u64,
pub minor_version: u64,
pub major_version: u8,
pub minor_version: u8,
pub timestamp: u64,
pub previous: [u8; 32],
pub nonce: u32
pub nonce: u32,
}
impl BlockHeader {
pub fn serialize<W: std::io::Write>(&self, w: &mut W) -> std::io::Result<()> {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_varint(&self.major_version, w)?;
write_varint(&self.minor_version, w)?;
write_varint(&self.timestamp, w)?;
@@ -21,46 +33,98 @@ impl BlockHeader {
w.write_all(&self.nonce.to_le_bytes())
}
pub fn deserialize<R: std::io::Read>(r: &mut R) -> std::io::Result<BlockHeader> {
Ok(
BlockHeader {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
previous: { let mut previous = [0; 32]; r.read_exact(&mut previous)?; previous },
nonce: { let mut nonce = [0; 4]; r.read_exact(&mut nonce)?; u32::from_le_bytes(nonce) }
}
)
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<BlockHeader> {
Ok(BlockHeader {
major_version: read_varint(r)?,
minor_version: read_varint(r)?,
timestamp: read_varint(r)?,
previous: read_bytes(r)?,
nonce: read_bytes(r).map(u32::from_le_bytes)?,
})
}
}
#[derive(Clone, PartialEq, Debug)]
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct Block {
pub header: BlockHeader,
pub miner_tx: Transaction,
pub txs: Vec<[u8; 32]>
pub txs: Vec<[u8; 32]>,
}
impl Block {
pub fn serialize<W: std::io::Write>(&self, w: &mut W) -> std::io::Result<()> {
self.header.serialize(w)?;
self.miner_tx.serialize(w)?;
write_varint(&self.txs.len().try_into().unwrap(), w)?;
pub fn number(&self) -> Option<u64> {
match self.miner_tx.prefix.inputs.first() {
Some(Input::Gen(number)) => Some(*number),
_ => None,
}
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.header.write(w)?;
self.miner_tx.write(w)?;
write_varint(&self.txs.len(), w)?;
for tx in &self.txs {
w.write_all(tx)?;
}
Ok(())
}
pub fn deserialize<R: std::io::Read>(r: &mut R) -> std::io::Result<Block> {
Ok(
Block {
header: BlockHeader::deserialize(r)?,
miner_tx: Transaction::deserialize(r)?,
txs: (0 .. read_varint(r)?).map(
|_| { let mut tx = [0; 32]; r.read_exact(&mut tx).map(|_| tx) }
).collect::<Result<_, _>>()?
}
)
fn tx_merkle_root(&self) -> [u8; 32] {
merkle_root(self.miner_tx.hash(), &self.txs)
}
/// Serialize the block as required for the proof of work hash.
///
/// This is distinct from the serialization required for the block hash. To get the block hash,
/// use the [`Block::hash`] function.
pub fn serialize_hashable(&self) -> Vec<u8> {
let mut blob = self.header.serialize();
blob.extend_from_slice(&self.tx_merkle_root());
write_varint(&(1 + u64::try_from(self.txs.len()).unwrap()), &mut blob).unwrap();
blob
}
pub fn hash(&self) -> [u8; 32] {
let mut hashable = self.serialize_hashable();
// Monero prepends a VarInt of the block hashing blob's length before getting the block hash
// but doesn't do this when getting the proof of work hash :)
let mut hashing_blob = Vec::with_capacity(8 + hashable.len());
write_varint(&u64::try_from(hashable.len()).unwrap(), &mut hashing_blob).unwrap();
hashing_blob.append(&mut hashable);
let hash = hash(&hashing_blob);
if hash == CORRECT_BLOCK_HASH_202612 {
return EXISTING_BLOCK_HASH_202612;
};
hash
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Block> {
let header = BlockHeader::read(r)?;
let miner_tx = Transaction::read(r)?;
if !matches!(miner_tx.prefix.inputs.as_slice(), &[Input::Gen(_)]) {
Err(io::Error::other("Miner transaction has incorrect input type."))?;
}
Ok(Block {
header,
miner_tx,
txs: (0_usize .. read_varint(r)?).map(|_| read_bytes(r)).collect::<Result<_, _>>()?,
})
}
}
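
As a quick illustration of the `Block` API above (assuming the crate's std feature, so `std_shims::io` resolves to `std::io`), a round trip over some serialized block bytes, here named `blob`, could be sketched as:

use monero_serai::block::Block;

fn round_trip(blob: &[u8]) -> std::io::Result<()> {
  let mut reader = blob;
  let block = Block::read(&mut reader)?;
  // Re-serialization must reproduce the original bytes
  assert_eq!(block.serialize(), blob);
  // The block hash covers the length-prefixed hashing blob
  // (header || transaction merkle root || transaction count)
  let _hash: [u8; 32] = block.hash();
  Ok(())
}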


@@ -1,76 +0,0 @@
use std::io::Read;
use thiserror::Error;
use rand_core::{RngCore, CryptoRng};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use group::{Group, GroupEncoding};
use transcript::{Transcript, RecommendedTranscript};
use dalek_ff_group as dfg;
use dleq::{Generators, DLEqProof};
#[derive(Clone, Error, Debug)]
pub enum MultisigError {
#[error("internal error ({0})")]
InternalError(String),
#[error("invalid discrete log equality proof")]
InvalidDLEqProof(u16),
#[error("invalid key image {0}")]
InvalidKeyImage(u16)
}
fn transcript() -> RecommendedTranscript {
RecommendedTranscript::new(b"monero_key_image_dleq")
}
#[allow(non_snake_case)]
pub(crate) fn write_dleq<R: RngCore + CryptoRng>(
rng: &mut R,
H: EdwardsPoint,
x: Scalar
) -> Vec<u8> {
let mut res = Vec::with_capacity(64);
DLEqProof::prove(
rng,
// Doesn't take in a larger transcript object due to the usage of this
// Every prover would immediately write their own DLEq proof, when they can only do so in
// the proper order if they want to reach consensus
// It'd be a poor API to have CLSAG define a new transcript solely to pass here, just to try to
// merge later in some form, when it should instead just merge xH (as it does)
&mut transcript(),
Generators::new(dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(H)),
dfg::Scalar(x)
).serialize(&mut res).unwrap();
res
}
#[allow(non_snake_case)]
pub(crate) fn read_dleq<Re: Read>(
serialized: &mut Re,
H: EdwardsPoint,
l: u16,
xG: dfg::EdwardsPoint
) -> Result<dfg::EdwardsPoint, MultisigError> {
let mut bytes = [0; 32];
serialized.read_exact(&mut bytes).map_err(|_| MultisigError::InvalidDLEqProof(l))?;
// dfg ensures the point is torsion free
let xH = Option::<dfg::EdwardsPoint>::from(
dfg::EdwardsPoint::from_bytes(&bytes)).ok_or(MultisigError::InvalidDLEqProof(l)
)?;
// Ensure this is a canonical point
if xH.to_bytes() != bytes {
Err(MultisigError::InvalidDLEqProof(l))?;
}
DLEqProof::<dfg::EdwardsPoint>::deserialize(
serialized
).map_err(|_| MultisigError::InvalidDLEqProof(l))?.verify(
&mut transcript(),
Generators::new(dfg::EdwardsPoint::generator(), dfg::EdwardsPoint(H)),
(xG, xH)
).map_err(|_| MultisigError::InvalidDLEqProof(l))?;
Ok(xH)
}


@@ -1,100 +1,229 @@
use std::slice;
#![cfg_attr(docsrs, feature(doc_auto_cfg))]
#![doc = include_str!("../README.md")]
#![cfg_attr(not(feature = "std"), no_std)]
#[cfg(not(feature = "std"))]
#[macro_use]
extern crate alloc;
use std_shims::{sync::OnceLock, io};
use lazy_static::lazy_static;
use rand_core::{RngCore, CryptoRng};
use subtle::ConstantTimeEq;
use zeroize::{Zeroize, ZeroizeOnDrop};
use tiny_keccak::{Hasher, Keccak};
use sha3::{Digest, Keccak256};
use curve25519_dalek::{
constants::ED25519_BASEPOINT_TABLE,
scalar::Scalar,
edwards::{EdwardsPoint, EdwardsBasepointTable, CompressedEdwardsY}
};
use curve25519_dalek::{constants::ED25519_BASEPOINT_TABLE, scalar::Scalar, edwards::EdwardsPoint};
#[cfg(feature = "multisig")]
pub mod frost;
pub use monero_generators::{H, decompress_point};
mod merkle;
mod serialize;
use serialize::{read_byte, read_u16};
/// UnreducedScalar struct with functionality for recovering incorrectly reduced scalars.
mod unreduced_scalar;
/// Ring Signature structs and functionality.
pub mod ring_signatures;
/// RingCT structs and functionality.
pub mod ringct;
use ringct::RctType;
/// Transaction structs.
pub mod transaction;
/// Block structs.
pub mod block;
/// Monero daemon RPC interface.
pub mod rpc;
/// Wallet functionality, enabling scanning and sending transactions.
pub mod wallet;
#[cfg(test)]
mod tests;
lazy_static! {
static ref H: EdwardsPoint = CompressedEdwardsY(
hex::decode("8b655970153799af2aeadc9ff1add0ea6c7251d54154cfa92c173a0dd39c1f94").unwrap().try_into().unwrap()
).decompress().unwrap();
static ref H_TABLE: EdwardsBasepointTable = EdwardsBasepointTable::create(&*H);
pub const DEFAULT_LOCK_WINDOW: usize = 10;
pub const COINBASE_LOCK_WINDOW: usize = 60;
pub const BLOCK_TIME: usize = 120;
static INV_EIGHT_CELL: OnceLock<Scalar> = OnceLock::new();
#[allow(non_snake_case)]
pub(crate) fn INV_EIGHT() -> Scalar {
*INV_EIGHT_CELL.get_or_init(|| Scalar::from(8u8).invert())
}
// Function from libsodium our subsection of Monero relies on. Implementing it here means we don't
// need to link against libsodium
#[no_mangle]
unsafe extern "C" fn crypto_verify_32(a: *const u8, b: *const u8) -> isize {
isize::from(
slice::from_raw_parts(a, 32).ct_eq(slice::from_raw_parts(b, 32)).unwrap_u8()
) - 1
/// Monero protocol version.
///
/// v15 is omitted as v15 was simply v14 and v16 being active at the same time, with regards to the
/// transactions supported. Accordingly, v16 should be used during v15.
#[derive(Clone, Copy, PartialEq, Eq, Debug, Zeroize)]
#[allow(non_camel_case_types)]
pub enum Protocol {
v14,
v16,
Custom {
ring_len: usize,
bp_plus: bool,
optimal_rct_type: RctType,
view_tags: bool,
v16_fee: bool,
},
}
// Offer a wide reduction to C. Our seeded RNG prevented Monero from defining an unbiased scalar
// generation function, and in order to not use Monero code (which would require propagating its
// license), the function was rewritten. It was rewritten with wide reduction, instead of rejection
// sampling however, hence the need for this function
#[no_mangle]
unsafe extern "C" fn monero_wide_reduce(value: *mut u8) {
let res = Scalar::from_bytes_mod_order_wide(
std::slice::from_raw_parts(value, 64).try_into().unwrap()
);
for (i, b) in res.to_bytes().iter().enumerate() {
value.add(i).write(*b);
impl Protocol {
/// Amount of ring members under this protocol version.
pub fn ring_len(&self) -> usize {
match self {
Protocol::v14 => 11,
Protocol::v16 => 16,
Protocol::Custom { ring_len, .. } => *ring_len,
}
}
/// Whether or not the specified version uses Bulletproofs or Bulletproofs+.
///
/// This method will likely be reworked when versions not using Bulletproofs at all are added.
pub fn bp_plus(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { bp_plus, .. } => *bp_plus,
}
}
// TODO: Make this an Option when we support pre-RCT protocols
pub fn optimal_rct_type(&self) -> RctType {
match self {
Protocol::v14 => RctType::Clsag,
Protocol::v16 => RctType::BulletproofsPlus,
Protocol::Custom { optimal_rct_type, .. } => *optimal_rct_type,
}
}
/// Whether or not the specified version uses view tags.
pub fn view_tags(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { view_tags, .. } => *view_tags,
}
}
/// Whether or not the specified version uses the fee algorithm from Monero
/// hard fork version 16 (released in v18 binaries).
pub fn v16_fee(&self) -> bool {
match self {
Protocol::v14 => false,
Protocol::v16 => true,
Protocol::Custom { v16_fee, .. } => *v16_fee,
}
}
pub(crate) fn write<W: io::Write>(&self, w: &mut W) -> io::Result<()> {
match self {
Protocol::v14 => w.write_all(&[0, 14]),
Protocol::v16 => w.write_all(&[0, 16]),
Protocol::Custom { ring_len, bp_plus, optimal_rct_type, view_tags, v16_fee } => {
// Custom, version 0
w.write_all(&[1, 0])?;
w.write_all(&u16::try_from(*ring_len).unwrap().to_le_bytes())?;
w.write_all(&[u8::from(*bp_plus)])?;
w.write_all(&[optimal_rct_type.to_byte()])?;
w.write_all(&[u8::from(*view_tags)])?;
w.write_all(&[u8::from(*v16_fee)])
}
}
}
pub(crate) fn read<R: io::Read>(r: &mut R) -> io::Result<Protocol> {
Ok(match read_byte(r)? {
// Monero protocol
0 => match read_byte(r)? {
14 => Protocol::v14,
16 => Protocol::v16,
_ => Err(io::Error::other("unrecognized monero protocol"))?,
},
// Custom
1 => match read_byte(r)? {
0 => Protocol::Custom {
ring_len: read_u16(r)?.into(),
bp_plus: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
optimal_rct_type: RctType::from_byte(read_byte(r)?)
.ok_or_else(|| io::Error::other("invalid RctType serialization"))?,
view_tags: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
v16_fee: match read_byte(r)? {
0 => false,
1 => true,
_ => Err(io::Error::other("invalid bool serialization"))?,
},
},
_ => Err(io::Error::other("unrecognized custom protocol serialization"))?,
},
_ => Err(io::Error::other("unrecognized protocol serialization"))?,
})
}
}
/// Transparent structure representing a Pedersen commitment's contents.
#[allow(non_snake_case)]
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
#[derive(Clone, PartialEq, Eq, Zeroize, ZeroizeOnDrop)]
pub struct Commitment {
pub mask: Scalar,
pub amount: u64
pub amount: u64,
}
impl core::fmt::Debug for Commitment {
fn fmt(&self, fmt: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
fmt.debug_struct("Commitment").field("amount", &self.amount).finish_non_exhaustive()
}
}
impl Commitment {
/// A commitment to zero, defined with a mask of 1 (so as not to be the identity).
pub fn zero() -> Commitment {
Commitment { mask: Scalar::one(), amount: 0}
Commitment { mask: Scalar::ONE, amount: 0 }
}
pub fn new(mask: Scalar, amount: u64) -> Commitment {
Commitment { mask, amount }
}
/// Calculate a Pedersen commitment, as a point, from the transparent structure.
pub fn calculate(&self) -> EdwardsPoint {
(&self.mask * &ED25519_BASEPOINT_TABLE) + (&Scalar::from(self.amount) * &*H_TABLE)
(&self.mask * ED25519_BASEPOINT_TABLE) + (Scalar::from(self.amount) * H())
}
}
// Allows using a modern rand as dalek's is notoriously dated
/// Support generating a random scalar using a modern rand, as dalek's is notoriously dated.
pub fn random_scalar<R: RngCore + CryptoRng>(rng: &mut R) -> Scalar {
let mut r = [0; 64];
rng.fill_bytes(&mut r);
Scalar::from_bytes_mod_order_wide(&r)
}
pub fn hash(data: &[u8]) -> [u8; 32] {
let mut keccak = Keccak::v256();
keccak.update(data);
let mut res = [0; 32];
keccak.finalize(&mut res);
res
pub(crate) fn hash(data: &[u8]) -> [u8; 32] {
Keccak256::digest(data).into()
}
/// Hash the provided data to a scalar via keccak256(data) % l.
pub fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar::from_bytes_mod_order(hash(&data))
let scalar = Scalar::from_bytes_mod_order(hash(data));
// Monero will explicitly error in this case
// This library acknowledges the practical impossibility of it occurring, and doesn't bother to
// code in logic to handle it. That said, if it ever occurs, something must happen in order to
// not generate/verify a proof we believe to be valid when it isn't
assert!(scalar != Scalar::ZERO, "ZERO HASH: {data:?}");
scalar
}
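
A short sketch using the public helpers above (the `rand_core::OsRng` here assumes its getrandom-backed feature, as used by the scanning binary earlier): commit to an amount under a random mask and derive the commitment's point.

use rand_core::OsRng;
use monero_serai::{Commitment, random_scalar};

fn commitment_example() {
  // Commit to an amount of 1337 atomic units under a random mask
  let mask = random_scalar(&mut OsRng);
  let commitment = Commitment::new(mask, 1337);
  // The point is mask * G + amount * H, per Commitment::calculate
  let _point = commitment.calculate();
  // The commitment to zero has an amount of 0 and a mask of one (so it isn't the identity)
  assert_eq!(Commitment::zero().amount, 0);
}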


@@ -0,0 +1,55 @@
use std_shims::vec::Vec;
use crate::hash;
pub(crate) fn merkle_root(root: [u8; 32], leafs: &[[u8; 32]]) -> [u8; 32] {
match leafs.len() {
0 => root,
1 => hash(&[root, leafs[0]].concat()),
_ => {
let mut hashes = Vec::with_capacity(1 + leafs.len());
hashes.push(root);
hashes.extend(leafs);
// Monero preprocesses this so the length is a power of 2
let mut high_pow_2 = 4; // 4 is the lowest value this can be
while high_pow_2 < hashes.len() {
high_pow_2 *= 2;
}
let low_pow_2 = high_pow_2 / 2;
// Merge right-most hashes until we're at the low_pow_2
{
let overage = hashes.len() - low_pow_2;
let mut rightmost = hashes.drain((low_pow_2 - overage) ..);
// This is true since we took overage from beneath and above low_pow_2, taking twice as
// many elements as overage
debug_assert_eq!(rightmost.len() % 2, 0);
let mut paired_hashes = Vec::with_capacity(overage);
while let Some(left) = rightmost.next() {
let right = rightmost.next().unwrap();
paired_hashes.push(hash(&[left.as_ref(), &right].concat()));
}
drop(rightmost);
hashes.extend(paired_hashes);
assert_eq!(hashes.len(), low_pow_2);
}
// Do a traditional pairing off
let mut new_hashes = Vec::with_capacity(hashes.len() / 2);
while hashes.len() > 1 {
let mut i = 0;
while i < hashes.len() {
new_hashes.push(hash(&[hashes[i], hashes[i + 1]].concat()));
i += 2;
}
hashes = new_hashes;
new_hashes = Vec::with_capacity(hashes.len() / 2);
}
hashes[0]
}
}
}
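
Since `merkle_root` is crate-internal, a small test sketch placed alongside it can illustrate the two trivial arms of the match above: no leafs returns the root unchanged, and a single leaf is hashed together with the root.

#[cfg(test)]
mod merkle_tests {
  use super::merkle_root;
  use crate::hash;

  #[test]
  fn trivial_merkle_roots() {
    let root = [1; 32];
    let leaf = [2; 32];
    assert_eq!(merkle_root(root, &[]), root);
    assert_eq!(merkle_root(root, &[leaf]), hash(&[root, leaf].concat()));
  }
}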


@@ -0,0 +1,72 @@
use std_shims::{
io::{self, *},
vec::Vec,
};
use zeroize::Zeroize;
use curve25519_dalek::{EdwardsPoint, Scalar};
use monero_generators::hash_to_point;
use crate::{serialize::*, hash_to_scalar};
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct Signature {
c: Scalar,
r: Scalar,
}
impl Signature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
write_scalar(&self.c, w)?;
write_scalar(&self.r, w)?;
Ok(())
}
pub fn read<R: Read>(r: &mut R) -> io::Result<Signature> {
Ok(Signature { c: read_scalar(r)?, r: read_scalar(r)? })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct RingSignature {
sigs: Vec<Signature>,
}
impl RingSignature {
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for sig in &self.sigs {
sig.write(w)?;
}
Ok(())
}
pub fn read<R: Read>(members: usize, r: &mut R) -> io::Result<RingSignature> {
Ok(RingSignature { sigs: read_raw_vec(Signature::read, members, r)? })
}
pub fn verify(&self, msg: &[u8; 32], ring: &[EdwardsPoint], key_image: &EdwardsPoint) -> bool {
if ring.len() != self.sigs.len() {
return false;
}
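// For each ring member P_i, with signature parts (c_i, r_i) and key image I:
// L_i = (r_i * G) + (c_i * P_i)
// R_i = (r_i * hash_to_point(P_i)) + (c_i * I)
// The signature is valid if sum(c_i) == hash_to_scalar(msg || L_1 || R_1 || .. || L_n || R_n)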
let mut buf = Vec::with_capacity(32 + (32 * 2 * ring.len()));
buf.extend_from_slice(msg);
let mut sum = Scalar::ZERO;
for (ring_member, sig) in ring.iter().zip(&self.sigs) {
#[allow(non_snake_case)]
let Li = EdwardsPoint::vartime_double_scalar_mul_basepoint(&sig.c, ring_member, &sig.r);
buf.extend_from_slice(Li.compress().as_bytes());
#[allow(non_snake_case)]
let Ri = (sig.r * hash_to_point(ring_member.compress().to_bytes())) + (sig.c * key_image);
buf.extend_from_slice(Ri.compress().as_bytes());
sum += sig.c;
}
sum == hash_to_scalar(&buf)
}
}

View File

@@ -0,0 +1,97 @@
use core::fmt::Debug;
use std_shims::io::{self, Read, Write};
use curve25519_dalek::{traits::Identity, Scalar, EdwardsPoint};
use monero_generators::H_pow_2;
use crate::{hash_to_scalar, unreduced_scalar::UnreducedScalar, serialize::*};
/// 64 Borromean ring signatures.
///
/// s0 and s1 are stored as `UnreducedScalar`s due to Monero not requiring they be reduced.
/// `UnreducedScalar` preserves their original byte encoding and implements the custom reduction
/// algorithm which was in use.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanSignatures {
pub s0: [UnreducedScalar; 64],
pub s1: [UnreducedScalar; 64],
pub ee: Scalar,
}
impl BorromeanSignatures {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanSignatures> {
Ok(BorromeanSignatures {
s0: read_array(UnreducedScalar::read, r)?,
s1: read_array(UnreducedScalar::read, r)?,
ee: read_scalar(r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
for s0 in &self.s0 {
s0.write(w)?;
}
for s1 in &self.s1 {
s1.write(w)?;
}
write_scalar(&self.ee, w)
}
fn verify(&self, keys_a: &[EdwardsPoint], keys_b: &[EdwardsPoint]) -> bool {
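// For each of the 64 bits, recompute the intermediary points:
// LL_i = (s0_i * G) + (ee * A_i)
// LV_i = (s1_i * G) + (hash_to_scalar(LL_i) * B_i)
// The signatures are valid if hashing the concatenation of all LV_i yields ee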
let mut transcript = [0; 2048];
for i in 0 .. 64 {
#[allow(non_snake_case)]
let LL = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&self.ee,
&keys_a[i],
&self.s0[i].recover_monero_slide_scalar(),
);
#[allow(non_snake_case)]
let LV = EdwardsPoint::vartime_double_scalar_mul_basepoint(
&hash_to_scalar(LL.compress().as_bytes()),
&keys_b[i],
&self.s1[i].recover_monero_slide_scalar(),
);
transcript[(i * 32) .. ((i + 1) * 32)].copy_from_slice(LV.compress().as_bytes());
}
hash_to_scalar(&transcript) == self.ee
}
}
/// A range proof premised on Borromean ring signatures.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct BorromeanRange {
pub sigs: BorromeanSignatures,
pub bit_commitments: [EdwardsPoint; 64],
}
impl BorromeanRange {
pub fn read<R: Read>(r: &mut R) -> io::Result<BorromeanRange> {
Ok(BorromeanRange {
sigs: BorromeanSignatures::read(r)?,
bit_commitments: read_array(read_point, r)?,
})
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.sigs.write(w)?;
write_raw_vec(write_point, &self.bit_commitments, w)
}
pub fn verify(&self, commitment: &EdwardsPoint) -> bool {
if &self.bit_commitments.iter().sum::<EdwardsPoint>() != commitment {
return false;
}
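// Each bit commitment must commit to either 0 or 2^i. The Borromean ring signatures are
// accordingly verified over the pairs (C_i, C_i - (2^i * H))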
#[allow(non_snake_case)]
let H_pow_2 = H_pow_2();
let mut commitments_sub_one = [EdwardsPoint::identity(); 64];
for i in 0 .. 64 {
commitments_sub_one[i] = self.bit_commitments[i] - H_pow_2[i];
}
self.sigs.verify(&self.bit_commitments, &commitments_sub_one)
}
}

View File

@@ -1,161 +0,0 @@
#![allow(non_snake_case)]
use rand_core::{RngCore, CryptoRng};
use curve25519_dalek::{scalar::Scalar, edwards::EdwardsPoint};
use crate::{Commitment, wallet::TransactionError, serialize::*};
pub(crate) const MAX_OUTPUTS: usize = 16;
#[derive(Clone, PartialEq, Debug)]
pub struct Bulletproofs {
pub A: EdwardsPoint,
pub S: EdwardsPoint,
pub T1: EdwardsPoint,
pub T2: EdwardsPoint,
pub taux: Scalar,
pub mu: Scalar,
pub L: Vec<EdwardsPoint>,
pub R: Vec<EdwardsPoint>,
pub a: Scalar,
pub b: Scalar,
pub t: Scalar
}
impl Bulletproofs {
pub(crate) fn fee_weight(outputs: usize) -> usize {
let proofs = 6 + usize::try_from(usize::BITS - (outputs - 1).leading_zeros()).unwrap();
let len = (9 + (2 * proofs)) * 32;
let mut clawback = 0;
let padded = 1 << (proofs - 6);
if padded > 2 {
const BP_BASE: usize = 368;
clawback = ((BP_BASE * padded) - len) * 4 / 5;
}
len + clawback
}
pub fn new<R: RngCore + CryptoRng>(rng: &mut R, outputs: &[Commitment]) -> Result<Bulletproofs, TransactionError> {
if outputs.len() > MAX_OUTPUTS {
return Err(TransactionError::TooManyOutputs)?;
}
let mut seed = [0; 32];
rng.fill_bytes(&mut seed);
let masks = outputs.iter().map(|commitment| commitment.mask.to_bytes()).collect::<Vec<_>>();
let amounts = outputs.iter().map(|commitment| commitment.amount).collect::<Vec<_>>();
let res;
unsafe {
#[link(name = "wrapper")]
extern "C" {
fn free(ptr: *const u8);
fn c_generate_bp(seed: *const u8, len: u8, amounts: *const u64, masks: *const [u8; 32]) -> *const u8;
}
let ptr = c_generate_bp(
seed.as_ptr(),
u8::try_from(outputs.len()).unwrap(),
amounts.as_ptr(),
masks.as_ptr()
);
let mut len = 6 * 32;
len += (2 * (1 + (usize::from(ptr.add(len).read()) * 32))) + (3 * 32);
res = Bulletproofs::deserialize(
// Wrap in a cursor to provide a mutable Reader
&mut std::io::Cursor::new(std::slice::from_raw_parts(ptr, len))
).expect("Couldn't deserialize Bulletproofs from Monero");
free(ptr);
};
Ok(res)
}
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
if commitments.len() > 16 {
return false;
}
let mut seed = [0; 32];
rng.fill_bytes(&mut seed);
let mut serialized = Vec::with_capacity((9 + (2 * self.L.len())) * 32);
self.serialize(&mut serialized).unwrap();
let commitments: Vec<[u8; 32]> = commitments.iter().map(
|commitment| (commitment * Scalar::from(8u8).invert()).compress().to_bytes()
).collect();
unsafe {
#[link(name = "wrapper")]
extern "C" {
fn c_verify_bp(
seed: *const u8,
serialized_len: usize,
serialized: *const u8,
commitments_len: u8,
commitments: *const [u8; 32]
) -> bool;
}
c_verify_bp(
seed.as_ptr(),
serialized.len(),
serialized.as_ptr(),
u8::try_from(commitments.len()).unwrap(),
commitments.as_ptr()
)
}
}
fn serialize_core<
W: std::io::Write,
F: Fn(&[EdwardsPoint], &mut W) -> std::io::Result<()>
>(&self, w: &mut W, specific_write_vec: F) -> std::io::Result<()> {
write_point(&self.A, w)?;
write_point(&self.S, w)?;
write_point(&self.T1, w)?;
write_point(&self.T2, w)?;
write_scalar(&self.taux, w)?;
write_scalar(&self.mu, w)?;
specific_write_vec(&self.L, w)?;
specific_write_vec(&self.R, w)?;
write_scalar(&self.a, w)?;
write_scalar(&self.b, w)?;
write_scalar(&self.t, w)
}
pub fn signature_serialize<W: std::io::Write>(&self, w: &mut W) -> std::io::Result<()> {
self.serialize_core(w, |points, w| write_raw_vec(write_point, points, w))
}
pub fn serialize<W: std::io::Write>(&self, w: &mut W) -> std::io::Result<()> {
self.serialize_core(w, |points, w| write_vec(write_point, points, w))
}
pub fn deserialize<R: std::io::Read>(r: &mut R) -> std::io::Result<Bulletproofs> {
let bp = Bulletproofs {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
T2: read_point(r)?,
taux: read_scalar(r)?,
mu: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
a: read_scalar(r)?,
b: read_scalar(r)?,
t: read_scalar(r)?
};
if bp.L.len() != bp.R.len() {
Err(std::io::Error::new(std::io::ErrorKind::Other, "mismatched L/R len"))?;
}
Ok(bp)
}
}

View File

@@ -0,0 +1,151 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use subtle::{Choice, ConditionallySelectable};
use curve25519_dalek::edwards::EdwardsPoint as DalekPoint;
use group::{ff::Field, Group};
use dalek_ff_group::{Scalar, EdwardsPoint};
use multiexp::multiexp as multiexp_const;
pub(crate) use monero_generators::Generators;
use crate::{INV_EIGHT as DALEK_INV_EIGHT, H as DALEK_H, Commitment, hash_to_scalar as dalek_hash};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::*;
#[inline]
pub(crate) fn INV_EIGHT() -> Scalar {
Scalar(DALEK_INV_EIGHT())
}
#[inline]
pub(crate) fn H() -> EdwardsPoint {
EdwardsPoint(DALEK_H())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
// Components common between variants
pub(crate) const MAX_M: usize = 16;
pub(crate) const LOG_N: usize = 6; // 1 << LOG_N == N
pub(crate) const N: usize = 64;
pub(crate) fn prove_multiexp(pairs: &[(Scalar, EdwardsPoint)]) -> EdwardsPoint {
multiexp_const(pairs) * INV_EIGHT()
}
pub(crate) fn vector_exponent(
generators: &Generators,
a: &ScalarVector,
b: &ScalarVector,
) -> EdwardsPoint {
debug_assert_eq!(a.len(), b.len());
(a * &generators.G[.. a.len()]) + (b * &generators.H[.. b.len()])
}
pub(crate) fn hash_cache(cache: &mut Scalar, mash: &[[u8; 32]]) -> Scalar {
let slice =
&[cache.to_bytes().as_ref(), mash.iter().copied().flatten().collect::<Vec<_>>().as_ref()]
.concat();
*cache = hash_to_scalar(slice);
*cache
}
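// Returns (log2(M * N), M, M * N), where M is the amount of outputs padded to a power of 2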
pub(crate) fn MN(outputs: usize) -> (usize, usize, usize) {
let mut logM = 0;
let mut M;
while {
M = 1 << logM;
(M <= MAX_M) && (M < outputs)
} {
logM += 1;
}
(logM + LOG_N, M, M * N)
}
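// Decompose the amounts into the aL (bits) and aR (bits - 1) vectors from the Bulletproofs paper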
pub(crate) fn bit_decompose(commitments: &[Commitment]) -> (ScalarVector, ScalarVector) {
let (_, M, MN) = MN(commitments.len());
let sv = commitments.iter().map(|c| Scalar::from(c.amount)).collect::<Vec<_>>();
let mut aL = ScalarVector::new(MN);
let mut aR = ScalarVector::new(MN);
for j in 0 .. M {
for i in (0 .. N).rev() {
let bit =
if j < sv.len() { Choice::from((sv[j][i / 8] >> (i % 8)) & 1) } else { Choice::from(0) };
aL.0[(j * N) + i] = Scalar::conditional_select(&Scalar::ZERO, &Scalar::ONE, bit);
aR.0[(j * N) + i] = Scalar::conditional_select(&-Scalar::ONE, &Scalar::ZERO, bit);
}
}
(aL, aR)
}
pub(crate) fn hash_commitments<C: IntoIterator<Item = DalekPoint>>(
commitments: C,
) -> (Scalar, Vec<EdwardsPoint>) {
let V = commitments.into_iter().map(|c| EdwardsPoint(c) * INV_EIGHT()).collect::<Vec<_>>();
(hash_to_scalar(&V.iter().flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>()), V)
}
pub(crate) fn alpha_rho<R: RngCore + CryptoRng>(
rng: &mut R,
generators: &Generators,
aL: &ScalarVector,
aR: &ScalarVector,
) -> (Scalar, EdwardsPoint) {
let ar = Scalar::random(rng);
(ar, (vector_exponent(generators, aL, aR) + (EdwardsPoint::generator() * ar)) * INV_EIGHT())
}
pub(crate) fn LR_statements(
a: &ScalarVector,
G_i: &[EdwardsPoint],
b: &ScalarVector,
H_i: &[EdwardsPoint],
cL: Scalar,
U: EdwardsPoint,
) -> Vec<(Scalar, EdwardsPoint)> {
let mut res = a
.0
.iter()
.copied()
.zip(G_i.iter().copied())
.chain(b.0.iter().copied().zip(H_i.iter().copied()))
.collect::<Vec<_>>();
res.push((cL, U));
res
}
static TWO_N_CELL: OnceLock<ScalarVector> = OnceLock::new();
pub(crate) fn TWO_N() -> &'static ScalarVector {
TWO_N_CELL.get_or_init(|| ScalarVector::powers(Scalar::from(2u8), N))
}
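// For each index i, products[i] is the product of one challenge (or its inverse) per round:
// the bits of i, read most-significant first, select w[j] (bit set) or winv[j] (bit clear)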
pub(crate) fn challenge_products(w: &[Scalar], winv: &[Scalar]) -> Vec<Scalar> {
let mut products = vec![Scalar::ZERO; 1 << w.len()];
products[0] = winv[0];
products[1] = w[0];
for j in 1 .. w.len() {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * w[j];
products[slots - 1] = products[slots / 2] * winv[j];
slots = slots.saturating_sub(2);
}
}
// Sanity check as if the above failed to populate, it'd be critical
for w in &products {
debug_assert!(!bool::from(w.is_zero()));
}
products
}

View File

@@ -0,0 +1,229 @@
#![allow(non_snake_case)]
use std_shims::{
vec::Vec,
io::{self, Read, Write},
};
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, Zeroizing};
use curve25519_dalek::edwards::EdwardsPoint;
use multiexp::BatchVerifier;
use crate::{Commitment, wallet::TransactionError, serialize::*};
pub(crate) mod scalar_vector;
pub(crate) mod core;
use self::core::LOG_N;
pub(crate) mod original;
use self::original::OriginalStruct;
pub(crate) mod plus;
use self::plus::*;
pub(crate) const MAX_OUTPUTS: usize = self::core::MAX_M;
/// Bulletproofs enum, supporting the original and plus formulations.
#[allow(clippy::large_enum_variant)]
#[derive(Clone, PartialEq, Eq, Debug)]
pub enum Bulletproofs {
Original(OriginalStruct),
Plus(AggregateRangeProof),
}
impl Bulletproofs {
fn bp_fields(plus: bool) -> usize {
if plus {
6
} else {
9
}
}
// https://github.com/monero-project/monero/blob/94e67bf96bbc010241f29ada6abc89f49a81759c/
// src/cryptonote_basic/cryptonote_format_utils.cpp#L106-L124
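// The clawback charges back 4/5 of the weight saved by aggregating the range proof, relative to
// using individual 2-output proofs, so aggregation doesn't grant an outsized fee discount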
pub(crate) fn calculate_bp_clawback(plus: bool, n_outputs: usize) -> (usize, usize) {
#[allow(non_snake_case)]
let mut LR_len = 0;
let mut n_padded_outputs = 1;
while n_padded_outputs < n_outputs {
LR_len += 1;
n_padded_outputs = 1 << LR_len;
}
LR_len += LOG_N;
let mut bp_clawback = 0;
if n_padded_outputs > 2 {
let fields = Bulletproofs::bp_fields(plus);
let base = ((fields + (2 * (LOG_N + 1))) * 32) / 2;
let size = (fields + (2 * LR_len)) * 32;
bp_clawback = ((base * n_padded_outputs) - size) * 4 / 5;
}
(bp_clawback, LR_len)
}
pub(crate) fn fee_weight(plus: bool, outputs: usize) -> usize {
#[allow(non_snake_case)]
let (bp_clawback, LR_len) = Bulletproofs::calculate_bp_clawback(plus, outputs);
32 * (Bulletproofs::bp_fields(plus) + (2 * LR_len)) + 2 + bp_clawback
}
/// Prove the amounts committed to by this list of commitments are within [0 .. 2^64).
pub fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
outputs: &[Commitment],
plus: bool,
) -> Result<Bulletproofs, TransactionError> {
if outputs.is_empty() {
Err(TransactionError::NoOutputs)?;
}
if outputs.len() > MAX_OUTPUTS {
Err(TransactionError::TooManyOutputs)?;
}
Ok(if !plus {
Bulletproofs::Original(OriginalStruct::prove(rng, outputs))
} else {
use dalek_ff_group::EdwardsPoint as DfgPoint;
Bulletproofs::Plus(
AggregateRangeStatement::new(outputs.iter().map(|com| DfgPoint(com.calculate())).collect())
.unwrap()
.prove(rng, &Zeroizing::new(AggregateRangeWitness::new(outputs).unwrap()))
.unwrap(),
)
})
}
/// Verify the given Bulletproofs.
#[must_use]
pub fn verify<R: RngCore + CryptoRng>(&self, rng: &mut R, commitments: &[EdwardsPoint]) -> bool {
match self {
Bulletproofs::Original(bp) => bp.verify(rng, commitments),
Bulletproofs::Plus(bp) => {
let mut verifier = BatchVerifier::new(1);
// If this commitment is torsioned (which is allowed), this won't be a well-formed
// dfg::EdwardsPoint (expected to be of prime-order)
// The actual BP+ impl will perform a torsion clear though, making this safe
// TODO: Have AggregateRangeStatement take in dalek EdwardsPoint for clarity on this
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
if !statement.verify(rng, &mut verifier, (), bp.clone()) {
return false;
}
verifier.verify_vartime()
}
}
}
/// Accumulate the verification for the given Bulletproofs into the specified BatchVerifier.
/// Returns false if the Bulletproofs aren't sane, without mutating the BatchVerifier.
/// Returns true if the Bulletproofs are sane, regardless of their validity.
#[must_use]
pub fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, dalek_ff_group::EdwardsPoint>,
id: ID,
commitments: &[EdwardsPoint],
) -> bool {
match self {
Bulletproofs::Original(bp) => bp.batch_verify(rng, verifier, id, commitments),
Bulletproofs::Plus(bp) => {
let Some(statement) = AggregateRangeStatement::new(
commitments.iter().map(|c| dalek_ff_group::EdwardsPoint(*c)).collect(),
) else {
return false;
};
statement.verify(rng, verifier, id, bp.clone())
}
}
}
fn write_core<W: Write, F: Fn(&[EdwardsPoint], &mut W) -> io::Result<()>>(
&self,
w: &mut W,
specific_write_vec: F,
) -> io::Result<()> {
match self {
Bulletproofs::Original(bp) => {
write_point(&bp.A, w)?;
write_point(&bp.S, w)?;
write_point(&bp.T1, w)?;
write_point(&bp.T2, w)?;
write_scalar(&bp.taux, w)?;
write_scalar(&bp.mu, w)?;
specific_write_vec(&bp.L, w)?;
specific_write_vec(&bp.R, w)?;
write_scalar(&bp.a, w)?;
write_scalar(&bp.b, w)?;
write_scalar(&bp.t, w)
}
Bulletproofs::Plus(bp) => {
write_point(&bp.A.0, w)?;
write_point(&bp.wip.A.0, w)?;
write_point(&bp.wip.B.0, w)?;
write_scalar(&bp.wip.r_answer.0, w)?;
write_scalar(&bp.wip.s_answer.0, w)?;
write_scalar(&bp.wip.delta_answer.0, w)?;
specific_write_vec(&bp.wip.L.iter().copied().map(|L| L.0).collect::<Vec<_>>(), w)?;
specific_write_vec(&bp.wip.R.iter().copied().map(|R| R.0).collect::<Vec<_>>(), w)
}
}
}
pub(crate) fn signature_write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_raw_vec(write_point, points, w))
}
pub fn write<W: Write>(&self, w: &mut W) -> io::Result<()> {
self.write_core(w, |points, w| write_vec(write_point, points, w))
}
pub fn serialize(&self) -> Vec<u8> {
let mut serialized = vec![];
self.write(&mut serialized).unwrap();
serialized
}
/// Read Bulletproofs.
pub fn read<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
Ok(Bulletproofs::Original(OriginalStruct {
A: read_point(r)?,
S: read_point(r)?,
T1: read_point(r)?,
T2: read_point(r)?,
taux: read_scalar(r)?,
mu: read_scalar(r)?,
L: read_vec(read_point, r)?,
R: read_vec(read_point, r)?,
a: read_scalar(r)?,
b: read_scalar(r)?,
t: read_scalar(r)?,
}))
}
/// Read Bulletproofs+.
pub fn read_plus<R: Read>(r: &mut R) -> io::Result<Bulletproofs> {
use dalek_ff_group::{Scalar as DfgScalar, EdwardsPoint as DfgPoint};
Ok(Bulletproofs::Plus(AggregateRangeProof {
A: DfgPoint(read_point(r)?),
wip: WipProof {
A: DfgPoint(read_point(r)?),
B: DfgPoint(read_point(r)?),
r_answer: DfgScalar(read_scalar(r)?),
s_answer: DfgScalar(read_scalar(r)?),
delta_answer: DfgScalar(read_scalar(r)?),
L: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
R: read_vec(read_point, r)?.into_iter().map(DfgPoint).collect(),
},
}))
}
}

View File

@@ -0,0 +1,322 @@
use std_shims::{vec::Vec, sync::OnceLock};
use rand_core::{RngCore, CryptoRng};
use zeroize::Zeroize;
use curve25519_dalek::{scalar::Scalar as DalekScalar, edwards::EdwardsPoint as DalekPoint};
use group::{ff::Field, Group};
use dalek_ff_group::{ED25519_BASEPOINT_POINT as G, Scalar, EdwardsPoint};
use multiexp::{BatchVerifier, multiexp};
use crate::{Commitment, ringct::bulletproofs::core::*};
include!(concat!(env!("OUT_DIR"), "/generators.rs"));
static IP12_CELL: OnceLock<Scalar> = OnceLock::new();
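// The inner product of a vector of ones with the powers of two, i.e. 2^64 - 1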
pub(crate) fn IP12() -> Scalar {
*IP12_CELL.get_or_init(|| ScalarVector(vec![Scalar::ONE; N]).inner_product(TWO_N()))
}
pub(crate) fn hadamard_fold(
l: &[EdwardsPoint],
r: &[EdwardsPoint],
a: Scalar,
b: Scalar,
) -> Vec<EdwardsPoint> {
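// Fold the two halves together: res[i] = (a * l[i]) + (b * r[i])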
let mut res = Vec::with_capacity(l.len() / 2);
for i in 0 .. l.len() {
res.push(multiexp(&[(a, l[i]), (b, r[i])]));
}
res
}
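// The original Bulletproof, with its points kept in the transmitted form (multiplied by
// INV_EIGHT)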
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct OriginalStruct {
pub(crate) A: DalekPoint,
pub(crate) S: DalekPoint,
pub(crate) T1: DalekPoint,
pub(crate) T2: DalekPoint,
pub(crate) taux: DalekScalar,
pub(crate) mu: DalekScalar,
pub(crate) L: Vec<DalekPoint>,
pub(crate) R: Vec<DalekPoint>,
pub(crate) a: DalekScalar,
pub(crate) b: DalekScalar,
pub(crate) t: DalekScalar,
}
impl OriginalStruct {
pub(crate) fn prove<R: RngCore + CryptoRng>(
rng: &mut R,
commitments: &[Commitment],
) -> OriginalStruct {
let (logMN, M, MN) = MN(commitments.len());
let (aL, aR) = bit_decompose(commitments);
let commitments_points = commitments.iter().map(Commitment::calculate).collect::<Vec<_>>();
let (mut cache, _) = hash_commitments(commitments_points.clone());
let (sL, sR) =
ScalarVector((0 .. (MN * 2)).map(|_| Scalar::random(&mut *rng)).collect::<Vec<_>>()).split();
let generators = GENERATORS();
let (mut alpha, A) = alpha_rho(&mut *rng, generators, &aL, &aR);
let (mut rho, S) = alpha_rho(&mut *rng, generators, &sL, &sR);
let y = hash_cache(&mut cache, &[A.compress().to_bytes(), S.compress().to_bytes()]);
let mut cache = hash_to_scalar(&y.to_bytes());
let z = cache;
let l0 = aL - z;
let l1 = sL;
let mut zero_twos = Vec::with_capacity(MN);
let zpow = ScalarVector::powers(z, M + 2);
for j in 0 .. M {
for i in 0 .. N {
zero_twos.push(zpow[j + 2] * TWO_N()[i]);
}
}
let yMN = ScalarVector::powers(y, MN);
let r0 = ((aR + z) * &yMN) + &ScalarVector(zero_twos);
let r1 = yMN * &sR;
let (T1, T2, x, mut taux) = {
let t1 = l0.clone().inner_product(&r1) + r0.clone().inner_product(&l1);
let t2 = l1.clone().inner_product(&r1);
let mut tau1 = Scalar::random(&mut *rng);
let mut tau2 = Scalar::random(&mut *rng);
let T1 = prove_multiexp(&[(t1, H()), (tau1, EdwardsPoint::generator())]);
let T2 = prove_multiexp(&[(t2, H()), (tau2, EdwardsPoint::generator())]);
let x =
hash_cache(&mut cache, &[z.to_bytes(), T1.compress().to_bytes(), T2.compress().to_bytes()]);
let taux = (tau2 * (x * x)) + (tau1 * x);
tau1.zeroize();
tau2.zeroize();
(T1, T2, x, taux)
};
let mu = (x * rho) + alpha;
alpha.zeroize();
rho.zeroize();
for (i, gamma) in commitments.iter().map(|c| Scalar(c.mask)).enumerate() {
taux += zpow[i + 2] * gamma;
}
let l = l0 + &(l1 * x);
let r = r0 + &(r1 * x);
let t = l.clone().inner_product(&r);
let x_ip =
hash_cache(&mut cache, &[x.to_bytes(), taux.to_bytes(), mu.to_bytes(), t.to_bytes()]);
let mut a = l;
let mut b = r;
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let mut G_proof = generators.G[.. a.len()].to_vec();
let mut H_proof = generators.H[.. a.len()].to_vec();
H_proof.iter_mut().zip(yinvpow.0.iter()).for_each(|(this_H, yinvpow)| *this_H *= yinvpow);
let U = H() * x_ip;
let mut L = Vec::with_capacity(logMN);
let mut R = Vec::with_capacity(logMN);
while a.len() != 1 {
let (aL, aR) = a.split();
let (bL, bR) = b.split();
let cL = aL.clone().inner_product(&bR);
let cR = aR.clone().inner_product(&bL);
let (G_L, G_R) = G_proof.split_at(aL.len());
let (H_L, H_R) = H_proof.split_at(aL.len());
let L_i = prove_multiexp(&LR_statements(&aL, G_R, &bR, H_L, cL, U));
let R_i = prove_multiexp(&LR_statements(&aR, G_L, &bL, H_R, cR, U));
L.push(L_i);
R.push(R_i);
let w = hash_cache(&mut cache, &[L_i.compress().to_bytes(), R_i.compress().to_bytes()]);
let winv = w.invert().unwrap();
a = (aL * w) + &(aR * winv);
b = (bL * winv) + &(bR * w);
if a.len() != 1 {
G_proof = hadamard_fold(G_L, G_R, winv, w);
H_proof = hadamard_fold(H_L, H_R, w, winv);
}
}
let res = OriginalStruct {
A: *A,
S: *S,
T1: *T1,
T2: *T2,
taux: *taux,
mu: *mu,
L: L.drain(..).map(|L| *L).collect(),
R: R.drain(..).map(|R| *R).collect(),
a: *a[0],
b: *b[0],
t: *t,
};
debug_assert!(res.verify(rng, &commitments_points));
res
}
#[must_use]
fn verify_core<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
// Verify commitments are valid
if commitments.is_empty() || (commitments.len() > MAX_M) {
return false;
}
// Verify L and R are properly sized
if self.L.len() != self.R.len() {
return false;
}
let (logMN, M, MN) = MN(commitments.len());
if self.L.len() != logMN {
return false;
}
// Rebuild all challenges
let (mut cache, commitments) = hash_commitments(commitments.iter().copied());
let y = hash_cache(&mut cache, &[self.A.compress().to_bytes(), self.S.compress().to_bytes()]);
let z = hash_to_scalar(&y.to_bytes());
cache = z;
let x = hash_cache(
&mut cache,
&[z.to_bytes(), self.T1.compress().to_bytes(), self.T2.compress().to_bytes()],
);
let x_ip = hash_cache(
&mut cache,
&[x.to_bytes(), self.taux.to_bytes(), self.mu.to_bytes(), self.t.to_bytes()],
);
let mut w = Vec::with_capacity(logMN);
let mut winv = Vec::with_capacity(logMN);
for (L, R) in self.L.iter().zip(&self.R) {
w.push(hash_cache(&mut cache, &[L.compress().to_bytes(), R.compress().to_bytes()]));
winv.push(cache.invert().unwrap());
}
// Convert the proof from * INV_EIGHT to its actual form
let normalize = |point: &DalekPoint| EdwardsPoint(point.mul_by_cofactor());
let L = self.L.iter().map(normalize).collect::<Vec<_>>();
let R = self.R.iter().map(normalize).collect::<Vec<_>>();
let T1 = normalize(&self.T1);
let T2 = normalize(&self.T2);
let A = normalize(&self.A);
let S = normalize(&self.S);
let commitments = commitments.iter().map(EdwardsPoint::mul_by_cofactor).collect::<Vec<_>>();
// Verify it
let mut proof = Vec::with_capacity(4 + commitments.len());
let zpow = ScalarVector::powers(z, M + 3);
let ip1y = ScalarVector::powers(y, M * N).sum();
let mut k = -(zpow[2] * ip1y);
for j in 1 ..= M {
k -= zpow[j + 2] * IP12();
}
let y1 = Scalar(self.t) - ((z * ip1y) + k);
proof.push((-y1, H()));
proof.push((-Scalar(self.taux), G));
for (j, commitment) in commitments.iter().enumerate() {
proof.push((zpow[j + 2], *commitment));
}
proof.push((x, T1));
proof.push((x * x, T2));
verifier.queue(&mut *rng, id, proof);
proof = Vec::with_capacity(4 + (2 * (MN + logMN)));
let z3 = (Scalar(self.t) - (Scalar(self.a) * Scalar(self.b))) * x_ip;
proof.push((z3, H()));
proof.push((-Scalar(self.mu), G));
proof.push((Scalar::ONE, A));
proof.push((x, S));
{
let ypow = ScalarVector::powers(y, MN);
let yinv = y.invert().unwrap();
let yinvpow = ScalarVector::powers(yinv, MN);
let w_cache = challenge_products(&w, &winv);
let generators = GENERATORS();
for i in 0 .. MN {
let g = (Scalar(self.a) * w_cache[i]) + z;
proof.push((-g, generators.G[i]));
let mut h = Scalar(self.b) * yinvpow[i] * w_cache[(!i) & (MN - 1)];
h -= ((zpow[(i / N) + 2] * TWO_N()[i % N]) + (z * ypow[i])) * yinvpow[i];
proof.push((-h, generators.H[i]));
}
}
for i in 0 .. logMN {
proof.push((w[i] * w[i], L[i]));
proof.push((winv[i] * winv[i], R[i]));
}
verifier.queue(rng, id, proof);
true
}
#[must_use]
pub(crate) fn verify<R: RngCore + CryptoRng>(
&self,
rng: &mut R,
commitments: &[DalekPoint],
) -> bool {
let mut verifier = BatchVerifier::new(1);
if self.verify_core(rng, &mut verifier, (), commitments) {
verifier.verify_vartime()
} else {
false
}
}
#[must_use]
pub(crate) fn batch_verify<ID: Copy + Zeroize, R: RngCore + CryptoRng>(
&self,
rng: &mut R,
verifier: &mut BatchVerifier<ID, EdwardsPoint>,
id: ID,
commitments: &[DalekPoint],
) -> bool {
self.verify_core(rng, verifier, id, commitments)
}
}

View File

@@ -0,0 +1,257 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop, Zeroizing};
use multiexp::{multiexp, multiexp_vartime, BatchVerifier};
use group::{
ff::{Field, PrimeField},
Group, GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::{
Commitment,
ringct::{
bulletproofs::core::{MAX_M, N},
bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators,
transcript::*,
weighted_inner_product::{WipStatement, WipWitness, WipProof},
padded_pow_of_2, u64_decompose,
},
},
};
// Figure 3
#[derive(Clone, Debug)]
pub(crate) struct AggregateRangeStatement {
generators: Generators,
V: Vec<EdwardsPoint>,
}
impl Zeroize for AggregateRangeStatement {
fn zeroize(&mut self) {
self.V.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct AggregateRangeWitness {
values: Vec<u64>,
gammas: Vec<Scalar>,
}
impl AggregateRangeWitness {
pub(crate) fn new(commitments: &[Commitment]) -> Option<Self> {
if commitments.is_empty() || (commitments.len() > MAX_M) {
return None;
}
let mut values = Vec::with_capacity(commitments.len());
let mut gammas = Vec::with_capacity(commitments.len());
for commitment in commitments {
values.push(commitment.amount);
gammas.push(Scalar(commitment.mask));
}
Some(AggregateRangeWitness { values, gammas })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub struct AggregateRangeProof {
pub(crate) A: EdwardsPoint,
pub(crate) wip: WipProof,
}
impl AggregateRangeStatement {
pub(crate) fn new(V: Vec<EdwardsPoint>) -> Option<Self> {
if V.is_empty() || (V.len() > MAX_M) {
return None;
}
Some(Self { generators: Generators::new(), V })
}
fn transcript_A(transcript: &mut Scalar, A: EdwardsPoint) -> (Scalar, Scalar) {
let y = hash_to_scalar(&[transcript.to_repr().as_ref(), A.to_bytes().as_ref()].concat());
let z = hash_to_scalar(y.to_bytes().as_ref());
*transcript = z;
(y, z)
}
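// d_j is the vector with the powers of two in its j-th N-element block and zeroes elsewhere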
fn d_j(j: usize, m: usize) -> ScalarVector {
let mut d_j = Vec::with_capacity(m * N);
for _ in 0 .. (j - 1) * N {
d_j.push(Scalar::ZERO);
}
d_j.append(&mut ScalarVector::powers(Scalar::from(2u8), N).0);
for _ in 0 .. (m - j) * N {
d_j.push(Scalar::ZERO);
}
ScalarVector(d_j)
}
fn compute_A_hat(
mut V: PointVector,
generators: &Generators,
transcript: &mut Scalar,
mut A: EdwardsPoint,
) -> (Scalar, ScalarVector, Scalar, Scalar, ScalarVector, EdwardsPoint) {
let (y, z) = Self::transcript_A(transcript, A);
A = A.mul_by_cofactor();
while V.len() < padded_pow_of_2(V.len()) {
V.0.push(EdwardsPoint::identity());
}
let mn = V.len() * N;
let mut z_pow = Vec::with_capacity(V.len());
let mut d = ScalarVector::new(mn);
for j in 1 ..= V.len() {
z_pow.push(z.pow(Scalar::from(2 * u64::try_from(j).unwrap()))); // TODO: Optimize this
d = d + &(Self::d_j(j, V.len()) * (z_pow[j - 1]));
}
let mut ascending_y = ScalarVector(vec![y]);
for i in 1 .. d.len() {
ascending_y.0.push(ascending_y[i - 1] * y);
}
let y_pows = ascending_y.clone().sum();
let mut descending_y = ascending_y.clone();
descending_y.0.reverse();
let d_descending_y = d.clone() * &descending_y;
let d_descending_y_plus_z = d_descending_y + z;
let y_mn_plus_one = descending_y[0] * y;
let mut commitment_accum = EdwardsPoint::identity();
for (j, commitment) in V.0.iter().enumerate() {
commitment_accum += *commitment * z_pow[j];
}
let neg_z = -z;
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 2);
for (i, d_y_z) in d_descending_y_plus_z.0.iter().enumerate() {
A_terms.push((neg_z, generators.generator(GeneratorsList::GBold1, i)));
A_terms.push((*d_y_z, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((y_mn_plus_one, commitment_accum));
A_terms.push((
((y_pows * z) - (d.sum() * y_mn_plus_one * z) - (y_pows * z.square())),
Generators::g(),
));
(
y,
d_descending_y_plus_z,
y_mn_plus_one,
z,
ScalarVector(z_pow),
A + multiexp_vartime(&A_terms),
)
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
witness: &AggregateRangeWitness,
) -> Option<AggregateRangeProof> {
// Check for consistency with the witness
if self.V.len() != witness.values.len() {
return None;
}
for (commitment, (value, gamma)) in
self.V.iter().zip(witness.values.iter().zip(witness.gammas.iter()))
{
if Commitment::new(**gamma, *value).calculate() != **commitment {
return None;
}
}
let Self { generators, V } = self;
// Monero expects all of these points to be torsion-free
// Generally, for Bulletproofs, it sends points * INV_EIGHT and then performs a torsion clear
// by multiplying by 8
// This also restores the original value due to the preprocessing
// Commitments aren't transmitted * INV_EIGHT though, so this multiplies by INV_EIGHT to enable
// clearing its cofactor without mutating the value
// For some reason, these values are transcripted * INV_EIGHT, not as transmitted
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
// Pad V
while V.len() < padded_pow_of_2(V.len()) {
V.push(EdwardsPoint::identity());
}
let generators = generators.reduce(V.len() * N);
let mut d_js = Vec::with_capacity(V.len());
let mut a_l = ScalarVector(Vec::with_capacity(V.len() * N));
for j in 1 ..= V.len() {
d_js.push(Self::d_j(j, V.len()));
a_l.0.append(&mut u64_decompose(*witness.values.get(j - 1).unwrap_or(&0)).0);
}
let a_r = a_l.clone() - Scalar::ONE;
let alpha = Scalar::random(&mut *rng);
let mut A_terms = Vec::with_capacity((generators.len() * 2) + 1);
for (i, a_l) in a_l.0.iter().enumerate() {
A_terms.push((*a_l, generators.generator(GeneratorsList::GBold1, i)));
}
for (i, a_r) in a_r.0.iter().enumerate() {
A_terms.push((*a_r, generators.generator(GeneratorsList::HBold1, i)));
}
A_terms.push((alpha, Generators::h()));
let mut A = multiexp(&A_terms);
A_terms.zeroize();
// Multiply by INV_EIGHT per earlier commentary
A.0 *= crate::INV_EIGHT();
let (y, d_descending_y_plus_z, y_mn_plus_one, z, z_pow, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, A);
let a_l = a_l - z;
let a_r = a_r + &d_descending_y_plus_z;
let mut alpha = alpha;
for j in 1 ..= witness.gammas.len() {
alpha += z_pow[j - 1] * witness.gammas[j - 1] * y_mn_plus_one;
}
Some(AggregateRangeProof {
A,
wip: WipStatement::new(generators, A_hat, y)
.prove(rng, transcript, &Zeroizing::new(WipWitness::new(a_l, a_r, alpha).unwrap()))
.unwrap(),
})
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
proof: AggregateRangeProof,
) -> bool {
let Self { generators, V } = self;
let mut V = V.into_iter().map(|V| EdwardsPoint(V.0 * crate::INV_EIGHT())).collect::<Vec<_>>();
let mut transcript = initial_transcript(V.iter());
V.iter_mut().for_each(|V| *V = V.mul_by_cofactor());
let generators = generators.reduce(V.len() * N);
let (y, _, _, _, _, A_hat) =
Self::compute_A_hat(PointVector(V), &generators, &mut transcript, proof.A);
WipStatement::new(generators, A_hat, y).verify(rng, verifier, id, transcript, proof.wip)
}
}

View File

@@ -0,0 +1,85 @@
#![allow(non_snake_case)]
use group::Group;
use dalek_ff_group::{Scalar, EdwardsPoint};
pub(crate) use crate::ringct::bulletproofs::scalar_vector::ScalarVector;
mod point_vector;
pub(crate) use point_vector::PointVector;
pub(crate) mod transcript;
pub(crate) mod weighted_inner_product;
pub(crate) use weighted_inner_product::*;
pub(crate) mod aggregate_range_proof;
pub(crate) use aggregate_range_proof::*;
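// Return the smallest power of 2 which is >= i (1 if i is 0 or 1)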
pub(crate) fn padded_pow_of_2(i: usize) -> usize {
let mut next_pow_of_2 = 1;
while next_pow_of_2 < i {
next_pow_of_2 <<= 1;
}
next_pow_of_2
}
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
pub(crate) enum GeneratorsList {
GBold1,
HBold1,
}
// TODO: Table these
#[derive(Clone, Debug)]
pub(crate) struct Generators {
g_bold1: &'static [EdwardsPoint],
h_bold1: &'static [EdwardsPoint],
}
mod generators {
use std_shims::sync::OnceLock;
use monero_generators::Generators;
include!(concat!(env!("OUT_DIR"), "/generators_plus.rs"));
}
impl Generators {
#[allow(clippy::new_without_default)]
pub(crate) fn new() -> Self {
let gens = generators::GENERATORS();
Generators { g_bold1: &gens.G, h_bold1: &gens.H }
}
pub(crate) fn len(&self) -> usize {
self.g_bold1.len()
}
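// In the Bulletproofs+ paper's notation, g is the generator amounts are bound to (Monero's H)
// and h is the generator masks are bound to (the Ed25519 basepoint)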
pub(crate) fn g() -> EdwardsPoint {
dalek_ff_group::EdwardsPoint(crate::H())
}
pub(crate) fn h() -> EdwardsPoint {
EdwardsPoint::generator()
}
pub(crate) fn generator(&self, list: GeneratorsList, i: usize) -> EdwardsPoint {
match list {
GeneratorsList::GBold1 => self.g_bold1[i],
GeneratorsList::HBold1 => self.h_bold1[i],
}
}
pub(crate) fn reduce(&self, generators: usize) -> Self {
// Round to the nearest power of 2
let generators = padded_pow_of_2(generators);
assert!(generators <= self.g_bold1.len());
Generators { g_bold1: &self.g_bold1[.. generators], h_bold1: &self.h_bold1[.. generators] }
}
}
// Returns the little-endian decomposition.
fn u64_decompose(value: u64) -> ScalarVector {
let mut bits = ScalarVector::new(64);
for bit in 0 .. 64 {
bits[bit] = Scalar::from((value >> bit) & 1);
}
bits
}

View File

@@ -0,0 +1,50 @@
use core::ops::{Index, IndexMut};
use std_shims::vec::Vec;
use zeroize::{Zeroize, ZeroizeOnDrop};
use dalek_ff_group::EdwardsPoint;
#[cfg(test)]
use multiexp::multiexp;
#[cfg(test)]
use crate::ringct::bulletproofs::plus::ScalarVector;
#[derive(Clone, PartialEq, Eq, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct PointVector(pub(crate) Vec<EdwardsPoint>);
impl Index<usize> for PointVector {
type Output = EdwardsPoint;
fn index(&self, index: usize) -> &EdwardsPoint {
&self.0[index]
}
}
impl IndexMut<usize> for PointVector {
fn index_mut(&mut self, index: usize) -> &mut EdwardsPoint {
&mut self.0[index]
}
}
impl PointVector {
#[cfg(test)]
pub(crate) fn multiexp(&self, vector: &ScalarVector) -> EdwardsPoint {
debug_assert_eq!(self.len(), vector.len());
let mut res = Vec::with_capacity(self.len());
for (point, scalar) in self.0.iter().copied().zip(vector.0.iter().copied()) {
res.push((scalar, point));
}
multiexp(&res)
}
pub(crate) fn len(&self) -> usize {
self.0.len()
}
pub(crate) fn split(mut self) -> (Self, Self) {
debug_assert!(self.len() > 1);
let r = self.0.split_off(self.0.len() / 2);
debug_assert_eq!(self.len(), r.len());
(self, PointVector(r))
}
}

View File

@@ -0,0 +1,24 @@
use std_shims::{sync::OnceLock, vec::Vec};
use dalek_ff_group::{Scalar, EdwardsPoint};
use monero_generators::{hash_to_point as raw_hash_to_point};
use crate::{hash, hash_to_scalar as dalek_hash};
// Monero starts BP+ transcripts with the following constant.
static TRANSCRIPT_CELL: OnceLock<[u8; 32]> = OnceLock::new();
pub(crate) fn TRANSCRIPT() -> [u8; 32] {
// Why this uses a hash_to_point is completely unknown.
*TRANSCRIPT_CELL
.get_or_init(|| raw_hash_to_point(hash(b"bulletproof_plus_transcript")).compress().to_bytes())
}
pub(crate) fn hash_to_scalar(data: &[u8]) -> Scalar {
Scalar(dalek_hash(data))
}
pub(crate) fn initial_transcript(commitments: core::slice::Iter<'_, EdwardsPoint>) -> Scalar {
let commitments_hash =
hash_to_scalar(&commitments.flat_map(|V| V.compress().to_bytes()).collect::<Vec<_>>());
hash_to_scalar(&[TRANSCRIPT().as_ref(), &commitments_hash.to_bytes()].concat())
}

View File

@@ -0,0 +1,444 @@
use std_shims::vec::Vec;
use rand_core::{RngCore, CryptoRng};
use zeroize::{Zeroize, ZeroizeOnDrop};
use multiexp::{BatchVerifier, multiexp, multiexp_vartime};
use group::{
ff::{Field, PrimeField},
GroupEncoding,
};
use dalek_ff_group::{Scalar, EdwardsPoint};
use crate::ringct::bulletproofs::plus::{
ScalarVector, PointVector, GeneratorsList, Generators, padded_pow_of_2, transcript::*,
};
// Figure 1
#[derive(Clone, Debug)]
pub(crate) struct WipStatement {
generators: Generators,
P: EdwardsPoint,
y: ScalarVector,
}
impl Zeroize for WipStatement {
fn zeroize(&mut self) {
self.P.zeroize();
self.y.zeroize();
}
}
#[derive(Clone, Debug, Zeroize, ZeroizeOnDrop)]
pub(crate) struct WipWitness {
a: ScalarVector,
b: ScalarVector,
alpha: Scalar,
}
impl WipWitness {
pub(crate) fn new(mut a: ScalarVector, mut b: ScalarVector, alpha: Scalar) -> Option<Self> {
if a.0.is_empty() || (a.len() != b.len()) {
return None;
}
// Pad to the nearest power of 2
let missing = padded_pow_of_2(a.len()) - a.len();
a.0.reserve(missing);
b.0.reserve(missing);
for _ in 0 .. missing {
a.0.push(Scalar::ZERO);
b.0.push(Scalar::ZERO);
}
Some(Self { a, b, alpha })
}
}
#[derive(Clone, PartialEq, Eq, Debug, Zeroize)]
pub(crate) struct WipProof {
pub(crate) L: Vec<EdwardsPoint>,
pub(crate) R: Vec<EdwardsPoint>,
pub(crate) A: EdwardsPoint,
pub(crate) B: EdwardsPoint,
pub(crate) r_answer: Scalar,
pub(crate) s_answer: Scalar,
pub(crate) delta_answer: Scalar,
}
impl WipStatement {
pub(crate) fn new(generators: Generators, P: EdwardsPoint, y: Scalar) -> Self {
debug_assert_eq!(generators.len(), padded_pow_of_2(generators.len()));
// y ** n
let mut y_vec = ScalarVector::new(generators.len());
y_vec[0] = y;
for i in 1 .. y_vec.len() {
y_vec[i] = y_vec[i - 1] * y;
}
Self { generators, P, y: y_vec }
}
fn transcript_L_R(transcript: &mut Scalar, L: EdwardsPoint, R: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), L.to_bytes().as_ref(), R.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
fn transcript_A_B(transcript: &mut Scalar, A: EdwardsPoint, B: EdwardsPoint) -> Scalar {
let e = hash_to_scalar(
&[transcript.to_repr().as_ref(), A.to_bytes().as_ref(), B.to_bytes().as_ref()].concat(),
);
*transcript = e;
e
}
// Prover's variant of the shared code block to calculate G/H/P when n > 1
// Returns each permutation of G/H since the prover needs to operate on each permutation
// P is dropped as it's unused in the prover's path
// TODO: It'd still probably be faster to keep in terms of the original generators, both between
// the reduced amount of group operations and the potential tabling of the generators under
// multiexp
#[allow(clippy::too_many_arguments)]
fn next_G_H(
transcript: &mut Scalar,
mut g_bold1: PointVector,
mut g_bold2: PointVector,
mut h_bold1: PointVector,
mut h_bold2: PointVector,
L: EdwardsPoint,
R: EdwardsPoint,
y_inv_n_hat: Scalar,
) -> (Scalar, Scalar, Scalar, Scalar, PointVector, PointVector) {
debug_assert_eq!(g_bold1.len(), g_bold2.len());
debug_assert_eq!(h_bold1.len(), h_bold2.len());
debug_assert_eq!(g_bold1.len(), h_bold1.len());
let e = Self::transcript_L_R(transcript, L, R);
let inv_e = e.invert().unwrap();
// This vartime is safe as all of these arguments are public
let mut new_g_bold = Vec::with_capacity(g_bold1.len());
let e_y_inv = e * y_inv_n_hat;
for g_bold in g_bold1.0.drain(..).zip(g_bold2.0.drain(..)) {
new_g_bold.push(multiexp_vartime(&[(inv_e, g_bold.0), (e_y_inv, g_bold.1)]));
}
let mut new_h_bold = Vec::with_capacity(h_bold1.len());
for h_bold in h_bold1.0.drain(..).zip(h_bold2.0.drain(..)) {
new_h_bold.push(multiexp_vartime(&[(e, h_bold.0), (inv_e, h_bold.1)]));
}
let e_square = e.square();
let inv_e_square = inv_e.square();
(e, inv_e, e_square, inv_e_square, PointVector(new_g_bold), PointVector(new_h_bold))
}
/*
This has room for optimization worth investigating further. It currently takes
an iterative approach. It can be optimized further via divide and conquer.
Assume there are 4 challenges.
Iterative approach (current):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across that result and column 2.
3. Do the optimal multiplications across that result and column 3.
Divide and conquer (worth investigating further):
1. Do the optimal multiplications across challenge column 0 and 1.
2. Do the optimal multiplications across challenge column 2 and 3.
3. Multiply both results together.
When there are 4 challenges (n=16), the iterative approach does 28 multiplications
versus divide and conquer's 24.
*/
fn challenge_products(challenges: &[(Scalar, Scalar)]) -> Vec<Scalar> {
let mut products = vec![Scalar::ONE; 1 << challenges.len()];
if !challenges.is_empty() {
products[0] = challenges[0].1;
products[1] = challenges[0].0;
for (j, challenge) in challenges.iter().enumerate().skip(1) {
let mut slots = (1 << (j + 1)) - 1;
while slots > 0 {
products[slots] = products[slots / 2] * challenge.0;
products[slots - 1] = products[slots / 2] * challenge.1;
slots = slots.saturating_sub(2);
}
}
// Sanity check since if the above failed to populate, it'd be critical
for product in &products {
debug_assert!(!bool::from(product.is_zero()));
}
}
products
}
pub(crate) fn prove<R: RngCore + CryptoRng>(
self,
rng: &mut R,
mut transcript: Scalar,
witness: &WipWitness,
) -> Option<WipProof> {
let WipStatement { generators, P, mut y } = self;
#[cfg(not(debug_assertions))]
let _ = P;
if generators.len() != witness.a.len() {
return None;
}
let (g, h) = (Generators::g(), Generators::h());
let mut g_bold = vec![];
let mut h_bold = vec![];
for i in 0 .. generators.len() {
g_bold.push(generators.generator(GeneratorsList::GBold1, i));
h_bold.push(generators.generator(GeneratorsList::HBold1, i));
}
let mut g_bold = PointVector(g_bold);
let mut h_bold = PointVector(h_bold);
// Check P has the expected relationship
#[cfg(debug_assertions)]
{
let mut P_terms = witness
.a
.0
.iter()
.copied()
.zip(g_bold.0.iter().copied())
.chain(witness.b.0.iter().copied().zip(h_bold.0.iter().copied()))
.collect::<Vec<_>>();
P_terms.push((witness.a.clone().weighted_inner_product(&witness.b, &y), g));
P_terms.push((witness.alpha, h));
debug_assert_eq!(multiexp(&P_terms), P);
P_terms.zeroize();
}
let mut a = witness.a.clone();
let mut b = witness.b.clone();
let mut alpha = witness.alpha;
// From here on, g_bold.len() is used as n
debug_assert_eq!(g_bold.len(), a.len());
let mut L_vec = vec![];
let mut R_vec = vec![];
// else n > 1 case from figure 1
while g_bold.len() > 1 {
let (a1, a2) = a.clone().split();
let (b1, b2) = b.clone().split();
let (g_bold1, g_bold2) = g_bold.split();
let (h_bold1, h_bold2) = h_bold.split();
let n_hat = g_bold1.len();
debug_assert_eq!(a1.len(), n_hat);
debug_assert_eq!(a2.len(), n_hat);
debug_assert_eq!(b1.len(), n_hat);
debug_assert_eq!(b2.len(), n_hat);
debug_assert_eq!(g_bold1.len(), n_hat);
debug_assert_eq!(g_bold2.len(), n_hat);
debug_assert_eq!(h_bold1.len(), n_hat);
debug_assert_eq!(h_bold2.len(), n_hat);
let y_n_hat = y[n_hat - 1];
y.0.truncate(n_hat);
let d_l = Scalar::random(&mut *rng);
let d_r = Scalar::random(&mut *rng);
let c_l = a1.clone().weighted_inner_product(&b2, &y);
let c_r = (a2.clone() * y_n_hat).weighted_inner_product(&b1, &y);
// TODO: Calculate these with a batch inversion
let y_inv_n_hat = y_n_hat.invert().unwrap();
let mut L_terms = (a1.clone() * y_inv_n_hat)
.0
.drain(..)
.zip(g_bold2.0.iter().copied())
.chain(b2.0.iter().copied().zip(h_bold1.0.iter().copied()))
.collect::<Vec<_>>();
L_terms.push((c_l, g));
L_terms.push((d_l, h));
let L = multiexp(&L_terms) * Scalar(crate::INV_EIGHT());
L_vec.push(L);
L_terms.zeroize();
let mut R_terms = (a2.clone() * y_n_hat)
.0
.drain(..)
.zip(g_bold1.0.iter().copied())
.chain(b1.0.iter().copied().zip(h_bold2.0.iter().copied()))
.collect::<Vec<_>>();
R_terms.push((c_r, g));
R_terms.push((d_r, h));
let R = multiexp(&R_terms) * Scalar(crate::INV_EIGHT());
R_vec.push(R);
R_terms.zeroize();
let (e, inv_e, e_square, inv_e_square);
(e, inv_e, e_square, inv_e_square, g_bold, h_bold) =
Self::next_G_H(&mut transcript, g_bold1, g_bold2, h_bold1, h_bold2, L, R, y_inv_n_hat);
a = (a1 * e) + &(a2 * (y_n_hat * inv_e));
b = (b1 * inv_e) + &(b2 * e);
alpha += (d_l * e_square) + (d_r * inv_e_square);
debug_assert_eq!(g_bold.len(), a.len());
debug_assert_eq!(g_bold.len(), h_bold.len());
debug_assert_eq!(g_bold.len(), b.len());
}
// n == 1 case from figure 1
debug_assert_eq!(g_bold.len(), 1);
debug_assert_eq!(h_bold.len(), 1);
debug_assert_eq!(a.len(), 1);
debug_assert_eq!(b.len(), 1);
let r = Scalar::random(&mut *rng);
let s = Scalar::random(&mut *rng);
let delta = Scalar::random(&mut *rng);
let eta = Scalar::random(&mut *rng);
let ry = r * y[0];
let mut A_terms =
vec![(r, g_bold[0]), (s, h_bold[0]), ((ry * b[0]) + (s * y[0] * a[0]), g), (delta, h)];
let A = multiexp(&A_terms) * Scalar(crate::INV_EIGHT());
A_terms.zeroize();
let mut B_terms = vec![(ry * s, g), (eta, h)];
let B = multiexp(&B_terms) * Scalar(crate::INV_EIGHT());
B_terms.zeroize();
let e = Self::transcript_A_B(&mut transcript, A, B);
let r_answer = r + (a[0] * e);
let s_answer = s + (b[0] * e);
let delta_answer = eta + (delta * e) + (alpha * e.square());
Some(WipProof { L: L_vec, R: R_vec, A, B, r_answer, s_answer, delta_answer })
}
pub(crate) fn verify<Id: Copy + Zeroize, R: RngCore + CryptoRng>(
self,
rng: &mut R,
verifier: &mut BatchVerifier<Id, EdwardsPoint>,
id: Id,
mut transcript: Scalar,
mut proof: WipProof,
) -> bool {
let WipStatement { generators, P, y } = self;
let (g, h) = (Generators::g(), Generators::h());
// Verify the L/R lengths
{
let mut lr_len = 0;
while (1 << lr_len) < generators.len() {
lr_len += 1;
}
if (proof.L.len() != lr_len) ||
(proof.R.len() != lr_len) ||
(generators.len() != (1 << lr_len))
{
return false;
}
}
let inv_y = {
let inv_y = y[0].invert().unwrap();
let mut res = Vec::with_capacity(y.len());
res.push(inv_y);
while res.len() < y.len() {
res.push(inv_y * res.last().unwrap());
}
res
};
let mut P_terms = vec![(Scalar::ONE, P)];
P_terms.reserve(6 + (2 * generators.len()) + proof.L.len());
let mut challenges = Vec::with_capacity(proof.L.len());
let product_cache = {
let mut es = Vec::with_capacity(proof.L.len());
for (L, R) in proof.L.iter_mut().zip(proof.R.iter_mut()) {
es.push(Self::transcript_L_R(&mut transcript, *L, *R));
*L = L.mul_by_cofactor();
*R = R.mul_by_cofactor();
}
let mut inv_es = es.clone();
let mut scratch = vec![Scalar::ZERO; es.len()];
group::ff::BatchInverter::invert_with_external_scratch(&mut inv_es, &mut scratch);
drop(scratch);
debug_assert_eq!(es.len(), inv_es.len());
debug_assert_eq!(es.len(), proof.L.len());
debug_assert_eq!(es.len(), proof.R.len());
for ((e, inv_e), (L, R)) in
es.drain(..).zip(inv_es.drain(..)).zip(proof.L.iter().zip(proof.R.iter()))
{
debug_assert_eq!(e.invert().unwrap(), inv_e);
challenges.push((e, inv_e));
let e_square = e.square();
let inv_e_square = inv_e.square();
P_terms.push((e_square, *L));
P_terms.push((inv_e_square, *R));
}
Self::challenge_products(&challenges)
};
let e = Self::transcript_A_B(&mut transcript, proof.A, proof.B);
proof.A = proof.A.mul_by_cofactor();
proof.B = proof.B.mul_by_cofactor();
let neg_e_square = -e.square();
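// Queue a single multiexp which must evaluate to the identity for the proof to verify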
let mut multiexp = P_terms;
multiexp.reserve(4 + (2 * generators.len()));
for (scalar, _) in &mut multiexp {
*scalar *= neg_e_square;
}
let re = proof.r_answer * e;
for i in 0 .. generators.len() {
let mut scalar = product_cache[i] * re;
if i > 0 {
scalar *= inv_y[i - 1];
}
multiexp.push((scalar, generators.generator(GeneratorsList::GBold1, i)));
}
let se = proof.s_answer * e;
for i in 0 .. generators.len() {
multiexp.push((
se * product_cache[product_cache.len() - 1 - i],
generators.generator(GeneratorsList::HBold1, i),
));
}
multiexp.push((-e, proof.A));
multiexp.push((proof.r_answer * y[0] * proof.s_answer, g));
multiexp.push((proof.delta_answer, h));
multiexp.push((-Scalar::ONE, proof.B));
verifier.queue(rng, id, multiexp);
true
}
}

Some files were not shown because too many files have changed in this diff