Mirror of https://github.com/serai-dex/serai.git, synced 2025-12-08 12:19:24 +00:00

Commit: Move docs to spec
spec/DKG Exclusions.md (new file, 23 lines)
Upon an issue with the DKG, the honest validators must remove the malicious
validators. Ideally, a threshold signature would be used, yet that would require
a threshold key (which would require authentication by a MuSig signature). A
MuSig signature which specifies the signing set (or rather, the excluded
signers) achieves the most efficiency.

While that resolves the on-chain behavior, the Tributary also has to perform
exclusion. This has the following forms:

1) Rejecting further transactions (required)
2) Rejecting further participation in Tendermint

With regards to rejecting further participation in Tendermint, it's *ideal* to
remove the validator from the list of validators. Each validator removed from
participation, yet not from the list of validators, increases the likelihood of
the network failing to form consensus.

With regards to economic security, an honest 67% may remove a faulty
(explicitly or simply offline) 33%, letting 67% of the remaining 67% (4/9ths)
take control of the associated private keys. In such a case, the malicious
parties are defined as the 4/9ths of validators with access to the private key
and the 33% removed (who together form >67% of the originally intended
validator set and have presumably provided enough stake to cover losses).
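The 4/9ths figure above follows from simple arithmetic, sketched here (the fractions are illustrative stand-ins for the 67%/33% thresholds):

```python
# Sketch: economic-security arithmetic from the paragraph above.
# An honest 67% may remove a faulty 33%; 67% of the remaining 67%
# can then reach the signing threshold.
total = 1.0
removed = total * (1 / 3)          # the excluded 33%
remaining = total - removed        # the 67% still participating
controlling = remaining * (2 / 3)  # 67% of the remaining 67%

assert abs(controlling - 4 / 9) < 1e-9  # 4/9ths of the original set
# Together, the removed 33% and the controlling 4/9ths exceed 67%
# of the originally intended validator set:
assert removed + controlling > 2 / 3
```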
spec/Getting Started.md (new file, 91 lines)
# Getting Started

### Dependencies

##### Ubuntu

```
sudo apt-get install -y build-essential clang-11 pkg-config cmake git curl protobuf-compiler
```

### Install rustup

##### Linux

```
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```

##### macOS

```
brew install rustup
```

### Install Rust

```
rustup update
rustup toolchain install stable
rustup target add wasm32-unknown-unknown
rustup toolchain install nightly
rustup target add wasm32-unknown-unknown --toolchain nightly
```
### Install Solidity

```
cargo install svm-rs
svm install 0.8.16
svm use 0.8.16
```
### Install foundry (for tests)

```
cargo install --git https://github.com/foundry-rs/foundry --profile local --locked forge cast chisel anvil
```

### Clone and Build Serai

```
git clone https://github.com/serai-dex/serai
cd serai
cargo build --release --all-features
```

### Run Tests

Running tests requires:

- [A rootless Docker setup](https://docs.docker.com/engine/security/rootless/)
- A properly configured Bitcoin regtest node (available via Docker)
- A properly configured Monero regtest node (available via Docker)
- A properly configured monero-wallet-rpc instance (available via Docker)

To start the required daemons, one may run:

```
cargo run -p serai-orchestrator -- key_gen dev
cargo run -p serai-orchestrator -- setup dev
```

and then:

```
cargo run -p serai-orchestrator -- start dev bitcoin-daemon monero-daemon monero-wallet-rpc
```

Finally, to run the tests:

```
cargo test --all-features
```
spec/Serai.md (new file, 14 lines)
# Serai

Serai is a decentralized execution layer whose validators form multisig wallets
for various connected networks, offering secure decentralized control of foreign
coins to applications built on it.

Serai is exemplified by Serai DEX, an automated-market-maker (AMM) decentralized
exchange, allowing swapping Bitcoin, Ether, DAI, and Monero. It is the premier
application of Serai.

### Substrate

Serai is based on [Substrate](https://docs.substrate.io), a blockchain framework
offering a robust infrastructure.
spec/coordinator/Coordinator.md (new file, 39 lines)
# Coordinator

The coordinator is a service which communicates with all of the processors,
all of the other coordinators over a secondary P2P network, and with the Serai
node.

This document primarily details its flow with regards to the Serai node and
processor.

### New Set Event

On `validator_sets::pallet::Event::NewSet`, the coordinator spawns a tributary
for the new set. It additionally sends the processor
`key_gen::CoordinatorMessage::GenerateKey`.

### Key Generation Event

On `validator_sets::pallet::Event::KeyGen`, the coordinator sends
`substrate::CoordinatorMessage::ConfirmKeyPair` to the processor.

### Batch

On `substrate::ProcessorMessage::Batch`, the coordinator notes what the on-chain
`Batch` should be, for verification once published.

### SignedBatch

On `substrate::ProcessorMessage::SignedBatch`, the coordinator publishes an
unsigned transaction containing the signed batch to the Serai blockchain.

### Sign Completed

On `sign::ProcessorMessage::Completed`, the coordinator makes a tributary
transaction containing the transaction hash the signing process was supposedly
completed with.

Due to rushing adversaries, the actual transaction completing the plan may be
distinct on-chain. These messages solely exist to coordinate the signing
process, not to determine chain state.
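The event-to-message flow above can be summarized as a small dispatch table (a sketch only; the dictionary and `handle` helper are illustrative, not the actual coordinator code):

```python
# Sketch of the coordinator's Substrate-event -> processor-message flow
# described above. Names mirror the spec; the structure is illustrative.
FLOW = {
    "validator_sets::pallet::Event::NewSet":
        "key_gen::CoordinatorMessage::GenerateKey",
    "validator_sets::pallet::Event::KeyGen":
        "substrate::CoordinatorMessage::ConfirmKeyPair",
}

def handle(event: str) -> str:
    """Return the processor message the coordinator sends for this event."""
    return FLOW[event]

assert handle("validator_sets::pallet::Event::NewSet").endswith("GenerateKey")
assert handle("validator_sets::pallet::Event::KeyGen").endswith("ConfirmKeyPair")
```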
spec/coordinator/Tributary.md (new file, 130 lines)
# Tributary

A tributary is a side-chain, created for a specific multisig instance, used
as a verifiable broadcast layer.

## Transactions

### Key Gen Commitments

`DkgCommitments` is created when a processor sends the coordinator
`key_gen::ProcessorMessage::Commitments`. When all validators participating in
a multisig publish `DkgCommitments`, the coordinator sends the processor
`key_gen::CoordinatorMessage::Commitments`, excluding the processor's own
commitments.

### Key Gen Shares

`DkgShares` is created when a processor sends the coordinator
`key_gen::ProcessorMessage::Shares`. The coordinator additionally includes its
own pair of MuSig nonces, used in a signing protocol to inform Substrate of the
key's successful creation.

When all validators participating in a multisig publish `DkgShares`, the
coordinator sends the processor `key_gen::CoordinatorMessage::Shares`, excluding
the processor's own shares and the MuSig nonces.

### Key Gen Confirmation

`DkgConfirmed` is created when a processor sends the coordinator
`key_gen::ProcessorMessage::GeneratedKeyPair`. The coordinator takes the MuSig
nonces it prior associated with this DKG attempt and publishes its signature
share.

When all validators participating in the multisig publish `DkgConfirmed`, an
extrinsic calling `validator_sets::pallet::set_keys` is made to confirm the
keys.

Setting the keys on the Serai blockchain as such lets it receive `Batch`s,
provides a BFT consensus guarantee, and enables accessibility by users. While
the tributary itself could offer both the BFT consensus guarantee and
verifiable accessibility to users, both would require users to access the
tributary. Since Substrate must already know the resulting key, there's no value
to usage of the tributary as such, as all desired properties are already offered
by Substrate.

Note that the keys are confirmed when Substrate emits a `KeyGen` event,
regardless of whether the Tributary has the expected `DkgConfirmed` transactions.
### Batch

When *TODO*, a `Batch` transaction is provided. This is used to have the group
acknowledge and synchronize around a batch, without the overhead of voting in
its acknowledgment.

When a `Batch` transaction is included, participants are allowed to publish
transactions to produce a threshold signature for the batch synchronized over.

### Substrate Block

`SubstrateBlock` is provided when the processor sends the coordinator
`substrate::ProcessorMessage::SubstrateBlockAck`.

When a `SubstrateBlock` transaction is included, participants are allowed to
publish transactions for the signing protocols it causes.
### Batch Preprocess

`BatchPreprocess` is created when a processor sends the coordinator
`coordinator::ProcessorMessage::BatchPreprocess` and a `Batch` transaction
allowing the batch to be signed has already been included on chain.

When `t` validators have published `BatchPreprocess` transactions, if the
coordinator represents one of the first `t` validators to do so, a
`coordinator::ProcessorMessage::BatchPreprocesses` is sent to the processor,
excluding the processor's own preprocess.

### Batch Share

`BatchShare` is created when a processor sends the coordinator
`coordinator::ProcessorMessage::BatchShare`. The relevant `Batch`
transaction having already been included on chain follows from
`coordinator::ProcessorMessage::BatchShare` being a response to a message which
also has that precondition.

When the `t` validators who first published `BatchPreprocess` transactions have
published `BatchShare` transactions, if the coordinator represents one of the
first `t` validators to do so, a `coordinator::ProcessorMessage::BatchShares`
with the relevant shares (excluding the processor's own) is sent to the
processor.
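The "first `t` validators" rule, used by both the batch and transaction signing flows, can be sketched as follows (validator names and the helper are illustrative, not the actual serai-dex coordinator code):

```python
# Sketch of the "first t validators" selection described above: the signing
# set is fixed by whichever t preprocesses landed on the tributary first.
def first_t(preprocess_order: list[str], t: int) -> set[str]:
    """Validators whose preprocesses were among the first t included on chain."""
    return set(preprocess_order[:t])

order = ["alice", "bob", "carol", "dave"]  # on-chain inclusion order
signers = first_t(order, t=3)
assert signers == {"alice", "bob", "carol"}
# Only these t validators' later share transactions are awaited:
assert "dave" not in signers
```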
### Sign Preprocess

`SignPreprocess` is created when a processor sends the coordinator
`sign::ProcessorMessage::Preprocess` and a `SubstrateBlock` transaction
allowing the transaction to be signed has already been included on chain.

When `t` validators have published `SignPreprocess` transactions, if the
coordinator represents one of the first `t` validators to do so, a
`sign::ProcessorMessage::Preprocesses` is sent to the processor,
excluding the processor's own preprocess.

### Sign Share

`SignShare` is created when a processor sends the coordinator
`sign::ProcessorMessage::Share`. The relevant `SubstrateBlock` transaction
having already been included on chain follows from
`sign::ProcessorMessage::Share` being a response to a message which
also has that precondition.

When the `t` validators who first published `SignPreprocess` transactions have
published `SignShare` transactions, if the coordinator represents one of the
first `t` validators to do so, a `sign::ProcessorMessage::Shares` with the
relevant shares (excluding the processor's own) is sent to the processor.

### Sign Completed

`SignCompleted` is created when a processor sends the coordinator
`sign::ProcessorMessage::Completed`. As soon as 34% of validators send
`Completed`, the signing protocol is no longer attempted.

## Re-attempts

Key generation protocols may fail if a validator is malicious. Signing
protocols, whether batch or transaction, may fail if a validator goes offline or
takes too long to respond. Accordingly, the tributary will schedule re-attempts.
These are communicated with `key_gen::CoordinatorMessage::GenerateKey`,
`coordinator::CoordinatorMessage::BatchReattempt`, and
`sign::CoordinatorMessage::Reattempt`.

TODO: Document the re-attempt scheduling logic.
spec/cryptography/Distributed Key Generation.md (new file, 35 lines)
# Distributed Key Generation

Serai uses a modification of Pedersen's Distributed Key Generation, which is
actually Feldman's Verifiable Secret Sharing Scheme run by every participant, as
described in the FROST paper. The modification included in FROST was to include
a Schnorr Proof of Knowledge for coefficient zero, preventing rogue key attacks.
This results in a two-round protocol.

### Encryption

In order to protect the secret shares during communication, the `dkg` library
establishes a public key for encryption at the start of a given protocol.
Every encrypted message (such as the secret shares) then includes a per-message
encryption key. These two keys are used in an Elliptic-curve Diffie-Hellman
handshake to derive a shared key. This shared key is then hashed to obtain a key
and IV for use in a ChaCha20 stream cipher instance, which is xor'd against a
message to encrypt it.
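The key-derivation step above can be sketched as follows. This is illustrative only: the ECDH handshake and the ChaCha20 keystream are elided, the domain-separation string is invented, and none of these names are the `dkg` library's API.

```python
import hashlib

# Sketch: hash an ECDH shared secret into a cipher key and IV, as described
# above. The actual scheme then XORs a ChaCha20 keystream (keyed by key/IV)
# against the message to encrypt it.
def derive_key_and_iv(shared_secret: bytes) -> tuple[bytes, bytes]:
    digest = hashlib.sha512(b"dkg-encryption" + shared_secret).digest()
    return digest[:32], digest[32:44]  # 32-byte ChaCha20 key, 12-byte IV

key, iv = derive_key_and_iv(b"example shared secret")
assert len(key) == 32 and len(iv) == 12
# The same shared secret always derives the same key material:
assert derive_key_and_iv(b"example shared secret") == (key, iv)
```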
### Blame

Since each message has a distinct key attached, and accordingly a distinct
shared key, it's possible to reveal the shared key for a specific message
without revealing any other message's decryption keys. This is utilized when a
participant misbehaves. A participant who receives an invalid encrypted message
publishes its key, which they can do without concern for side effects. With the
key published, all participants can decrypt the message in order to decide
blame.

While key reuse by a participant is considered as them revealing the messages
themselves, and therefore out of scope, there is an attack where a malicious
adversary claims another participant's encryption key. They'll fail to encrypt
their message, and the recipient will issue a blame statement. This blame
statement, intended to reveal the malicious adversary, also reveals the message
by the participant whose keys were co-opted. To resolve this, a
proof-of-possession is also included with encrypted messages, ensuring only
those actually with the per-message keys can claim to use them.
spec/cryptography/FROST.md (new file, 62 lines)
# FROST

Serai implements [FROST](https://eprint.iacr.org/2020/852), as specified in
[draft-irtf-cfrg-frost-11](https://datatracker.ietf.org/doc/draft-irtf-cfrg-frost/).

### Modularity

In order to support other algorithms which decompose to Schnorr, our FROST
implementation is generic, able to run any algorithm satisfying its `Algorithm`
trait. With these algorithms, there's frequently a requirement for further
transcripting than what FROST expects. Accordingly, the transcript format is
also modular, so formats which aren't naive like the IETF's can be used.

### Extensions

In order to support algorithms which require their nonces be represented across
multiple generators, FROST supports providing a nonce's commitments across
multiple generators. In order to ensure their correctness, an extended
[CP93's Discrete Log Equality Proof](https://chaum.com/wp-content/uploads/2021/12/Wallet_Databases.pdf)
is used. The extension is simply to transcript `n` generators, instead of just
two, enabling proving for all of them at once.

Since FROST nonces are binomial, every nonce would require two DLEq proofs. To
make this more efficient, we hash their commitments to obtain a binding factor,
before doing a single DLEq proof for `d + be`, similar to how FROST calculates
its nonces (as well as MuSig's key aggregation).

As some algorithms require multiple nonces, effectively including multiple
Schnorr signatures within one signature, the library also supports providing
multiple nonces. The second component of a FROST nonce is intended to be
multiplied by a per-participant binding factor to ensure the security of FROST.
When additional nonces are used, this is actually a per-nonce per-participant
binding factor.

When multiple nonces are used, with multiple generators, we use a single DLEq
proof for all nonces, merging their challenges. This provides a proof of `1 + n`
elements instead of `2n`.
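The element counts above can be checked with trivial arithmetic (a sketch; the helper names are illustrative):

```python
# Sketch: proof sizes from the paragraph above. Merging challenges yields one
# proof of 1 + n elements, versus n separate two-element DLEq proofs (2n total).
def merged_size(n: int) -> int:
    return 1 + n

def separate_size(n: int) -> int:
    return 2 * n

# The merged proof is strictly smaller whenever more than one element is proven:
for n in range(2, 10):
    assert merged_size(n) < separate_size(n)
assert merged_size(4) == 5 and separate_size(4) == 8
```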
Finally, to support additive offset signing schemes (accounts, stealth
addresses, randomization), it's possible to specify a scalar offset for keys.
The public key signed for is also offset by this value. During the signing
process, the offset is explicitly transcripted. Then, the offset is added to the
participant with the lowest ID.
### Caching

modular-frost supports caching a preprocess. This is done by having all
preprocesses use a seeded RNG. Accordingly, the entire preprocess can be derived
from the RNG seed, making the cache just the seed.

Reusing preprocesses would enable a third party to recover your private key
share. Accordingly, you MUST NOT reuse preprocesses. Third-party knowledge of
your preprocess would also enable their recovery of your private key share.
Accordingly, you MUST treat cached preprocesses with the same security as your
private key share.

Since a reused seed will lead to a reused preprocess, seeded RNGs are generally
frowned upon when doing multisignature operations. This isn't an issue as each
new preprocess obtains a fresh seed from the specified RNG. Assuming the
provided RNG isn't generating the same seed multiple times, the only way for
this seeded RNG to fail is if a preprocess is loaded multiple times, which was
already a failure point.
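The seed-as-cache idea can be sketched as follows (illustrative only; the hash stands in for the seeded RNG, and none of this is the modular-frost API):

```python
import hashlib

# Sketch: with a seeded RNG, the entire preprocess is a deterministic function
# of the seed, so caching the seed suffices to cache the preprocess.
def preprocess_from_seed(seed: bytes) -> bytes:
    # Stand-in for deriving the nonces from a seeded RNG.
    return hashlib.sha256(b"preprocess" + seed).digest()

seed = b"a fresh seed drawn from the provided RNG"
# Caching the seed and re-deriving yields the identical preprocess:
assert preprocess_from_seed(seed) == preprocess_from_seed(seed)
# A distinct seed yields a distinct preprocess; a seed, like a preprocess,
# must never be reused across signing attempts:
assert preprocess_from_seed(seed) != preprocess_from_seed(b"another seed")
```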
spec/integrations/Bitcoin.md (new file, 23 lines)
# Bitcoin

### Addresses

Bitcoin addresses are an enum, defined as follows:

- `p2pkh`: 20-byte hash.
- `p2sh`: 20-byte hash.
- `p2wpkh`: 20-byte hash.
- `p2wsh`: 32-byte hash.
- `p2tr`: 32-byte key.

### In Instructions

Bitcoin In Instructions are present via the transaction's last output in the
form of `OP_RETURN`, and accordingly limited to 80 bytes. `origin` is
automatically set to the transaction's first input's address, if recognized.
If it's not recognized, the multisig's current Bitcoin address is used,
causing any failure to become a donation.
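The `OP_RETURN` embedding above can be sketched as follows. The script layout (`OP_RETURN` plus a pushdata) is standard Bitcoin; the helper name is illustrative, and the instruction bytes themselves would be the SCALE-encoded Shorthand:

```python
# Sketch: embed an In Instruction in an OP_RETURN output, per the 80-byte limit.
OP_RETURN = 0x6A     # standard opcode
OP_PUSHDATA1 = 0x4C  # push opcode for lengths over 75 bytes

def op_return_script(instruction: bytes) -> bytes:
    if len(instruction) > 80:
        raise ValueError("In Instructions are limited to 80 bytes")
    # Lengths up to 75 bytes use a single-byte push opcode;
    # longer pushes require OP_PUSHDATA1.
    if len(instruction) <= 75:
        return bytes([OP_RETURN, len(instruction)]) + instruction
    return bytes([OP_RETURN, OP_PUSHDATA1, len(instruction)]) + instruction

script = op_return_script(b"example instruction")
assert script[0] == OP_RETURN
assert script[2:] == b"example instruction"
```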
### Out Instructions

Out Instructions ignore `data`.
spec/integrations/Ethereum.md (new file, 38 lines)
# Ethereum

### Addresses

Ethereum addresses are 20-byte hashes.

### In Instructions

Ethereum In Instructions are present via being appended to the calldata
transferring funds to Serai. `origin` is automatically set to the party from
which funds are being transferred. For an ERC20, this is `from`. For ETH, this
is the caller.

### Out Instructions

`data` is limited to 512 bytes.

If `data` is provided, the Ethereum Router will call a contract-calling child
contract in order to sandbox it. The first byte of `data` designates which
child contract to call. After this byte is read, `data` solely refers to its
remaining bytes. The child contract is sent the funds before this call is
performed.
##### Child Contract 0

This contract is intended to enable connecting with other protocols, and should
be used to convert withdrawn assets to other assets on Ethereum.

1) Transfers the asset to `destination`.
2) Calls `destination` with `data`.

##### Child Contract 1

This contract is intended to enable authenticated calls from Serai.

1) Transfers the asset to `destination`.
2) Calls `destination` with `data[.. 4], serai_address, data[4 ..]`, where
`serai_address` is the address which triggered this Out Instruction.
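Child Contract 1's calldata splice can be sketched as follows. This is illustrative only: it assumes a 32-byte `serai_address` (an assumption, not stated above) and elides ABI encoding details, showing just the byte layout `data[.. 4], serai_address, data[4 ..]`:

```python
# Sketch: splice the Serai address between the 4-byte function selector and
# the remaining call arguments, per Child Contract 1's step 2 above.
def authenticated_calldata(data: bytes, serai_address: bytes) -> bytes:
    assert len(serai_address) == 32  # assumed width, for illustration
    selector, args = data[:4], data[4:]
    return selector + serai_address + args

data = bytes.fromhex("aabbccdd") + b"\x01" * 32
out = authenticated_calldata(data, b"\x02" * 32)
assert out[:4] == bytes.fromhex("aabbccdd")  # selector preserved
assert out[4:36] == b"\x02" * 32             # serai_address injected
assert out[36:] == b"\x01" * 32              # original arguments follow
```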
spec/integrations/Instructions.md (new file, 123 lines)
# Instructions

Instructions are used to communicate with networks connected to Serai, and they
come in two forms:

- In Instructions are programmable specifications paired with incoming coins,
  encoded into transactions on connected networks. Serai will parse included
  instructions when it receives coins, executing the included specs.

- Out Instructions detail how to transfer coins, either to a Serai address or
  an address native to the network of the coins in question.

A transaction containing an In Instruction and an Out Instruction (to a native
address) will receive coins to Serai and send coins from Serai, without
requiring directly performing any transactions on Serai itself.

All instructions are encoded under [Shorthand](#shorthand). Shorthand provides
frequent use cases to create minimal data representations on connected networks.

Instructions are interpreted according to their non-Serai network. Addresses
have no validation performed unless otherwise noted. If the processor is
instructed to act on invalid data, it will drop the entire instruction.

### Serialization

Instructions are SCALE encoded.
### In Instruction

InInstruction is an enum of:

- `Transfer`
- `Dex(Data)`

The specified target will be minted an appropriate amount of the respective
Serai token. If `Dex`, the encoded call will be executed.

### Refundable In Instruction

- `origin` (Option\<ExternalAddress>): Address, from the network of origin,
  which sent coins in.
- `instruction` (InInstruction): The action to perform with the incoming
  coins.

Networks may automatically provide `origin`. If they do, the instruction may
still provide `origin`, overriding the automatically provided value.

If the instruction fails, coins are scheduled to be returned to `origin`,
if provided.

### Out Instruction

- `address` (ExternalAddress): Address to transfer the coins included with
  this instruction to.
- `data` (Option\<Data>): Data to include when transferring coins.

No validation of external addresses/data is performed on-chain. If data is
specified for a chain not supporting data, it is silently dropped.
### Destination

Destination is an enum of SeraiAddress and OutInstruction.

### Shorthand

Shorthand is an enum which expands to a Refundable In Instruction.

##### Raw

Raw Shorthand contains a Refundable In Instruction directly. This is a verbose
fallback option for infrequent use cases not covered by Shorthand.

##### Swap

- `origin` (Option\<ExternalAddress>): Refundable In Instruction's `origin`.
- `coin` (Coin): Coin to swap funds for.
- `minimum` (Amount): Minimum amount of `coin` to receive.
- `out` (Destination): Final destination for funds.

which expands to:

```
RefundableInInstruction {
  origin,
  instruction: InInstruction::Dex(swap(Incoming Asset, coin, minimum, out)),
}
```

where `swap` is a function which:

1) Swaps the incoming funds for SRI.
2) Swaps the SRI for `coin`.
3) Checks the amount of `coin` received is greater than `minimum`.
4) Executes `out` with the amount of `coin` received.

##### Add Liquidity

- `origin` (Option\<ExternalAddress>): Refundable In Instruction's `origin`.
- `minimum` (Amount): Minimum amount of SRI tokens to swap half for.
- `gas` (Amount): Amount of SRI to send to `address` to cover gas in the
  future.
- `address` (Address): Account to send the created liquidity tokens to.

which expands to:

```
RefundableInInstruction {
  origin,
  instruction: InInstruction::Dex(
    swap_and_add_liquidity(Incoming Asset, minimum, gas, address)
  ),
}
```

where `swap_and_add_liquidity` is a function which:

1) Swaps half of the incoming funds for SRI.
2) Checks the amount of SRI received is greater than `minimum`.
3) Calls `swap_and_add_liquidity` with the amount of SRI received - `gas`, and
   a matching amount of the incoming coin.
4) Transfers any leftover funds to `address`.
spec/integrations/Monero.md (new file, 37 lines)
# Monero

### Addresses

Monero addresses are structs, defined as follows:

- `kind`: Enum {
    Standard,
    Subaddress,
    Featured { flags: u8 }
  }
- `spend`: [u8; 32]
- `view`: [u8; 32]

Integrated addresses are not supported due to only being able to send to one
per Monero transaction. Supporting them would add a level of complexity
to Serai which isn't worth it.

This definition of Featured Addresses is non-standard since the flags are
represented by a u8, not a VarInt. Currently, only half of the bits are used,
with no further planned features. Accordingly, it should be fine to fix its
size. If needed, another enum entry for a 2-byte flags Featured Address could be
added.

This definition is also non-standard by not having a Payment ID field. This is
a consequence of not supporting integrated addresses.
### In Instructions

Monero In Instructions are present via `tx.extra`, specifically via inclusion
in a `TX_EXTRA_NONCE` tag. The tag is followed by the VarInt length of its
contents, and then additionally marked by a byte `127`. The following data is
limited to 254 bytes.
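The `tx.extra` layout above can be sketched as follows. The `TX_EXTRA_NONCE` tag value `0x02` is Monero's standard tag; the helper name is illustrative, and the VarInt encoding shown covers only the one- and two-byte cases (sufficient for the 255-byte maximum contents of marker plus data):

```python
# Sketch: encode an In Instruction into a tx.extra nonce field, per the layout
# described above: tag, VarInt length, marker byte 127, then the instruction.
TX_EXTRA_NONCE = 0x02  # standard Monero extra-field tag
MARKER = 127

def extra_nonce_field(instruction: bytes) -> bytes:
    if len(instruction) > 254:
        raise ValueError("In Instructions are limited to 254 bytes")
    contents = bytes([MARKER]) + instruction
    length = len(contents)
    # Monero VarInt: 7 bits per byte, continuation bit set on all but the last.
    varint = (
        bytes([length]) if length < 128
        else bytes([(length & 0x7F) | 0x80, length >> 7])
    )
    return bytes([TX_EXTRA_NONCE]) + varint + contents

field = extra_nonce_field(b"example")
assert field[0] == TX_EXTRA_NONCE
assert field[2] == MARKER
assert field[3:] == b"example"
```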
### Out Instructions

Out Instructions ignore `data`.
spec/media/icon.svg (new file, 1 line)
<?xml version="1.0" encoding="UTF-8"?><svg id="Layer_1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 512 512"><defs><style>.cls-1{fill:#2753d1;fill-rule:evenodd;}</style></defs><path class="cls-1" d="m342.21,61.49c5.05-6.31,14.27-7.33,20.59-2.29l48.82,39.02c3.47,2.78,5.5,6.98,5.5,11.43s-2.02,8.65-5.5,11.43l-48.82,39.02c-6.32,5.05-15.53,4.03-20.59-2.29-5.05-6.31-4.03-15.52,2.29-20.57l16.22-12.96h-153.53c-45.84,0-83,37.13-83,82.93s37.16,82.93,83,82.93h97.64c8.09,0,14.65,6.55,14.65,14.63s-6.56,14.63-14.65,14.63h-97.64c-62.02,0-112.29-50.23-112.29-112.19s50.27-112.19,112.29-112.19h153.53l-16.22-12.96c-6.32-5.05-7.34-14.26-2.29-20.57Zm-149.67,145.73c0-8.08,6.56-14.63,14.65-14.63h97.64c62.02,0,112.29,50.23,112.29,112.2s-50.27,112.19-112.29,112.19h-153.53l16.22,12.96c6.32,5.05,7.34,14.26,2.29,20.57-5.05,6.31-14.27,7.33-20.59,2.28l-48.82-39.02c-3.47-2.78-5.5-6.98-5.5-11.43s2.02-8.65,5.5-11.43l48.82-39.03c6.32-5.05,15.53-4.03,20.59,2.29,5.05,6.31,4.03,15.52-2.29,20.57l-16.22,12.96h153.53c45.84,0,83-37.13,83-82.93s-37.16-82.93-83-82.93h-97.64c-8.09,0-14.65-6.55-14.65-14.63Z"/></svg>
spec/policy/Canonical Chain.md (new file, 72 lines)
# Canonical Chain

As Serai is a network connected to many external networks, at some point we will
likely have to ask ourselves what the canonical chain for a network is. This
document intends to establish soft, non-binding policy, in the hopes it'll guide
most discussions on the matter.

The canonical chain is the chain Serai follows and honors transactions on. Serai
does not guarantee the availability nor integrity of operations on any chain
other than the canonical chain. Which chain is considered canonical depends on
several factors.

### Finalization

Serai finalizes blocks from external networks onto itself. Once a block is
finalized, it is considered irreversible. Accordingly, the primary tenet
regarding which chain Serai will honor is the chain Serai has finalized. We can
only assume the integrity of our coins on that chain.
### Node Software

Only node software which passes a quality threshold and actively identifies as
belonging to an external network's protocol should be run. Never should a
transformative node (a node trying to create a new network from an existing one)
be run in place of a node actually for the external network. Beyond active
identification, it must have community recognition as belonging.

If the majority of a community actively identifying as the network stands behind
a hard fork, it should not be considered a new network, but the next step of
the existing one. If a hard fork breaks Serai's integrity, it should not be
supported.

Multiple independent nodes should be run in order to reduce the likelihood of
vulnerabilities to any specific node's faults.
### Rollbacks

Over time, various networks have rolled back in response to exploits. A rollback
should undergo the same scrutiny as a hard fork. If the rollback breaks Serai's
integrity, yet someone identifying as from the project offers to restore
integrity out-of-band, integrity is considered kept so long as the offer is
followed through on.

Since a rollback would break Serai's finalization policy, a technical note on
how it could be implemented is provided.

Assume a blockchain from `0 .. 100` exists, with `100a ..= 500a` being rolled
back blocks. The new chain extends from `99` with `100b ..= 200b`. Serai would
define the canonical chain as `0 .. 100`, `100a ..= 500a`, `100b ..= 200b`, with
`100b` building off `500a`. Serai would have to perform data-availability for
`100a ..= 500a` (such as via a JSON file in-tree), and would have to modify the
processor to edit its `Eventuality`s/UTXOs at `500a` back to the state at `99`.
Any `Burn`s handled after `99` should be handled once again, if the transactions
from `100a ..= 500a` cannot simply be carried over.
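The canonical ordering in the example above can be made concrete with a short
Python sketch; the block labels and list representation are purely illustrative:

```python
def canonical_chain(original: list[str], rolled_back: list[str],
                    new_chain: list[str]) -> list[str]:
    """Serai's canonical ordering across a rollback: the rolled-back blocks
    stay in the canonical chain, with the new chain's first block treated as
    building off the last rolled-back block."""
    return original + rolled_back + new_chain

# Blocks `0 .. 100`, rolled-back `100a ..= 500a`, new chain `100b ..= 200b`
chain = canonical_chain(
    [str(i) for i in range(100)],
    [f"{i}a" for i in range(100, 501)],
    [f"{i}b" for i in range(100, 201)],
)
```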
### On Fault

If the canonical chain does put Serai's coins into an invalid state,
irreversibly and without amends, then the discrepancy should be amortized across
all users as feasible, yet affected operations should otherwise halt if under
permanent duress.

For example, if Serai lists a token which has a by-governance blacklist
function, and is blacklisted without appeal, Serai should destroy all associated
sriXYZ and cease operations.

If a bug, either in the chain or in Serai's own code, causes a loss of 10% of
coins (without amends), operations should halt until all outputs in the system
can have their virtual amount reduced by a total amount equal to the loss,
proportionally applied to each output. Alternatively, Serai could decrease all
token balances by 10%. All liquidity/swap operations should be halted until
users are given proper time to withdraw, if they so choose, before operations
resume.
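The proportional reduction described above can be sketched as follows; the
remainder handling (charging rounding dust to the largest output) is an
assumption for illustration, not something the document specifies:

```python
def amortize_loss(outputs: list[int], loss: int) -> list[int]:
    """Reduce each output's virtual amount proportionally to cover `loss`.

    `outputs` are virtual amounts in atomic units. Any rounding remainder is
    charged to the largest output so the total reduction equals `loss`.
    """
    total = sum(outputs)
    assert 0 <= loss <= total
    # Floor each output's proportional share of the loss
    reductions = [(amount * loss) // total for amount in outputs]
    # Charge the rounding remainder to the largest output
    remainder = loss - sum(reductions)
    reductions[outputs.index(max(outputs))] += remainder
    return [amount - cut for amount, cut in zip(outputs, reductions)]
```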
176
spec/processor/Multisig Rotation.md
Normal file
@@ -0,0 +1,176 @@
# Multisig Rotation

Substrate is expected to determine when a new validator set instance will be
created, and with it, a new multisig. Upon the successful creation of a new
multisig, as determined by the new multisig setting their key pair on Substrate,
rotation begins.

### Timeline

The following timeline is established:

1) The new multisig is created, and has its keys set on Serai. Once the next
   `Batch` with a new external network block is published, its block becomes the
   "queue block". The new multisig is set to activate at the "queue block", plus
   `CONFIRMATIONS` blocks (the "activation block").

   We don't use the last `Batch`'s external network block, as that `Batch` may
   be older than `CONFIRMATIONS` blocks. Any yet-to-be-included-and-finalized
   `Batch` will be within `CONFIRMATIONS` blocks of what any processor has
   scanned however, as it'll wait for inclusion and finalization before
   continuing scanning.

2) Once the "activation block" itself has been finalized on Serai, UIs should
   start exclusively using the new multisig. If the "activation block" isn't
   finalized within `2 * CONFIRMATIONS` blocks, UIs should stop making
   transactions to any multisig on that network.

   Waiting for Serai's finalization prevents a UI from using an unfinalized
   "activation block" before a re-organization to a shorter chain. If a
   transaction to Serai was carried from the unfinalized "activation block" to
   the shorter chain, it'd no longer be after the "activation block" and
   accordingly would be ignored.

   We could, instead of waiting for Serai to finalize the block, wait for the
   block to have `CONFIRMATIONS` confirmations. This would prevent needing to
   wait for an indeterminate amount of time for Serai to finalize the
   "activation block", with the knowledge it should be finalized. Doing so would
   open UIs to eclipse attacks, where they live on an alternate chain where a
   possible "activation block" is finalized, yet Serai finalizes a distinct
   "activation block". If the alternate chain was longer than the finalized
   chain, the above issue would be reopened.

   The reason for UIs stopping under abnormal behavior is as follows. Given a
   sufficiently delayed `Batch` for the "activation block", UIs will use the old
   multisig past the point it will be deprecated. Accordingly, UIs must realize
   when `Batch`s are so delayed and continued transactions are a risk. While
   `2 * CONFIRMATIONS` is presumably well within the 6 hour period (defined
   below), that period exists for low-fee transactions at times of congestion.
   It does not exist for UIs with old state, though it can be used to compensate
   for them (reducing the tolerance for inclusion delays). `2 * CONFIRMATIONS`
   is before the 6 hour period is enacted, preserving the tolerance for
   inclusion delays, yet still should only happen under highly abnormal
   circumstances.

   In order to minimize the time it takes for the "activation block" to be
   finalized, a `Batch` will always be created for it, regardless of whether it
   would otherwise have a `Batch` created.
3) The prior multisig continues handling `Batch`s and `Burn`s for
   `CONFIRMATIONS` blocks, plus 10 minutes, after the "activation block".

   The first `CONFIRMATIONS` blocks are due to the fact the new multisig
   shouldn't actually be sent coins during this period, making it irrelevant.
   If coins are prematurely sent to the new multisig, they're artificially
   delayed until the end of the `CONFIRMATIONS` blocks plus 10 minutes period.
   This prevents an adversary from minting Serai tokens using coins in the new
   multisig, yet then burning them to drain the prior multisig, creating a lack
   of liquidity for several blocks.

   The reason for the 10 minutes is to provide grace to honest UIs. Since UIs
   will wait until Serai confirms the "activation block" for keys before sending
   to them, which will take `CONFIRMATIONS` blocks plus some latency, UIs would
   make transactions to the prior multisig past the end of this period if it was
   `CONFIRMATIONS` alone. Since the next period is `CONFIRMATIONS` blocks, which
   is how long transactions take to confirm, transactions made past the end of
   this period would only be received after the next period. After the next
   period, the prior multisig adds fees and a delay to all received funds (as it
   forwards the funds from itself to the new multisig). The 10 minutes provides
   grace for latency.

   The 10 minutes is a delay on anyone who immediately transitions to the new
   multisig, in a no-latency environment, yet the delay is preferable to fees
   from forwarding. It also should be less than 10 minutes thanks to various
   latencies.
4) The prior multisig continues handling `Batch`s and `Burn`s for another
   `CONFIRMATIONS` blocks.

   This is for two reasons:

   1) Coins sent to the new multisig still need time to gain sufficient
      confirmations.
   2) All outputs belonging to the prior multisig should become available within
      `CONFIRMATIONS` blocks.

   All `Burn`s handled during this period should use the new multisig for the
   change address. This should effect a transfer of most outputs.

   With the expected transfer of most outputs, and the new multisig receiving
   new external transactions, the new multisig takes the responsibility of
   signing all unhandled and newly emitted `Burn`s.

5) For the next 6 hours, all non-`Branch` outputs received are immediately
   forwarded to the new multisig. Only external transactions to the new multisig
   are included in `Batch`s.

   The new multisig infers the `InInstruction`, and refund address, for
   forwarded `External` outputs via reading what they were for the original
   `External` output.

   Alternatively, the `InInstruction`, with the refund address explicitly
   included, could be included in the forwarding transaction. This may fail if
   the `InInstruction` omitted the refund address and is too large to fit in a
   transaction with one explicitly included. On such failure, the refund would
   be immediately issued instead.
6) Once the 6 hour period has expired, the prior multisig stops handling outputs
   it didn't itself create. Any remaining `Eventuality`s are completed, and any
   available/freshly available outputs are forwarded (creating new
   `Eventuality`s which also need to successfully resolve).

   Once the 6 hour period has expired, no `Eventuality`s remain, and all
   outputs are forwarded, the multisig publishes a final `Batch` of the first
   block, plus `CONFIRMATIONS`, which met these conditions, regardless of
   whether it would've otherwise had a `Batch`. No further actions by it, nor
   its validators, are expected (unless, of course, those validators remain
   present in the new multisig).

7) The new multisig confirms all transactions from all prior multisigs were made
   as expected, including the reported `Batch`s.

   Unfortunately, we cannot solely check the immediately prior multisig due to
   the ability for two sequential malicious multisigs to steal. If multisig
   `n - 2` only transfers a fraction of its coins to multisig `n - 1`, multisig
   `n - 1` can 'honestly' operate on the dishonest state it was given,
   laundering it. This would let multisig `n - 1` forward the results of its
   as-expected operations from a dishonest starting point to the new multisig,
   and multisig `n` would attest to multisig `n - 1`'s expected (and therefore
   presumed honest) operations, assuming liability. This would cause an honest
   multisig to face full liability for the invalid state, causing it to be fully
   slashed (as needed to reacquire any lost coins).

   This would appear short-circuitable if multisig `n - 1` transfers coins
   exceeding the relevant Serai tokens' supply. Serai never expects to operate
   in an over-solvent state, yet balance should trend upwards due to a flat fee
   applied to each received output (preventing a griefing attack). Any balance
   greater than the tokens' supply may have had funds skimmed off the top, yet
   it'd still guarantee the solvency of Serai without any additional fees
   passed to users. Unfortunately, due to the requirement to verify the `Batch`s
   published (as else the Serai tokens' supply may be manipulated), this cannot
   actually be achieved (at least, not without a ZK proof the published `Batch`s
   were correct).

8) The new multisig publishes the next `Batch`, signifying its acceptance of
   full responsibilities and a successful close of the prior multisig.
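One reading of the period boundaries in the timeline above, expressed in
external network block heights, can be sketched in Python. `blocks_per_10_min`
and the exact reference points for each period are assumptions made for
illustration only; the implementation defines the authoritative boundaries:

```python
def rotation_timeline(queue_block: int, confirmations: int,
                      blocks_per_10_min: int) -> dict[str, int]:
    """Illustrative block heights for the rotation periods described above."""
    activation = queue_block + confirmations
    return {
        # Step 1: the new multisig activates here
        "activation_block": activation,
        # Step 2: UIs halt if the activation block isn't finalized by now
        "ui_halt_deadline": activation + 2 * confirmations,
        # Step 3: prior multisig handles Batches/Burns until here
        "prior_handles_until": activation + confirmations + blocks_per_10_min,
        # Step 4: after another CONFIRMATIONS blocks, responsibility shifts
        "new_multisig_takes_over":
            activation + 2 * confirmations + blocks_per_10_min,
    }
```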
### Latency and Fees

Slightly before the end of step 3, the new multisig should start receiving new
external outputs. These won't be confirmed for another `CONFIRMATIONS` blocks,
and the new multisig won't start handling `Burn`s for another `CONFIRMATIONS`
blocks plus 10 minutes. Accordingly, the new multisig should only become
responsible for `Burn`s shortly after it has taken ownership of the stream of
newly received coins.

Before it takes responsibility, it also should've been transferred all internal
outputs under the standard scheduling flow. Any delayed outputs will be
immediately forwarded, and external stragglers are only reported to Serai once
sufficiently confirmed in the new multisig. Accordingly, liquidity should avoid
fragmentation during rotation. The only latency should be the 10 minutes
present, plus delayed outputs (which should've been immediately usable) having
to wait another `CONFIRMATIONS` blocks to be confirmed once forwarded.

Immediate forwarding does unfortunately prevent batching inputs to reduce fees.
Given immediate forwarding only applies to latent outputs, considered
exceptional, and the protocol's fee handling ensures solvency, this is accepted.
126
spec/processor/Processor.md
Normal file
@@ -0,0 +1,126 @@
# Processor

The processor is a service which has an instance spawned per network. It is
responsible for several tasks, from scanning an external network to signing
transactions with payments.

This document primarily discusses its flow with regards to the coordinator.

### Generate Key

On `key_gen::CoordinatorMessage::GenerateKey`, the processor begins a pair of
instances of the distributed key generation protocol specified in the FROST
paper.

The first instance is for a key to use on the external network. The second
instance is for a Ristretto public key used to publish data to the Serai
blockchain. This pair of FROST DKG instances is considered a single instance of
Serai's overall key generation protocol.

The commitments for both protocols are sent to the coordinator in a single
`key_gen::ProcessorMessage::Commitments`.

### Key Gen Commitments

On `key_gen::CoordinatorMessage::Commitments`, the processor continues the
specified key generation instance. The secret shares for each fellow
participant are sent to the coordinator in a
`key_gen::ProcessorMessage::Shares`.

#### Key Gen Shares

On `key_gen::CoordinatorMessage::Shares`, the processor completes the specified
key generation instance. The generated key pair is sent to the coordinator in a
`key_gen::ProcessorMessage::GeneratedKeyPair`.
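The key generation round-trips above can be summarized as a small message
table. This is a Python sketch of the expected request/response flow only, not
the actual message types' definitions:

```python
# Expected processor reply for each coordinator key-gen message, per the
# Generate Key / Key Gen Commitments / Key Gen Shares sections above
KEY_GEN_FLOW = [
    ("key_gen::CoordinatorMessage::GenerateKey",
     "key_gen::ProcessorMessage::Commitments"),
    ("key_gen::CoordinatorMessage::Commitments",
     "key_gen::ProcessorMessage::Shares"),
    ("key_gen::CoordinatorMessage::Shares",
     "key_gen::ProcessorMessage::GeneratedKeyPair"),
]

def respond(coordinator_message: str) -> str:
    """Reply the processor sends for a given coordinator key-gen message."""
    return dict(KEY_GEN_FLOW)[coordinator_message]
```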
### Confirm Key Pair

On `substrate::CoordinatorMessage::ConfirmKeyPair`, the processor starts using
the newly confirmed key, scanning blocks on the external network for
transfers to it.

### External Network Block

When the external network has a new block, which is considered finalized
(either due to being literally finalized or due to having a sufficient amount
of confirmations), it's scanned.

Outputs to the key of Serai's multisig are saved to the database. Outputs which
newly transfer into Serai are used to build `Batch`s for the block. The
processor then begins a threshold signature protocol with its key pair's
Ristretto key to sign the `Batch`s.

The `Batch`s are each sent to the coordinator in a
`substrate::ProcessorMessage::Batch`, enabling the coordinator to know what
`Batch`s *should* be published to Serai. After each
`substrate::ProcessorMessage::Batch`, the preprocess for the first instance of
its signing protocol is sent to the coordinator in a
`coordinator::ProcessorMessage::BatchPreprocess`.

As a design comment, we *may* be able to sign now-possible, already-scheduled
branch/leaf transactions at this point. Doing so would be giving a mutable
borrow over the scheduler to both the external network and the Serai network,
and would accordingly be unsafe. We may want to look at splitting the scheduler
in two, in order to reduce latency (TODO).
### Batch Preprocesses

On `coordinator::CoordinatorMessage::BatchPreprocesses`, the processor
continues the specified batch signing protocol, sending
`coordinator::ProcessorMessage::BatchShare` to the coordinator.

### Batch Shares

On `coordinator::CoordinatorMessage::BatchShares`, the processor
completes the specified batch signing protocol. If successful, the processor
stops signing for this batch and sends
`substrate::ProcessorMessage::SignedBatch` to the coordinator.

### Batch Re-attempt

On `coordinator::CoordinatorMessage::BatchReattempt`, the processor will create
a new instance of the batch signing protocol. The new protocol's preprocess is
sent to the coordinator in a `coordinator::ProcessorMessage::BatchPreprocess`.

### Substrate Block

On `substrate::CoordinatorMessage::SubstrateBlock`, the processor:

1) Marks all blocks, up to the external block now considered finalized by
   Serai, as having had their batches signed.
2) Adds the new outputs from newly finalized blocks to the scheduler, along
   with the necessary payments from `Burn` events on Serai.
3) Sends a `substrate::ProcessorMessage::SubstrateBlockAck`, containing the IDs
   of all plans now being signed for, to the coordinator.
4) Sends `sign::ProcessorMessage::Preprocess` for each plan now being signed
   for.

### Sign Preprocesses

On `sign::CoordinatorMessage::Preprocesses`, the processor continues the
specified transaction signing protocol, sending `sign::ProcessorMessage::Share`
to the coordinator.

### Sign Shares

On `sign::CoordinatorMessage::Shares`, the processor completes the specified
transaction signing protocol. If successful, the processor stops signing for
this transaction and publishes the signed transaction. Then,
`sign::ProcessorMessage::Completed` is sent to the coordinator, to be
broadcast to all validators so everyone can observe the attempt completed,
producing a signed and published transaction.

### Sign Re-attempt

On `sign::CoordinatorMessage::Reattempt`, the processor will create a new
instance of the transaction signing protocol if it hasn't already
completed/observed completion of an instance of the signing protocol. The new
protocol's preprocess is sent to the coordinator in a
`sign::ProcessorMessage::Preprocess`.

### Sign Completed

On `sign::CoordinatorMessage::Completed`, the processor verifies the included
transaction hash actually refers to an accepted transaction which completes the
plan it was supposed to. If so, the processor stops locally signing for the
transaction, and emits `sign::ProcessorMessage::Completed` if it hasn't prior.
31
spec/processor/Scanning.md
Normal file
@@ -0,0 +1,31 @@
# Scanning

Only blocks with finality, either actual or sufficiently probabilistic, are
operated upon. This is referred to as a block with `CONFIRMATIONS`
confirmations, the block itself being the first confirmation.
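This counting convention can be sketched directly; a minimal Python sketch,
with heights and the `CONFIRMATIONS` threshold as parameters:

```python
def confirmations(block_height: int, tip_height: int) -> int:
    """Confirmations a block has, counting the block itself as the first."""
    assert tip_height >= block_height
    return tip_height - block_height + 1

def may_operate_on(block_height: int, tip_height: int,
                   CONFIRMATIONS: int) -> bool:
    """Whether a block may be operated upon under probabilistic finality."""
    return confirmations(block_height, tip_height) >= CONFIRMATIONS
```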
For chains which promise finality on a known schedule, `CONFIRMATIONS` is set to
`1` and each group of finalized blocks is treated as a single block, with the
tail block's hash representing the entire group.

For chains which offer finality on an unknown schedule, `CONFIRMATIONS` is
still set to `1`, yet blocks aren't aggregated into a group. They're handled
individually, yet only once finalized. This allows networks which form
finalization erratically to not have to agree on when finalizations were formed,
solely that the blocks contained have a finalized descendant.

### Notability, causing a `Batch`

`Batch`s are only created for blocks which it benefits to achieve ordering on.
These are:

- Blocks which contain transactions relevant to Serai
- Blocks in which a new multisig activates
- Blocks in which a prior multisig retires
### Waiting for `Batch` inclusion

Once a `Batch` is created, it is expected to eventually be included on Serai.
If the `Batch` isn't included within `CONFIRMATIONS` blocks of its creation, the
scanner will wait until its inclusion before scanning
`batch_block + CONFIRMATIONS`.
228
spec/processor/UTXO Management.md
Normal file
@@ -0,0 +1,228 @@
# UTXO Management

UTXO-based chains have practical requirements for efficient operation which can
effectively be guaranteed to terminate with a safe end state. This document
attempts to detail such requirements, and the implementations in Serai resolving
them.

## Fees From Effecting Transactions Out

When `sriXYZ` is burnt, Serai is expected to create an output for `XYZ` as
instructed. The transaction containing this output will presumably have some fee
necessitating payment. Serai linearly amortizes this fee over all outputs this
transaction intends to create in response to burns.

While Serai could charge a fee in advance, either static or dynamic to views of
the fee market, it'd risk the fee being inaccurate. If it's too high, users have
paid fees they shouldn't have. If it's too low, Serai is insolvent. This is why
the actual fee is amortized, rather than an estimation being prepaid.
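One reading of "linearly amortizes" (an equal per-output share) can be
sketched in Python; the remainder handling is an assumption for illustration:

```python
def amortize_tx_fee(payments: list[int], tx_fee: int) -> list[int]:
    """Amortize a transaction's actual fee over its burn-driven outputs.

    Each payment is reduced by an equal share of the fee; the division
    remainder is charged to the first payment so the fee is covered exactly.
    """
    assert payments and tx_fee <= sum(payments)
    share, remainder = divmod(tx_fee, len(payments))
    amounts = [amount - share for amount in payments]
    amounts[0] -= remainder
    assert all(amount >= 0 for amount in amounts)
    return amounts
```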
Serai could report a view, and when burning occurred, that view could be locked
in as the basis for transaction fees as used to fulfill the output in question.
This would require burns specify the most recent fee market view they're aware
of, signifying their agreement, with Serai erroring if a new view is published
before the burn is included on-chain. Not only would this require more data be
published to Serai (widening data pipeline requirements), it'd prevent any
RBF-based solutions to dynamic fee markets causing transactions to get stuck.
## Output Frequency

Outputs can be created on an external network at rate
`max_outputs_per_tx / external_tick_rate`, where `external_tick_rate` is the
external network's limitation on spending outputs. While `external_tick_rate`
can generally be taken as zero, due to mempool chaining, some external networks
may not allow spending outputs from transactions which have yet to be ordered.
Monero only allows spending outputs from transactions which have 10
confirmations, for its own security.

Serai defines its own tick rate per external network, such that
`serai_tick_rate >= external_tick_rate`. This ensures that Serai never assumes
availability before actual availability. `serai_tick_rate` is also `> 0`. This
is since a zero `external_tick_rate` generally does not truly allow an infinite
output creation rate due to limitations on the amount of transactions allowed
in the mempool.

Define `output_creation_rate` as `max_outputs_per_tx / serai_tick_rate`. Under a
naive system which greedily accumulates inputs and linearly processes outputs,
this is the highest speed at which outputs may be processed.
If the Serai blockchain enables burning sriXYZ at a rate exceeding
`output_creation_rate`, a backlog would form. This backlog could linearly grow
at a rate larger than the outputs could linearly shrink, creating an
ever-growing backlog, performing a DoS against Serai.

One solution would be to increase the fee associated with burning sriXYZ when
approaching `output_creation_rate`, making such a DoS unsustainable. This would
require the Serai blockchain be aware of each external network's
`output_creation_rate` and implement such a sliding fee. This 'solution' isn't
preferred as it still temporarily has a growing queue, and normal users would
also be affected by the increased fees.

The solution implemented into Serai is to consume all burns from the start of a
global queue which can be satisfied under currently available inputs. While the
consumed queue may have 256 items, which can't be processed within a single tick
by an external network whose `output_creation_rate` is 16, Serai can immediately
set a finite bound on execution duration.

For the above example parameters, Serai would create 16 outputs within its tick,
ignoring the necessity of a change output. These 16 outputs would _not_ create
any outputs Serai is expected to create in response to burns, yet instead create
16 "branch" outputs. One tick later, when the branch outputs are available to
spend, each would fund the creation of 16 expected outputs.

For `e` expected outputs, the execution duration is just `log e` ticks _with the
base of the logarithm being `output_creation_rate`_. Since these `e` expected
outputs are consumed from the linearly-implemented global queue into their own
tree structure, execution duration cannot be extended. We can also re-consume
the entire global queue (barring input availability, see next section) after
just one tick, when the change output becomes available again.
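The logarithmic execution duration can be sketched with integer arithmetic
(avoiding floating-point `log` edge cases); the function name is illustrative:

```python
def ticks_to_fulfill(expected_outputs: int, output_creation_rate: int) -> int:
    """Ticks to fulfill `e` expected outputs via a branch-output tree.

    Each tick, every available output can fan out into up to
    `output_creation_rate` outputs, so the tree depth, and thus the tick
    count, is ceil(log(e)) with base `output_creation_rate`.
    """
    assert expected_outputs >= 1 and output_creation_rate >= 2
    ticks, capacity = 1, output_creation_rate
    while capacity < expected_outputs:
        ticks += 1
        capacity *= output_creation_rate
    return ticks
```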
Due to the logarithmic complexity of fulfilling burns, attacks require
exponential growth (which is infeasible to scale). This solution does not
require a sliding fee on Serai's side due to not needing to limit the on-chain
rate of burns, which means it doesn't so adversely affect normal users. While
an increased tree depth will increase the amount of transactions needed to
fulfill an output, increasing the fee amortized over the output and its
siblings, this fee scales linearly with the logarithmically scaling tree depth.
This is considered acceptable.
## Input Availability

The following section refers to spending an output, and then spending it again.
Spending it again, which is impossible under the UTXO model, refers to spending
the change output of the transaction it was spent in. The following section
also assumes any published transaction is immediately ordered on-chain, ignoring
the potential for latency from mempool to blockchain (as it is assumed to have a
negligible effect in practice).

When a burn for amount `a` is issued, the sum amount of immediately available
inputs may be `< a`. This is because, despite each output being considered
usable on a tick basis, there is no global tick. Each output may or may not be
spendable at some moment, and spending it will make it unavailable for one tick
of a newly started clock.

This means all outputs will become available by simply waiting a single tick,
without spending any outputs during the waited tick. Any outputs unlocked at the
start of the tick will carry, and within the tick the rest of the outputs will
become unlocked.

This means that within a tick of operations, the full balance of Serai can be
considered unlocked and used to consume the entire global queue. While Serai
could wait for all its outputs to be available before popping from the front of
the global queue, eager execution as enough inputs become available provides
lower latency. Considering the tick may be an hour (as in the case of Bitcoin),
this is very appreciated.

If a full tick is waited for, due to the front of the global queue having a
notably large burn, then the entire global queue will be consumed, as full input
availability means the ability to satisfy all potential burns in a solvent
system.
## Fees Incurred During Operations

While fees incurred when satisfying burns were covered above, with documentation
on how solvency is maintained, two other operating costs exist:

1) Input accumulation
2) Multisig rotations

Input accumulation refers to transactions which exist to merge inputs. Just as
there is a `max_outputs_per_tx`, there is a `max_inputs_per_tx`. When the amount
of inputs belonging to Serai exceeds `max_inputs_per_tx`, a TX merging them is
created. This TX incurs fees yet has no outputs mapping to burns to amortize
them over, accumulating operating costs.

Please note that this merging occurs in parallel to create a logarithmic
execution, similar to how outputs are also processed in parallel.

As for multisig rotation, multisig rotation occurs when a new multisig for an
external network is created and the old multisig must transfer its inputs in
order for Serai to continue its operations. This operation also incurs fees
without having outputs immediately available to amortize over.

Serai could charge fees on received outputs, deducting from the amount of
`sriXYZ` minted in order to cover these operating fees. An overestimated amount
would be deducted to practically ensure solvency, forming a buffer. Once the
buffer is filled, fees would be reduced. As the buffer drains, fees would go
back up.

This would keep charged fees in line with actual fees, once the buffer is
initially filled, yet requires:

1) Creating and tracking a buffer
2) Overcharging some users on fees

while still risking insolvency, if the actual fees keep increasing in a way
preventing successful estimation.

The solution Serai implements is to accrue operating costs, tracking with each
created transaction the running operating costs. When a created transaction has
payments out, all of the operating costs incurred so far, which have yet to be
amortized, are immediately and fully amortized.
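The accrue-then-amortize scheme can be sketched as follows; the class shape
and the equal-share split over payments are assumptions for illustration:

```python
class OperatingCosts:
    """Accrue operating costs; amortize them over the next payments out.

    Fees from input-merging and rotation transactions accumulate until a
    transaction with payments out is created, at which point the full
    outstanding amount is amortized over those payments.
    """
    def __init__(self) -> None:
        self.outstanding = 0

    def accrue(self, fee: int) -> None:
        self.outstanding += fee

    def amortize_over(self, payments: list[int]) -> list[int]:
        # Equal share per payment, remainder charged to the first payment
        share, remainder = divmod(self.outstanding, len(payments))
        amounts = [amount - share for amount in payments]
        amounts[0] -= remainder
        self.outstanding = 0
        return amounts
```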
## Attacks by a Malicious Miner

There is the concern that a significant amount of outputs could be created,
which, when merged as inputs, create a significant amount of operating costs.
This would then be forced onto random users who burn `sriXYZ` soon after, while
the party who caused the operating costs would then be able to burn their own
`sriXYZ` without notable fees.

To describe this attack in its optimal form, assume a sole malicious block
producer for an external network. The malicious miner adds an output to Serai,
not paying any fees as the block producer. This single output alone may trigger
an aggregation transaction. Serai would pay for the transaction fee, the fee
going to the malicious miner.

When Serai users burn `sriXYZ`, they are hit with the aggregation transaction's
fee plus the normally amortized fee. Then, the malicious miner burns their
`sriXYZ`, having the fee they capture be amortized over their output. In this
process, they remain net neutral except for the increased transaction fees they
collect from other users, which is their profit.

To limit this attack vector, a flat fee of
`2 * (the estimation of a 2-input-merging transaction fee)` is applied to each
input. This means, assuming an inability to manipulate Serai's fee estimations,
creating an output to force a merge transaction (and the associated fee) costs
the attacker twice as much as the associated fee.

A 2-input TX's fee is used as aggregating multiple inputs at once actually
works in Serai's favor, so long as the per-input fee exceeds the cost of the
per-input addition to the TX. Since the per-input fee is the cost of an entire
TX, this property is true.
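The flat fee and the attacker's best case can be sketched numerically; a
minimal Python sketch, with the fee estimation supplied as a parameter:

```python
def flat_fee_per_input(estimated_two_input_merge_fee: int) -> int:
    """Flat fee charged per received output, per the rule above: twice the
    estimated fee of a 2-input merge transaction."""
    return 2 * estimated_two_input_merge_fee

def attacker_net(merge_fee: int) -> int:
    """Attacker's net result from forcing one merge, even assuming they
    recapture the entire merge fee as the block producer."""
    # They pay the flat fee on their output; at best they regain the merge fee
    return merge_fee - flat_fee_per_input(merge_fee)
```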
|
||||
|
||||
### Profitability Without the Flat Fee With a Minority of Hash Power

Ignoring the above flat fee, a malicious miner could exploit the aggregation of
multiple inputs to achieve profit with a minority of hash power. The following
is how a miner with 7% of the external network's hash power could execute this
attack profitably over a network with a `max_inputs_per_tx` value of 16:

1) Mint `sriXYZ` with 256 outputs during their own blocks. This incurs no fees
and would force 16 aggregation transactions to be created.

2) _A miner_, which has a 7% chance of being the malicious miner, collects the
16 transaction fees.

3) The malicious miner burns their `sriXYZ`, with a 7% chance of collecting
their own fee or a 93% chance of losing a single transaction fee.

16 attempts would cost 16 transaction fees if they always lose their single
transaction fee. Gaining the 16 transaction fees once, offsetting costs, is
expected to happen with just 6.25% of the hash power. Since the malicious miner
has 7%, they're statistically likely to recoup their costs and eventually turn
a profit.

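The expected value of one attempt, in units of a single transaction fee, can be
checked with a quick sketch. This simplifies the accounting the same way the
text does: each attempt either collects the 16 aggregation fees (with
probability equal to the attacker's hash share) or loses one fee.

```rust
// Rough expected-value check for the attack above, in units of one
// transaction fee. A simplification matching the text's approximation:
// win 16 fees with probability `hash_share`, else lose one fee.
fn expected_value(hash_share: f64, fees_collected_on_win: f64) -> f64 {
  hash_share * fees_collected_on_win - (1.0 - hash_share) * 1.0
}

fn main() {
  // At 7% hash power, each attempt is profitable in expectation...
  assert!(expected_value(0.07, 16.0) > 0.0);
  // ...while at 5%, it is not.
  assert!(expected_value(0.05, 16.0) < 0.0);
}
```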
With a flat fee of at least the cost to aggregate a single input in a full
aggregation transaction, this attack falls apart. Serai's flat fee is higher
still: the cost of aggregating two inputs in an aggregation transaction.

### Solvency Without the Flat Fee

Even without the above flat fee, Serai remains solvent. With the above flat fee,
malicious miners on external networks can only steal from other users if they
can manipulate Serai's fee estimations so that the merge transaction fee used is
twice as high as the fees charged for causing a merge transaction. This is
assumed infeasible to perform at scale, yet even if demonstrated feasible, it
would not be a critical vulnerability against Serai, solely a low/medium/high
vulnerability against the users (though one it would still be our responsibility
to rectify).

46
spec/protocol/Constants.md
Normal file
46
spec/protocol/Constants.md
Normal file
@@ -0,0 +1,46 @@
|
||||
# Constants

### Types

This is the list of types used to represent various properties within the
protocol.

| Alias           | Type                                         |
|-----------------|----------------------------------------------|
| SeraiAddress    | sr25519::Public (unchecked [u8; 32] wrapper) |
| Amount          | u64                                          |
| NetworkId       | NetworkId (Rust enum, SCALE-encoded)         |
| Coin            | Coin (Rust enum, SCALE-encoded)              |
| Session         | u32                                          |
| Validator Set   | (NetworkId, Session)                         |
| Key             | BoundedVec\<u8, 96>                          |
| KeyPair         | (SeraiAddress, Key)                          |
| ExternalAddress | BoundedVec\<u8, 196>                         |
| Data            | BoundedVec\<u8, 512>                         |

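A minimal sketch of a few of these aliases, assuming plain Rust types (Serai's
actual definitions wrap Substrate/SCALE types and bounded vectors):

```rust
// Minimal sketch of a few aliases from the table above. Plain Rust types are
// assumed; the actual protocol uses Substrate/SCALE wrappers.

/// An unchecked 32-byte public key wrapper.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub struct SeraiAddress(pub [u8; 32]);

pub type Amount = u64;
pub type Session = u32;

/// A key pair, pairing the Serai address with an external key
/// (bounded to 96 bytes in the actual protocol).
pub type KeyPair = (SeraiAddress, Vec<u8>);

fn main() {
  let addr = SeraiAddress([0; 32]);
  let pair: KeyPair = (addr, vec![1, 2, 3]);
  assert_eq!(pair.1.len(), 3);
  // Amount and Session are fixed-width integers, per the table.
  assert_eq!(std::mem::size_of::<Amount>(), 8);
  assert_eq!(std::mem::size_of::<Session>(), 4);
}
```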
### Networks

Every network connected to Serai operates over a specific curve. The processor
generates a distinct set of keys per network. Beyond the key generation itself
being isolated, the generated keys are further bound to their respective
networks via an additive offset created by hashing the network's name (among
other properties). The network's key is used for all coins on that network.

| Network  | Curve     | ID |
|----------|-----------|----|
| Serai    | Ristretto | 0  |
| Bitcoin  | Secp256k1 | 1  |
| Ethereum | Secp256k1 | 2  |
| Monero   | Ed25519   | 3  |

### Coins

Coins exist over a network and have a distinct integer ID.

| Coin    | Network  | ID |
|---------|----------|----|
| Serai   | Serai    | 0  |
| Bitcoin | Bitcoin  | 1  |
| Ether   | Ethereum | 2  |
| DAI     | Ethereum | 3  |
| Monero  | Monero   | 4  |
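The two tables above can be sketched as Rust enums carrying the listed IDs. The
derive set and `repr` are assumptions for this example; Serai's actual types
are SCALE-encoded.

```rust
// Sketch of the network and coin IDs from the tables above. The derives and
// `repr(u8)` are assumptions; the actual types are SCALE-encoded.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[repr(u8)]
pub enum NetworkId { Serai = 0, Bitcoin = 1, Ethereum = 2, Monero = 3 }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
#[repr(u8)]
pub enum Coin { Serai = 0, Bitcoin = 1, Ether = 2, Dai = 3, Monero = 4 }

impl Coin {
  /// The network each coin exists over, per the Coins table.
  pub fn network(&self) -> NetworkId {
    match self {
      Coin::Serai => NetworkId::Serai,
      Coin::Bitcoin => NetworkId::Bitcoin,
      Coin::Ether | Coin::Dai => NetworkId::Ethereum,
      Coin::Monero => NetworkId::Monero,
    }
  }
}

fn main() {
  // Both Ethereum coins map to the same network, which holds a single key.
  assert_eq!(Coin::Dai.network(), NetworkId::Ethereum);
  assert_eq!(Coin::Monero as u8, 4);
}
```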
8
spec/protocol/In Instructions.md
Normal file
@@ -0,0 +1,8 @@
# In Instructions

In Instructions are included onto the Serai blockchain via unsigned
transactions. In order to ensure the integrity of the included instructions, the
validator set responsible for the network in question produces a threshold
signature of their authenticity.

This lets all other validators verify the instructions with an O(1) operation.
54
spec/protocol/Validator Sets.md
Normal file
@@ -0,0 +1,54 @@
# Validator Sets

Validator Sets are defined at the protocol level, with the following parameters:

- `network` (NetworkId): The network this validator set operates over.
- `allocation_per_key_share` (Amount): Amount of stake needing allocation in
  order to receive a key share.

### Participation in Consensus

The validator set for `NetworkId::Serai` participates in Serai's own consensus,
producing and finalizing blocks.

### Multisig

Every Validator Set is expected to form a `t`-of-`n` multisig, where `n` is the
amount of key shares in the Validator Set and `t` is `n * 2 / 3 + 1`, for each
of its networks. This multisig is secure to hold coins valued at up to 33% of
the Validator Set's allocated stake. If the coins exceed that threshold, there's
more value in the multisig and associated liquidity pool than in the
supermajority of allocated stake securing them both. Accordingly, it'd no
longer be financially secure, and it MUST reject newly added coins.

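The threshold rule and the 33% value cap can be sketched as follows, using
integer arithmetic matching `n * 2 / 3 + 1`. Function names are illustrative,
not Serai's actual API.

```rust
// Sketch of the threshold rule and value cap above. Names are illustrative.

/// `t` for a `t`-of-`n` multisig, per `n * 2 / 3 + 1` (integer division).
fn threshold(n: u64) -> u64 {
  n * 2 / 3 + 1
}

/// Whether the multisig may accept newly added coins: the held value must
/// stay at or below 33% (one third) of the set's allocated stake.
fn may_accept_coins(held_value: u64, allocated_stake: u64) -> bool {
  held_value <= allocated_stake / 3
}

fn main() {
  // 100 key shares require 67 signers: a supermajority.
  assert_eq!(threshold(100), 67);
  assert!(may_accept_coins(33, 100));
  // Past one third of the allocated stake, new coins MUST be rejected.
  assert!(!may_accept_coins(34, 100));
}
```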
### Multisig Creation

Multisigs are created by Processors, communicating via their Coordinators.
They're then confirmed on chain via the `validator-sets` pallet. This is done by
having 100% of participants agree on the resulting group key. While this isn't
fault tolerant regarding liveness, a malicious actor who forces a `t`-of-`n`
multisig to be `t`-of-`n-1` reduces the fault tolerance of the created multisig,
which is a greater issue. If a node does prevent multisig creation, other
validators should issue slashes for it/remove it from the Validator Set
entirely.

Placing the creation on chain also solves the question of whether the multisig
was successfully created or not. Processors cannot simply ask each other if they
succeeded without creating an instance of the Byzantine Generals Problem.
Placing results within a Byzantine Fault Tolerant system resolves this.

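The 100%-agreement rule can be sketched as a simple check: confirmation only
succeeds if every participant reported the same group key. The types and
function name are illustrative, not the `validator-sets` pallet's actual
interface.

```rust
// Sketch of the 100%-agreement rule above: the multisig is only confirmed if
// every participant reported the same group key. Names are illustrative.
fn confirm_group_key(reported_keys: &[[u8; 32]]) -> Option<[u8; 32]> {
  let first = *reported_keys.first()?;
  // 100% of participants must agree; any disagreement aborts confirmation.
  if reported_keys.iter().all(|key| *key == first) { Some(first) } else { None }
}

fn main() {
  let agreed = [[7u8; 32]; 4];
  assert_eq!(confirm_group_key(&agreed), Some([7u8; 32]));

  // A single divergent report prevents confirmation entirely.
  let mut disagreed = agreed;
  disagreed[3] = [8u8; 32];
  assert!(confirm_group_key(&disagreed).is_none());
}
```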
### Multisig Rotation

Please see `processor/Multisig Rotation.md` for details on the timing.

Once the new multisig publishes its first `Batch`, the old multisig's keys are
cleared and the set is considered retired. After a one-session cooldown period,
they may deallocate their stake.

### Set Keys (message)

- `network` (NetworkId): Network whose key is being set.
- `key_pair` (KeyPair): Key pair being set for this `Session`.
- `signature` (Signature): A MuSig-style signature of all validators,
  confirming this key.