Initial Tendermint implementation (#145)
* Machine without timeouts
* Time code
* Move substrate/consensus/tendermint to substrate/tendermint
* Delete the old paper doc
* Refactor out external parts to generics
Also creates a dedicated file for the message log.
* Refactor <V, B> to type V, type B
* Successfully compiling
* Calculate timeouts
* Fix test
* Finish timeouts
* Misc cleanup
* Define a signature scheme trait
* Implement serialization via parity's scale codec
Ideally, this would be generic. Unfortunately, serde, the generic serialization
API, doesn't natively support borsh nor SCALE, and while a serde SCALE crate
exists, it's old. Even if it's complete, it's not worth working with.
We could still grab bincode, or a variety of other formats, yet it wasn't worth
going custom when Serai will be using SCALE almost everywhere anyways.
* Implement usage of the signature scheme
* Make the infinite test non-infinite
* Provide a dedicated signature in Precommit of just the block hash
Greatly simplifies verifying when syncing.
* Dedicated Commit object
Restores sig aggregation API.
* Tidy README
* Document tendermint
* Sign the ID directly instead of its SCALE encoding
For a hash, which is fixed-size, these should be the same yet this helps
move past the dependency on SCALE. It also, for any type where the two
values are different, smooths integration.
* Litany of bug fixes
Also attempts to make the code more readable while updating/correcting
documentation.
* Remove async recursion
Greatly increases safety as well by ensuring only one message is
processed at once.
* Correct timing issues
1) Commit didn't include the round, leaving the clock in question.
2) Machines started with a local time, instead of a proper start time.
3) Machines immediately started the next block instead of waiting for
the block time.
* Replace MultiSignature with sr25519::Signature
* Minor SignatureScheme API changes
* Map TM SignatureScheme to Substrate's sr25519
* Initial work on an import queue
* Properly use check_block
* Rename import to import_queue
* Implement tendermint_machine::Block for Substrate Blocks
Unfortunately, this immediately makes the Tendermint machine incapable of
deployment as a crate, since it uses a git reference. In the future, a
Cargo.toml patch section for serai/substrate should be investigated.
This is being done regardless, as it's the quickest way forward and this is
for Serai.
* Dummy Weights
* Move documentation to the top of the file
* Move logic into TendermintImport itself
Multiple traits exist to verify/handle blocks. I'm unsure exactly when
each will be called in the pipeline, so the easiest solution is to have
every step run every check.
That would be extremely computationally expensive if we ran EVERY check,
yet we rely on Substrate for execution (and according checks), which are
limited to just the actual import function.
Since we're calling this code from many places, it makes sense for it to
be consolidated under TendermintImport.
* BlockImport, JustificationImport, Verifier, and import_queue function
* Update consensus/lib.rs from PoW to Tendermint
It can't be used as the previous consensus could. It will not
produce blocks, nor does it currently even instantiate a machine. This is
just the next step.
* Update Cargo.tomls for substrate packages
* Tendermint SelectChain
This is incompatible with Substrate's expectations, yet should be valid
for ours
* Move the node over to the new SelectChain
* Minor tweaks
* Update SelectChain documentation
* Remove substrate/node lib.rs
This shouldn't be used as a library AFAIK. While runtime should be, and
arguably should even be published, I have yet to see node in the same
way. Helps tighten API boundaries.
* Remove unused macro_use
* Replace panicking todos with stubs and // TODO
Enables progress.
* Reduce chain_spec and use more accurate naming
* Implement block proposal logic
* Modularize to get_proposal
* Trigger block importing
Doesn't wait for the response yet, which it needs to.
* Get the result of block importing
* Split import_queue into a series of files
* Provide a way to create the machine
The BasicQueue returned obscures the TendermintImport struct.
Accordingly, a Future scoped with access is returned upwards, which when
awaited will create the machine. This makes creating the machine
optional while maintaining scope boundaries.
Is sufficient to create a 1-node net which produces and finalizes
blocks.
* Don't import justifications multiple times
Also don't broadcast blocks which were solely proposed.
* Correct justication import pipeline
Removes JustificationImport as it should never be used.
* Announce blocks
By claiming the File origin, they're not sent over the P2P network before they
have a justification, as desired. Unfortunately, they were never announced at
all. This works around that.
* Add an assert to verify proposed children aren't best
* Consolidate C and I generics into a TendermintClient trait alias
* Expand sanity checks
Substrate doesn't expect nor officially support children with less work
than their parents. It's a trick used here. Accordingly, ensure the
trick's validity.
* When resetting, use the end time of the round which was committed to
The machine reset to the end time of the current round. For a delayed
network connection, a machine may move ahead in rounds and only later
realize a prior round succeeded. Despite acknowledging that round's
success, it would maintain its delay when moving to the next block,
bricking it.
Done by tracking the end time for each round as they occur.
* Move Commit from including the round to including the round's end_time
The round was usable to build the current clock in an accumulated
fashion, relative to the previous round. The end time is the absolute
metric of it, which can be used to calculate the round number (with all
previous end times).
Substrate now builds off the best block, not genesis, using the end time
included in the justification to start its machine in a synchronized
state.
Knowing the end time of a round, or the round in which the block was
committed to, is necessary for nodes to sync up with Tendermint.
Encoding it in the commit ensures it's long lasting and makes it readily
available, without the load of an entire transaction.
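As a rough illustration of why a round's end time (together with all prior rounds' end times) is enough to recover the round number, here's a minimal sketch. It assumes each round r lasts BLOCK_PROCESSING_TIME + (r + 1) * LATENCY seconds, an illustrative schedule rather than necessarily the exact one tendermint-machine uses:

  // Illustrative only: recover the round a timestamp falls in, given the block's start
  // time and an assumed linearly growing round schedule
  fn round_for_time(block_start: u64, time: u64, block_time: u64, latency: u64) -> u32 {
    let mut round = 0u32;
    let mut end = block_start;
    loop {
      // Each round's end time accumulates on top of all prior rounds
      end += block_time + ((u64::from(round) + 1) * latency);
      if time <= end {
        return round;
      }
      round += 1;
    }
  }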
* Add a TODO on Tendermint
* Misc bug fixes
* More misc bug fixes
* Clean up lock acquisition
* Merge weights and signing scheme into validators, documenting needed changes
* Add pallet sessions to runtime, create pallet-tendermint
* Update node to use pallet sessions
* Update support URL
* Partial work on correcting pallet calls
* Redo Tendermint folder structure
* TendermintApi, compilation fixes
* Fix the stub round robin
At some point, the modulus was removed, causing it to exceed the
validators list and stop proposing.
* Use the validators list from the session pallet
* Basic Gossip Validator
* Correct Substrate Tendermint start block
The Tendermint machine uses the passed-in number as the number of the block
being worked on. Substrate passed in the already-finalized block's number.
Also updates misc comments.
* Clean generics in Tendermint with a monolith with associated types
* Remove the Future triggering the machine for an async fn
Enables passing data in, such as the network.
* Move TendermintMachine from start_num, time to last_num, time
Provides an explicitly clearer API to program around.
Also adds additional time code to handle an edge case.
* Connect the Tendermint machine to a GossipEngine
* Connect broadcast
* Remove machine from TendermintImport
It's not used there at all.
* Merge Verifier into block_import.rs
These two files were largely the same, just hooking into sync structs
with almost identical imports. As this project shapes up, removing dead
weight is appreciated.
* Create a dedicated file for being a Tendermint authority
* Deleted comment code related to PoW
* Move serai_runtime specific code from tendermint/client to node
Renames serai-consensus to sc_tendermint
* Consolidate file structure in sc_tendermint
* Replace best_* with finalized_*
We test their equivalency, yet it's still better to use finalized_* in
general.
* Consolidate references to sr25519 in sc_tendermint
* Add documentation to public structs/functions in sc_tendermint
* Add another missing comment
* Make sign asynchronous
Some relation to https://github.com/serai-dex/serai/issues/95.
* Move sc_tendermint to async sign
* Implement proper checking of inherents
* Take in a Keystore and validator ID
* Remove unnecessary PhantomDatas
* Update node to latest sc_tendermint
* Configure node for a multi-node testnet
* Fix handling of the GossipEngine
* Use a rounded genesis to obtain sufficient synchrony within the Docker env
* Correct Serai d-f names in Docker
* Remove an attempt at caching I don't believe would ever hit
* Add an already in chain check to block import
While the inner should do this for us, we call verify_order on our end
*before* inner to ensure sequential import. Accordingly, we need to
provide our own check.
Removes errors of "non-sequential import" when trying to re-import an
existing block.
* Update the consensus documentation
It was incredibly out of date.
* Add a _ to the validator arg in slash
* Make the dev profile a local testnet profile
Restores a dev profile which only has one validator, locally running.
* Reduce Arcs in TendermintMachine, split Signer from SignatureScheme
* Update sc_tendermint per previous commit
* Restore cache
* Remove error case which shouldn't be an error
* Stop returning errors on already existing blocks entirely
* Correct Dave, Eve, and Ferdie to not run as validators
* Rename dev to devnet
--dev still works thanks to the |. Achieves a personal preference of
mine with some historical meaning.
* Add message expiry to the Tendermint gossip
* Localize the LibP2P protocol to the blockchain
Follows convention by doing so. Theoretically enables running multiple
blockchains over a single LibP2P connection.
* Add a version to sp-runtime in tendermint-machine
* Add missing trait
* Bump Substrate dependency
Fixes #147.
* Implement Schnorr half-aggregation from https://eprint.iacr.org/2021/350.pdf
Relevant to https://github.com/serai-dex/serai/issues/99.
* cargo update (tendermint)
* Move from polling loops to a pure IO model for sc_tendermint's gossip
* Correct protocol name handling
* Use futures mpsc instead of tokio
* Timeout futures
* Move from a yielding loop to select in tendermint-machine
* Update Substrate to the new TendermintHandle
* Use futures pin instead of tokio
* Only recheck blocks with non-fatal inherent transaction errors
* Update to the latest substrate
* Separate the block processing time from the latency
* Add notes to the runtime
* Don't spam slash
Also adds a slash condition of failing to propose.
* Support running TendermintMachine when not a validator
This supports validators who leave the current set, without crashing
their nodes, along with nodes trying to become validators (who will now
seamlessly transition in).
* Properly define and pass around the block size
* Correct the Duration timing
The proposer will build it, send it, then process it (on the first
round). Accordingly, it's / 3, not / 2, as / 2 only accounted for the
latter events.
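As a minimal illustration (mirroring the constant name used in the client code later in this diff, though the wiring here is simplified), the proposal deadline ends up as a third of the configured block processing time:

  // One third each for: building the proposal, downloading it, and processing it
  let deadline = Duration::from_secs((BLOCK_PROCESSING_TIME_IN_SECONDS / 3).into());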
* Correct time-adjustment code on round skip
* Have the machine respond to advances made by an external sync loop
* Clean up time code in tendermint-machine
* BlockData and RoundData structs
* Rename Round to RoundNumber
* Move BlockData to a new file
* Move Round to an Option due to the pseudo-uninitialized state we create
Before the addition of RoundData, we always created the round, and on
.round(0), simply created it again. With RoundData, and the changes to
the time code, we used round 0, time 0, the latter being incorrect yet
not an issue due to lack of misuse.
Now, if we do misuse it, it'll panic.
* Clear the Queue instead of draining and filtering
There shouldn't ever be a message which passes the filter under the
current design.
* BlockData::new
* Move more code into block.rs
Introduces type-aliases to obtain Data/Message/SignedMessage solely from
a Network object.
Fixes a bug regarding stepping when you're not an active validator.
* Have verify_precommit_signature return if it verified the signature
Also fixes a bug where invalid precommit signatures were left standing
and therefore contributing to commits.
* Remove the precommit signature hash
It cached signatures per-block. Precommit signatures are bound to each
round. This would lead to forming invalid commits when a commit should
be formed. Under debug, the machine would catch that and panic. On
release, it'd have everyone who wasn't a validator fail to continue
syncing.
* Slight doc changes
Also flattens the message handling function by replacing an if
containing all following code in the function with an early return for
the else case.
* Always produce notifications for finalized blocks via origin overrides
* Correct weird formatting
* Update to the latest tendermint-machine
* Manually step the Tendermint machine when we synced a block over the network
* Ignore finality notifications for old blocks
* Remove a TODO resolved in 8c51bc011d
* Add a TODO comment to slash
Enables searching for the case-sensitive phrase and finding it.
* cargo fmt
* Use a tmp DB for Serai in Docker
* Remove panic on slash
As we move towards protonet, this can happen (if a node goes offline),
yet it happening brings down the entire net right now.
* Add log::error on slash
Created a shared volume between containers
* Complete the sh scripts
* Pass in the genesis time to Substrate
* Correct block announcements
They were announced, yet not marked best.
* Correct populate_end_time
It was used as inclusive yet didn't work inclusively.
* Correct gossip channel jumping when a block is synced via Substrate
* Use a looser check in import_future
This check triggered, so it needs to be relaxed accordingly.
* Correct race conditions between add_block and step
Also corrects a <= to <.
* Update cargo deny
* rename genesis-service to genesis
* Update Cargo.lock
* Correct runtime Cargo.toml whitespace
* Correct typo
* Document recheck
* Misc lints
* Fix prev commit
* Resolve low-hanging review comments
* Mark genesis/entry-dev.sh as executable
* Prevent a commit from including the same signature multiple times
Yanks tendermint-machine 0.1.0 accordingly.
* Update to latest nightly clippy
* Improve documentation
* Use clearer variable names
* Add log statements
* Pair more log statements
* Clean TendermintAuthority::authority as much as possible
Merges it into new. It has way too many arguments, yet there's no clear path to
consolidation there, unfortunately.
Additionally provides better scoping within itself.
* Fix #158
Doesn't use lock_import_and_run for reasons commented (lack of async).
* Rename guard to lock
* Have the devnet use the current time as the genesis
Possible since it's only a single node, not requiring synchronization.
* Fix gossiping
I really don't know what side effect this avoids and I can't say I care at this
point.
* Misc lints
Co-authored-by: vrx00 <vrx00@proton.me>
Co-authored-by: TheArchitect108 <TheArchitect108@protonmail.com>
substrate/tendermint/client/Cargo.toml (new file)
@@ -0,0 +1,48 @@
[package]
name = "sc-tendermint"
version = "0.1.0"
description = "Tendermint client for Substrate"
license = "AGPL-3.0-only"
repository = "https://github.com/serai-dex/serai/tree/develop/substrate/tendermint/client"
authors = ["Luke Parker <lukeparker5132@gmail.com>"]
edition = "2021"
publish = false

[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]

[dependencies]
async-trait = "0.1"

hex = "0.4"
log = "0.4"

futures = "0.3"
tokio = { version = "1", features = ["sync", "rt"] }

sp-core = { git = "https://github.com/serai-dex/substrate" }
sp-application-crypto = { git = "https://github.com/serai-dex/substrate" }
sp-keystore = { git = "https://github.com/serai-dex/substrate" }
sp-inherents = { git = "https://github.com/serai-dex/substrate" }
sp-staking = { git = "https://github.com/serai-dex/substrate" }
sp-blockchain = { git = "https://github.com/serai-dex/substrate" }
sp-runtime = { git = "https://github.com/serai-dex/substrate" }
sp-api = { git = "https://github.com/serai-dex/substrate" }
sp-consensus = { git = "https://github.com/serai-dex/substrate" }

sp-tendermint = { path = "../primitives" }

sc-transaction-pool = { git = "https://github.com/serai-dex/substrate" }
sc-executor = { git = "https://github.com/serai-dex/substrate" }
sc-network-common = { git = "https://github.com/serai-dex/substrate" }
sc-network = { git = "https://github.com/serai-dex/substrate" }
sc-network-gossip = { git = "https://github.com/serai-dex/substrate" }
sc-service = { git = "https://github.com/serai-dex/substrate" }
sc-client-api = { git = "https://github.com/serai-dex/substrate" }
sc-block-builder = { git = "https://github.com/serai-dex/substrate" }
sc-consensus = { git = "https://github.com/serai-dex/substrate" }

substrate-prometheus-endpoint = { git = "https://github.com/serai-dex/substrate" }

tendermint-machine = { path = "../machine", features = ["substrate"] }
substrate/tendermint/client/LICENSE (new file)
@@ -0,0 +1,15 @@
AGPL-3.0-only license

Copyright (c) 2022 Luke Parker

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License Version 3 as
published by the Free Software Foundation.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.

You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
substrate/tendermint/client/src/authority/gossip.rs (new file)
@@ -0,0 +1,67 @@
use std::sync::{Arc, RwLock};

use sp_core::Decode;
use sp_runtime::traits::{Hash, Header, Block};

use sc_network::PeerId;
use sc_network_gossip::{Validator, ValidatorContext, ValidationResult};

use tendermint_machine::{ext::SignatureScheme, SignedMessage};

use crate::{TendermintValidator, validators::TendermintValidators};

#[derive(Clone)]
pub(crate) struct TendermintGossip<T: TendermintValidator> {
  number: Arc<RwLock<u64>>,
  signature_scheme: TendermintValidators<T>,
}

impl<T: TendermintValidator> TendermintGossip<T> {
  pub(crate) fn new(number: Arc<RwLock<u64>>, signature_scheme: TendermintValidators<T>) -> Self {
    TendermintGossip { number, signature_scheme }
  }

  pub(crate) fn topic(number: u64) -> <T::Block as Block>::Hash {
    <<<T::Block as Block>::Header as Header>::Hashing as Hash>::hash(
      &[b"Tendermint Block Topic".as_ref(), &number.to_le_bytes()].concat(),
    )
  }
}

impl<T: TendermintValidator> Validator<T::Block> for TendermintGossip<T> {
  fn validate(
    &self,
    _: &mut dyn ValidatorContext<T::Block>,
    _: &PeerId,
    data: &[u8],
  ) -> ValidationResult<<T::Block as Block>::Hash> {
    let msg = match SignedMessage::<
      u16,
      T::Block,
      <TendermintValidators<T> as SignatureScheme>::Signature,
    >::decode(&mut &*data)
    {
      Ok(msg) => msg,
      Err(_) => return ValidationResult::Discard,
    };

    if msg.block().0 < *self.number.read().unwrap() {
      return ValidationResult::Discard;
    }

    // Verify the signature here so we don't carry invalid messages in our gossip layer
    // This will cause double verification of the signature, yet that's a minimal cost
    if !msg.verify_signature(&self.signature_scheme) {
      return ValidationResult::Discard;
    }

    ValidationResult::ProcessAndKeep(Self::topic(msg.block().0))
  }

  fn message_expired<'a>(
    &'a self,
  ) -> Box<dyn FnMut(<T::Block as Block>::Hash, &[u8]) -> bool + 'a> {
    let number = self.number.clone();
    Box::new(move |topic, _| topic != Self::topic(*number.read().unwrap()))
  }
}
substrate/tendermint/client/src/authority/import_future.rs (new file)
@@ -0,0 +1,72 @@
use std::{
  pin::Pin,
  sync::RwLock,
  task::{Poll, Context},
  future::Future,
};

use sp_runtime::traits::{Header, Block};

use sp_consensus::Error;
use sc_consensus::{BlockImportStatus, BlockImportError, Link};

use sc_service::ImportQueue;

use tendermint_machine::ext::BlockError;

use crate::TendermintImportQueue;

// Custom helpers for ImportQueue in order to obtain the result of a block's importing
struct ValidateLink<B: Block>(Option<(B::Hash, Result<(), BlockError>)>);
impl<B: Block> Link<B> for ValidateLink<B> {
  fn blocks_processed(
    &mut self,
    imported: usize,
    count: usize,
    mut results: Vec<(
      Result<BlockImportStatus<<B::Header as Header>::Number>, BlockImportError>,
      B::Hash,
    )>,
  ) {
    assert!(imported <= 1);
    assert_eq!(count, 1);
    self.0 = Some((
      results[0].1,
      match results.swap_remove(0).0 {
        Ok(_) => Ok(()),
        Err(BlockImportError::Other(Error::Other(err))) => Err(
          err.downcast::<BlockError>().map(|boxed| *boxed.as_ref()).unwrap_or(BlockError::Fatal),
        ),
        _ => Err(BlockError::Fatal),
      },
    ));
  }
}

pub(crate) struct ImportFuture<'a, B: Block, T: Send>(
  B::Hash,
  RwLock<&'a mut TendermintImportQueue<B, T>>,
);
impl<'a, B: Block, T: Send> ImportFuture<'a, B, T> {
  pub(crate) fn new(
    hash: B::Hash,
    queue: &'a mut TendermintImportQueue<B, T>,
  ) -> ImportFuture<B, T> {
    ImportFuture(hash, RwLock::new(queue))
  }
}

impl<'a, B: Block, T: Send> Future for ImportFuture<'a, B, T> {
  type Output = Result<(), BlockError>;

  fn poll(self: Pin<&mut Self>, ctx: &mut Context<'_>) -> Poll<Self::Output> {
    let mut link = ValidateLink(None);
    self.1.write().unwrap().poll_actions(ctx, &mut link);
    if let Some(res) = link.0 {
      assert_eq!(res.0, self.0);
      Poll::Ready(res.1)
    } else {
      Poll::Pending
    }
  }
}
substrate/tendermint/client/src/authority/mod.rs (new file)
@@ -0,0 +1,494 @@
use std::{
  sync::{Arc, RwLock},
  time::{UNIX_EPOCH, SystemTime, Duration},
  collections::HashSet,
};

use async_trait::async_trait;

use log::{debug, warn, error};

use futures::{
  SinkExt, StreamExt,
  lock::Mutex,
  channel::mpsc::{self, UnboundedSender},
};

use sp_core::{Encode, Decode, traits::SpawnEssentialNamed};
use sp_keystore::CryptoStore;
use sp_runtime::{
  traits::{Header, Block},
  Digest,
};
use sp_blockchain::HeaderBackend;
use sp_api::BlockId;

use sp_consensus::{Error, BlockOrigin, Proposer, Environment};
use sc_consensus::import_queue::IncomingBlock;

use sc_service::ImportQueue;
use sc_client_api::{BlockBackend, Finalizer, BlockchainEvents};
use sc_network::{ProtocolName, NetworkBlock};
use sc_network_gossip::GossipEngine;

use substrate_prometheus_endpoint::Registry;

use tendermint_machine::{
  ext::{BlockError, BlockNumber, Commit, SignatureScheme, Network},
  SignedMessage, TendermintMachine, TendermintHandle,
};

use crate::{
  CONSENSUS_ID, TendermintValidator,
  validators::{TendermintSigner, TendermintValidators},
  tendermint::TendermintImport,
};

mod gossip;
use gossip::TendermintGossip;

mod import_future;
use import_future::ImportFuture;

// Data for an active validator
// This is distinct as even when we aren't an authority, we still create stubbed Authority objects
// as it's only Authority which implements tendermint_machine::ext::Network. Network has
// verify_commit provided, and even non-authorities have to verify commits
struct ActiveAuthority<T: TendermintValidator> {
  signer: TendermintSigner<T>,

  // The number of the Block we're working on producing
  block_in_progress: Arc<RwLock<u64>>,
  // Notification channel for when we start on a new block
  new_block_event: UnboundedSender<()>,
  // Outgoing message queue, placed here as the GossipEngine itself can't be
  gossip: UnboundedSender<
    SignedMessage<u16, T::Block, <TendermintValidators<T> as SignatureScheme>::Signature>,
  >,

  // Block producer
  env: Arc<Mutex<T::Environment>>,
  announce: T::Network,
}

/// Tendermint Authority. Participates in the block proposal and voting process.
pub struct TendermintAuthority<T: TendermintValidator> {
  import: TendermintImport<T>,
  active: Option<ActiveAuthority<T>>,
}

// Get a block to propose after the specified header
// If stub is true, no time will be spent adding transactions to it (beyond what's required),
// making it as minimal as possible (a stub)
// This is so we can create proposals when syncing, respecting tendermint-machine's API boundaries,
// without spending the entire block processing time trying to include transactions (since we know
// our proposal is meaningless and we'll just be syncing a new block anyways)
async fn get_proposal<T: TendermintValidator>(
  env: &Arc<Mutex<T::Environment>>,
  import: &TendermintImport<T>,
  header: &<T::Block as Block>::Header,
  stub: bool,
) -> T::Block {
  let proposer =
    env.lock().await.init(header).await.expect("Failed to create a proposer for the new block");

  proposer
    .propose(
      import.inherent_data(*header.parent_hash()).await,
      Digest::default(),
      if stub {
        Duration::ZERO
      } else {
        // The first processing time is to build the block
        // The second is for it to be downloaded (assumes a block won't take longer to download
        // than it'll take to process)
        // The third is for it to actually be processed
        Duration::from_secs((T::BLOCK_PROCESSING_TIME_IN_SECONDS / 3).into())
      },
      Some(T::PROPOSED_BLOCK_SIZE_LIMIT),
    )
    .await
    .expect("Failed to create a new block proposal")
    .block
}

impl<T: TendermintValidator> TendermintAuthority<T> {
  // Authority which is capable of verifying commits
  pub(crate) fn stub(import: TendermintImport<T>) -> Self {
    Self { import, active: None }
  }

  async fn get_proposal(&self, header: &<T::Block as Block>::Header) -> T::Block {
    get_proposal(&self.active.as_ref().unwrap().env, &self.import, header, false).await
  }

  /// Create and run a new Tendermint Authority, proposing and voting on blocks.
  /// This should be spawned on a task as it will not return until the P2P stack shuts down.
  #[allow(clippy::too_many_arguments, clippy::new_ret_no_self)]
  pub async fn new(
    genesis: SystemTime,
    protocol: ProtocolName,
    import: TendermintImport<T>,
    keys: Arc<dyn CryptoStore>,
    providers: T::CIDP,
    spawner: impl SpawnEssentialNamed,
    env: T::Environment,
    network: T::Network,
    registry: Option<&Registry>,
  ) {
    // This should only have a single value, yet a bounded channel with a capacity of 1 would cause
    // a firm bound. It's not worth having a backlog crash the node since we aren't constrained
    let (new_block_event_send, mut new_block_event_recv) = mpsc::unbounded();
    let (msg_send, mut msg_recv) = mpsc::unbounded();

    // Move the env into an Arc
    let env = Arc::new(Mutex::new(env));

    // Scoped so the temporary variables used here don't leak
    let (block_in_progress, mut gossip, TendermintHandle { mut step, mut messages, machine }) = {
      // Get the info necessary to spawn the machine
      let info = import.client.info();

      // Header::Number: TryInto<u64> doesn't implement Debug and can't be unwrapped
      let last_block: u64 = match info.finalized_number.try_into() {
        Ok(best) => best,
        Err(_) => panic!("BlockNumber exceeded u64"),
      };
      let last_hash = info.finalized_hash;

      let last_time = {
        // Convert into a Unix timestamp
        let genesis = genesis.duration_since(UNIX_EPOCH).unwrap().as_secs();

        // Get the last block's time by grabbing its commit and reading the time from that
        Commit::<TendermintValidators<T>>::decode(
          &mut import
            .client
            .justifications(last_hash)
            .unwrap()
            .map(|justifications| justifications.get(CONSENSUS_ID).cloned().unwrap())
            .unwrap_or_default()
            .as_ref(),
        )
        .map(|commit| commit.end_time)
        // The commit provides the time its block ended at
        // The genesis time is when the network starts
        // Accordingly, the end of the genesis block is a block time after the genesis time
        .unwrap_or_else(|_| genesis + u64::from(Self::block_time()))
      };

      let next_block = last_block + 1;
      // Shared references between us and the Tendermint machine (and its actions via its Network
      // trait)
      let block_in_progress = Arc::new(RwLock::new(next_block));

      // Write the providers into the import so it can verify inherents
      *import.providers.write().await = Some(providers);

      let authority = Self {
        import: import.clone(),
        active: Some(ActiveAuthority {
          signer: TendermintSigner(keys, import.validators.clone()),

          block_in_progress: block_in_progress.clone(),
          new_block_event: new_block_event_send,
          gossip: msg_send,

          env: env.clone(),
          announce: network.clone(),
        }),
      };

      // Get our first proposal
      let proposal = authority
        .get_proposal(&import.client.header(BlockId::Hash(last_hash)).unwrap().unwrap())
        .await;

      // Create the gossip network
      // This has to be done before spawning the machine, else gossip fails for some reason
      let gossip = GossipEngine::new(
        network,
        protocol,
        Arc::new(TendermintGossip::new(block_in_progress.clone(), import.validators.clone())),
        registry,
      );

      (
        block_in_progress,
        gossip,
        TendermintMachine::new(authority, BlockNumber(last_block), last_time, proposal).await,
      )
    };
    spawner.spawn_essential("machine", Some("tendermint"), Box::pin(machine.run()));

    // Start receiving messages about the Tendermint process for this block
    let mut gossip_recv =
      gossip.messages_for(TendermintGossip::<T>::topic(*block_in_progress.read().unwrap()));

    // Get finality events from Substrate
    let mut finality = import.client.finality_notification_stream();

    loop {
      futures::select_biased! {
        // GossipEngine closed down
        _ = gossip => {
          debug!(
            target: "tendermint",
            "GossipEngine shut down. {}",
            "Is the node shutting down?"
          );
          break;
        },

        // Synced a block from the network
        notif = finality.next() => {
          if let Some(notif) = notif {
            let number = match (*notif.header.number()).try_into() {
              Ok(number) => number,
              Err(_) => panic!("BlockNumber exceeded u64"),
            };

            // There's a race condition between the machine add_block and this
            // Both wait for a write lock on this ref and don't release it until after updating it
            // accordingly
            {
              let mut block_in_progress = block_in_progress.write().unwrap();
              if number < *block_in_progress {
                continue;
              }
              let next_block = number + 1;
              *block_in_progress = next_block;
              gossip_recv = gossip.messages_for(TendermintGossip::<T>::topic(next_block));
            }

            let justifications = import.client.justifications(notif.hash).unwrap().unwrap();
            step.send((
              BlockNumber(number),
              Commit::decode(&mut justifications.get(CONSENSUS_ID).unwrap().as_ref()).unwrap(),
              // This will fail if syncing occurs radically faster than machine stepping takes
              // TODO: Set true when initial syncing
              get_proposal(&env, &import, &notif.header, false).await
            )).await.unwrap();
          } else {
            debug!(
              target: "tendermint",
              "Finality notification stream closed down. {}",
              "Is the node shutting down?"
            );
            break;
          }
        },

        // Machine accomplished a new block
        new_block = new_block_event_recv.next() => {
          if new_block.is_some() {
            gossip_recv = gossip.messages_for(
              TendermintGossip::<T>::topic(*block_in_progress.read().unwrap())
            );
          } else {
            debug!(
              target: "tendermint",
              "Block notification stream shut down. {}",
              "Is the node shutting down?"
            );
            break;
          }
        },

        // Message to broadcast
        msg = msg_recv.next() => {
          if let Some(msg) = msg {
            let topic = TendermintGossip::<T>::topic(msg.block().0);
            gossip.gossip_message(topic, msg.encode(), false);
          } else {
            debug!(
              target: "tendermint",
              "Machine's message channel shut down. {}",
              "Is the node shutting down?"
            );
            break;
          }
        },

        // Received a message
        msg = gossip_recv.next() => {
          if let Some(msg) = msg {
            messages.send(
              match SignedMessage::decode(&mut msg.message.as_ref()) {
                Ok(msg) => msg,
                Err(e) => {
                  // This is guaranteed to be valid thanks to the gossip validator, assuming
                  // that pipeline is correct. This doesn't panic as a hedge
                  error!(target: "tendermint", "Couldn't decode valid message: {}", e);
                  continue;
                }
              }
            ).await.unwrap();
          } else {
            debug!(
              target: "tendermint",
              "Gossip channel shut down. {}",
              "Is the node shutting down?"
            );
            break;
          }
        }
      }
    }
  }
}

#[async_trait]
impl<T: TendermintValidator> Network for TendermintAuthority<T> {
  type ValidatorId = u16;
  type SignatureScheme = TendermintValidators<T>;
  type Weights = TendermintValidators<T>;
  type Block = T::Block;

  const BLOCK_PROCESSING_TIME: u32 = T::BLOCK_PROCESSING_TIME_IN_SECONDS;
  const LATENCY_TIME: u32 = T::LATENCY_TIME_IN_SECONDS;

  fn signer(&self) -> TendermintSigner<T> {
    self.active.as_ref().unwrap().signer.clone()
  }

  fn signature_scheme(&self) -> TendermintValidators<T> {
    self.import.validators.clone()
  }

  fn weights(&self) -> TendermintValidators<T> {
    self.import.validators.clone()
  }

  async fn broadcast(
    &mut self,
    msg: SignedMessage<u16, Self::Block, <TendermintValidators<T> as SignatureScheme>::Signature>,
  ) {
    if self.active.as_mut().unwrap().gossip.unbounded_send(msg).is_err() {
      warn!(
        target: "tendermint",
        "Attempted to broadcast a message except the gossip channel is closed. {}",
        "Is the node shutting down?"
      );
    }
  }

  async fn slash(&mut self, validator: u16) {
    // TODO
    error!("slashing {}, if this is a local network, this shouldn't happen", validator);
  }

  // The Tendermint machine will call add_block for any block which is committed to, regardless of
  // validity. To determine validity, it expects a validate function, which Substrate doesn't
  // directly offer, and an add function. In order to comply with Serai's modified view of inherent
  // transactions, validate MUST check inherents, yet add_block must not.
  //
  // In order to acquire a validate function, any block proposed by a legitimate proposer is
  // imported. This performs full validation and makes the block available as a tip. While this
  // would be incredibly unsafe thanks to the unchecked inherents, it's defined as a tip with less
  // work, despite being a child of some parent. This means it won't be moved to nor operated on by
  // the node.
  //
  // When Tendermint completes, the block is finalized, setting it as the tip regardless of work.
  async fn validate(&mut self, block: &T::Block) -> Result<(), BlockError> {
    let hash = block.hash();
    let (header, body) = block.clone().deconstruct();
    let parent = *header.parent_hash();
    let number = *header.number();

    // Can happen when we sync a block while also acting as a validator
    if number <= self.import.client.info().best_number {
      debug!(target: "tendermint", "Machine proposed a block for a slot we've already synced");
      Err(BlockError::Temporal)?;
    }

    let mut queue_write = self.import.queue.write().await;
    *self.import.importing_block.write().unwrap() = Some(hash);

    queue_write.as_mut().unwrap().import_blocks(
      BlockOrigin::ConsensusBroadcast, // TODO: Use BlockOrigin::Own when it's our block
      vec![IncomingBlock {
        hash,
        header: Some(header),
        body: Some(body),
        indexed_body: None,
        justifications: None,
        origin: None, // TODO
        allow_missing_state: false,
        skip_execution: false,
        import_existing: self.import.recheck.read().unwrap().contains(&hash),
        state: None,
      }],
    );

    ImportFuture::new(hash, queue_write.as_mut().unwrap()).await?;

    // Sanity checks that a child block can have less work than its parent
    {
      let info = self.import.client.info();
      assert_eq!(info.best_hash, parent);
      assert_eq!(info.finalized_hash, parent);
      assert_eq!(info.best_number, number - 1u8.into());
      assert_eq!(info.finalized_number, number - 1u8.into());
    }

    Ok(())
  }

  async fn add_block(
    &mut self,
    block: T::Block,
    commit: Commit<TendermintValidators<T>>,
  ) -> T::Block {
    // Prevent import_block from being called while we run
    let _lock = self.import.sync_lock.lock().await;

    // Check if we already imported this externally
    if self.import.client.justifications(block.hash()).unwrap().is_some() {
      debug!(target: "tendermint", "Machine produced a commit after we already synced it");
    } else {
      let hash = block.hash();
      let justification = (CONSENSUS_ID, commit.encode());
      debug_assert!(self.import.verify_justification(hash, &justification).is_ok());

      let raw_number = *block.header().number();
      let number: u64 = match raw_number.try_into() {
        Ok(number) => number,
        Err(_) => panic!("BlockNumber exceeded u64"),
      };

      let active = self.active.as_mut().unwrap();
      let mut block_in_progress = active.block_in_progress.write().unwrap();
      // This will hold true unless we received, and handled, a notification for the block before
      // its justification was made available
      debug_assert_eq!(number, *block_in_progress);

      // Finalize the block
      self
        .import
        .client
        .finalize_block(hash, Some(justification), true)
        .map_err(|_| Error::InvalidJustification)
        .unwrap();

      // Tell the loop we received a block and to move to the next
      *block_in_progress = number + 1;
      if active.new_block_event.unbounded_send(()).is_err() {
        warn!(
          target: "tendermint",
          "Attempted to send a new number to the gossip handler except it's closed. {}",
          "Is the node shutting down?"
        );
      }

      // Announce the block to the network so new clients can sync properly
      active.announce.announce_block(hash, None);
      active.announce.new_best_block_imported(hash, raw_number);
    }

    // Clear any blocks for the previous slot which we were willing to recheck
    *self.import.recheck.write().unwrap() = HashSet::new();

    self.get_proposal(block.header()).await
  }
}
substrate/tendermint/client/src/block_import.rs (new file)
@@ -0,0 +1,182 @@
use std::{marker::PhantomData, sync::Arc, collections::HashMap};

use async_trait::async_trait;

use sp_api::BlockId;
use sp_runtime::traits::{Header, Block};
use sp_blockchain::{BlockStatus, HeaderBackend, Backend as BlockchainBackend};
use sp_consensus::{Error, CacheKeyId, BlockOrigin, SelectChain};

use sc_consensus::{BlockCheckParams, BlockImportParams, ImportResult, BlockImport, Verifier};

use sc_client_api::{Backend, BlockBackend};

use crate::{TendermintValidator, tendermint::TendermintImport};

impl<T: TendermintValidator> TendermintImport<T> {
  fn check_already_in_chain(&self, hash: <T::Block as Block>::Hash) -> bool {
    let id = BlockId::Hash(hash);
    // If it's in chain, with justifications, return it's already on chain
    // If it's in chain, without justifications, continue the block import process to import its
    // justifications
    // This can be triggered if the validators add a block, without justifications, yet the p2p
    // process then broadcasts it with its justifications
    (self.client.status(id).unwrap() == BlockStatus::InChain) &&
      self.client.justifications(hash).unwrap().is_some()
  }
}

#[async_trait]
impl<T: TendermintValidator> BlockImport<T::Block> for TendermintImport<T>
where
  Arc<T::Client>: BlockImport<T::Block, Transaction = T::BackendTransaction>,
  <Arc<T::Client> as BlockImport<T::Block>>::Error: Into<Error>,
{
  type Error = Error;
  type Transaction = T::BackendTransaction;

  // TODO: Is there a DoS where you send a block without justifications, causing it to error,
  // yet adding it to the blacklist in the process preventing further syncing?
  async fn check_block(
    &mut self,
    mut block: BlockCheckParams<T::Block>,
  ) -> Result<ImportResult, Self::Error> {
    if self.check_already_in_chain(block.hash) {
      return Ok(ImportResult::AlreadyInChain);
    }
    self.verify_order(block.parent_hash, block.number)?;

    // Does not verify origin here as origin only applies to unfinalized blocks
    // We don't have context on if this block has justifications or not

    block.allow_missing_state = false;
    block.allow_missing_parent = false;

    self.client.check_block(block).await.map_err(Into::into)
  }

  async fn import_block(
    &mut self,
    mut block: BlockImportParams<T::Block, Self::Transaction>,
    new_cache: HashMap<CacheKeyId, Vec<u8>>,
  ) -> Result<ImportResult, Self::Error> {
    // Don't allow multiple blocks to be imported at once
    let _lock = self.sync_lock.lock().await;

    if self.check_already_in_chain(block.header.hash()) {
      return Ok(ImportResult::AlreadyInChain);
    }

    self.check(&mut block).await?;
    self.client.import_block(block, new_cache).await.map_err(Into::into)
  }
}

#[async_trait]
impl<T: TendermintValidator> Verifier<T::Block> for TendermintImport<T>
where
  Arc<T::Client>: BlockImport<T::Block, Transaction = T::BackendTransaction>,
  <Arc<T::Client> as BlockImport<T::Block>>::Error: Into<Error>,
{
  async fn verify(
    &mut self,
    mut block: BlockImportParams<T::Block, ()>,
  ) -> Result<(BlockImportParams<T::Block, ()>, Option<Vec<(CacheKeyId, Vec<u8>)>>), String> {
    block.origin = match block.origin {
      BlockOrigin::Genesis => BlockOrigin::Genesis,
      BlockOrigin::NetworkBroadcast => BlockOrigin::NetworkBroadcast,

      // Re-map NetworkInitialSync to NetworkBroadcast so it still triggers notifications
      // Tendermint will listen to the finality stream. If we sync a block we're running a machine
      // for, it'll force the machine to move ahead. We can only do that if there actually are
      // notifications
      //
      // Then Serai also runs data indexing code based on block addition, so ensuring it always
      // emits events ensures we always perform our necessary indexing (albeit with a race
      // condition since Substrate will eventually prune the block's state, potentially before
      // indexing finishes when syncing)
      //
      // The alternative to this would be editing Substrate directly, which would be a lot less
      // fragile, manually triggering the notifications (which may be possible with code intended
      // for testing), writing our own notification system, or implementing lock_import_and_run
      // on our end, letting us directly set the notifications, so we're not beholden to when
      // Substrate decides to call notify_finalized
      //
      // lock_import_and_run unfortunately doesn't allow async code and generally isn't feasible to
      // work with though. We also couldn't use it to prevent Substrate from creating
      // notifications, so it only solves half the problem. We'd *still* have to keep this patch,
      // with all its fragility, unless we edit Substrate or move the entire block import flow here
      BlockOrigin::NetworkInitialSync => BlockOrigin::NetworkBroadcast,
      // Also re-map File so bootstraps also trigger notifications, enabling using bootstraps
      BlockOrigin::File => BlockOrigin::NetworkBroadcast,

      // We do not want this block, which hasn't been confirmed, to be broadcast over the net
      // Substrate will generate notifications unless it's Genesis, which this isn't, InitialSync,
      // which changes telemetry behavior, or File, which is... close enough
      BlockOrigin::ConsensusBroadcast => BlockOrigin::File,
      BlockOrigin::Own => BlockOrigin::File,
    };

    if self.check_already_in_chain(block.header.hash()) {
      return Ok((block, None));
    }

    self.check(&mut block).await.map_err(|e| format!("{}", e))?;
    Ok((block, None))
  }
}

/// Tendermint's Select Chain, where the best chain is defined as the most recently finalized
/// block.
///
/// leaves panics on call due to not being applicable under Tendermint. Any provided answer would
/// have conflicts best left unraised.
//
// SelectChain, while provided by Substrate and part of PartialComponents, isn't used by Substrate
// It's common between various block-production/finality crates, yet Substrate as a system doesn't
// rely on it, which is good, because its definition is explicitly incompatible with Tendermint
//
// leaves is supposed to return all leaves of the blockchain. While Tendermint maintains that view,
// an honest node will only build on the most recently finalized block, so it is a 'leaf' despite
// having descendants
//
// best_chain will always be this finalized block, yet Substrate explicitly defines it as one of
// the above leaves, which this finalized block is explicitly not included in. Accordingly, we
// can never provide a compatible decision
//
// Since PartialComponents expects it, an implementation which does its best is provided. It panics
// if leaves is called, yet returns the finalized chain tip for best_chain, as that's intended to
// be the header to build upon
pub struct TendermintSelectChain<B: Block, Be: Backend<B>>(Arc<Be>, PhantomData<B>);

impl<B: Block, Be: Backend<B>> Clone for TendermintSelectChain<B, Be> {
  fn clone(&self) -> Self {
    TendermintSelectChain(self.0.clone(), PhantomData)
  }
}

impl<B: Block, Be: Backend<B>> TendermintSelectChain<B, Be> {
  pub fn new(backend: Arc<Be>) -> TendermintSelectChain<B, Be> {
    TendermintSelectChain(backend, PhantomData)
  }
}

#[async_trait]
impl<B: Block, Be: Backend<B>> SelectChain<B> for TendermintSelectChain<B, Be> {
  async fn leaves(&self) -> Result<Vec<B::Hash>, Error> {
    panic!("Substrate definition of leaves is incompatible with Tendermint")
  }

  async fn best_chain(&self) -> Result<B::Header, Error> {
    Ok(
      self
        .0
        .blockchain()
        // There should always be a finalized block
        .header(BlockId::Hash(self.0.blockchain().last_finalized().unwrap()))
        // There should not be an error in retrieving it and since it's finalized, it should exist
        .unwrap()
        .unwrap(),
    )
  }
}
163
substrate/tendermint/client/src/lib.rs
Normal file
163
substrate/tendermint/client/src/lib.rs
Normal file
@@ -0,0 +1,163 @@
|
||||
use std::sync::Arc;
|
||||
|
||||
use sp_core::crypto::KeyTypeId;
|
||||
use sp_inherents::CreateInherentDataProviders;
|
||||
use sp_runtime::traits::{Header, Block};
|
||||
use sp_blockchain::HeaderBackend;
|
||||
use sp_api::{StateBackend, StateBackendFor, TransactionFor, ApiExt, ProvideRuntimeApi};
|
||||
use sp_consensus::{Error, Environment};
|
||||
|
||||
use sc_client_api::{BlockBackend, Backend, Finalizer, BlockchainEvents};
|
||||
use sc_block_builder::BlockBuilderApi;
|
||||
use sc_consensus::{BlockImport, BasicQueue};
|
||||
|
||||
use sc_network_common::config::NonDefaultSetConfig;
|
||||
use sc_network::{ProtocolName, NetworkBlock};
|
||||
use sc_network_gossip::Network;
|
||||
|
||||
use sp_tendermint::TendermintApi;
|
||||
|
||||
use substrate_prometheus_endpoint::Registry;
|
||||
|
||||
mod validators;
|
||||
|
||||
pub(crate) mod tendermint;
|
||||
pub use tendermint::TendermintImport;
|
||||
|
||||
mod block_import;
|
||||
pub use block_import::TendermintSelectChain;
|
||||
|
||||
pub(crate) mod authority;
|
||||
pub use authority::TendermintAuthority;
|
||||
|
||||
pub const CONSENSUS_ID: [u8; 4] = *b"tend";
|
||||
pub(crate) const KEY_TYPE_ID: KeyTypeId = KeyTypeId(CONSENSUS_ID);
|
||||
|
||||
const PROTOCOL_NAME: &str = "/tendermint/1";
|
||||
|
||||
pub fn protocol_name<Hash: AsRef<[u8]>>(genesis: Hash, fork: Option<&str>) -> ProtocolName {
|
||||
let mut name = format!("/{}", hex::encode(genesis.as_ref()));
|
||||
if let Some(fork) = fork {
|
||||
name += &format!("/{}", fork);
|
||||
}
|
||||
name += PROTOCOL_NAME;
|
||||
name.into()
|
||||
}
|
||||
|
||||
pub fn set_config(protocol: ProtocolName, block_size: u64) -> NonDefaultSetConfig {
|
||||
// The extra 512 bytes is for the additional data part of Tendermint
|
||||
// Even with BLS, that should just be 161 bytes in the worst case, for a perfect messaging scheme
|
||||
// While 256 bytes would suffice there, it's unknown if any LibP2P overhead exists nor if
|
||||
// anything here will be perfect. Considering this is miniscule compared to the block size, it's
|
||||
// better safe than sorry.
|
||||
let mut cfg = NonDefaultSetConfig::new(protocol, block_size + 512);
|
||||
cfg.allow_non_reserved(25, 25);
|
||||
cfg
|
||||
}
|
||||
|
||||
/// Trait consolidating all generics required by sc_tendermint for processing.
|
||||
pub trait TendermintClient: Send + Sync + 'static {
|
||||
const PROPOSED_BLOCK_SIZE_LIMIT: usize;
|
||||
const BLOCK_PROCESSING_TIME_IN_SECONDS: u32;
|
||||
const LATENCY_TIME_IN_SECONDS: u32;
|
||||
|
||||
type Block: Block;
|
||||
type Backend: Backend<Self::Block> + 'static;
|
||||
|
||||
/// TransactionFor<Client, Block>
|
||||
type BackendTransaction: Send + Sync + 'static;
|
||||
/// StateBackendFor<Client, Block>
|
||||
type StateBackend: StateBackend<
|
||||
<<Self::Block as Block>::Header as Header>::Hashing,
|
||||
Transaction = Self::BackendTransaction,
|
||||
>;
|
||||
// Client::Api
|
||||
type Api: ApiExt<Self::Block, StateBackend = Self::StateBackend>
|
||||
+ BlockBuilderApi<Self::Block>
|
||||
+ TendermintApi<Self::Block>;
|
||||
type Client: Send
|
||||
+ Sync
|
||||
+ HeaderBackend<Self::Block>
|
||||
+ BlockBackend<Self::Block>
|
||||
+ BlockImport<Self::Block, Transaction = Self::BackendTransaction>
|
||||
+ Finalizer<Self::Block, Self::Backend>
|
||||
+ BlockchainEvents<Self::Block>
|
||||
+ ProvideRuntimeApi<Self::Block, Api = Self::Api>
|
||||
+ 'static;
|
||||
}
|
||||
|
||||
/// Trait implementable on firm types to automatically provide a full TendermintClient impl.
|
||||
pub trait TendermintClientMinimal: Send + Sync + 'static {
|
||||
const PROPOSED_BLOCK_SIZE_LIMIT: usize;
|
||||
const BLOCK_PROCESSING_TIME_IN_SECONDS: u32;
|
||||
const LATENCY_TIME_IN_SECONDS: u32;
|
||||
|
||||
type Block: Block;
|
||||
type Backend: Backend<Self::Block> + 'static;
|
||||
type Api: ApiExt<Self::Block> + BlockBuilderApi<Self::Block> + TendermintApi<Self::Block>;
|
||||
type Client: Send
|
||||
+ Sync
|
||||
+ HeaderBackend<Self::Block>
|
||||
+ BlockBackend<Self::Block>
|
||||
+ BlockImport<Self::Block, Transaction = TransactionFor<Self::Client, Self::Block>>
|
||||
+ Finalizer<Self::Block, Self::Backend>
|
||||
+ BlockchainEvents<Self::Block>
|
||||
+ ProvideRuntimeApi<Self::Block, Api = Self::Api>
|
||||
+ 'static;
|
||||
}
|
||||
|
||||
impl<T: TendermintClientMinimal> TendermintClient for T
|
||||
where
|
||||
<T::Client as ProvideRuntimeApi<T::Block>>::Api:
|
||||
BlockBuilderApi<T::Block> + TendermintApi<T::Block>,
|
||||
TransactionFor<T::Client, T::Block>: Send + Sync + 'static,
|
||||
{
|
||||
const PROPOSED_BLOCK_SIZE_LIMIT: usize = T::PROPOSED_BLOCK_SIZE_LIMIT;
|
||||
const BLOCK_PROCESSING_TIME_IN_SECONDS: u32 = T::BLOCK_PROCESSING_TIME_IN_SECONDS;
|
||||
const LATENCY_TIME_IN_SECONDS: u32 = T::LATENCY_TIME_IN_SECONDS;
|
||||
|
||||
type Block = T::Block;
|
||||
type Backend = T::Backend;
|
||||
|
||||
type BackendTransaction = TransactionFor<T::Client, T::Block>;
|
||||
type StateBackend = StateBackendFor<T::Client, T::Block>;
|
||||
type Api = <T::Client as ProvideRuntimeApi<T::Block>>::Api;
|
||||
type Client = T::Client;
|
||||
}
/// Trait consolidating additional generics required by sc_tendermint for authoring.
pub trait TendermintValidator: TendermintClient {
  type CIDP: CreateInherentDataProviders<Self::Block, ()> + 'static;
  type Environment: Send + Sync + Environment<Self::Block> + 'static;

  type Network: Clone
    + Send
    + Sync
    + Network<Self::Block>
    + NetworkBlock<<Self::Block as Block>::Hash, <<Self::Block as Block>::Header as Header>::Number>
    + 'static;
}

pub type TendermintImportQueue<Block, Transaction> = BasicQueue<Block, Transaction>;

/// Create an import queue, additionally returning the Tendermint Import object itself, enabling
/// creating an author later as well.
pub fn import_queue<T: TendermintValidator>(
  spawner: &impl sp_core::traits::SpawnEssentialNamed,
  client: Arc<T::Client>,
  registry: Option<&Registry>,
) -> (TendermintImport<T>, TendermintImportQueue<T::Block, T::BackendTransaction>)
where
  Arc<T::Client>: BlockImport<T::Block, Transaction = T::BackendTransaction>,
  <Arc<T::Client> as BlockImport<T::Block>>::Error: Into<Error>,
{
  let import = TendermintImport::<T>::new(client);

  let boxed = Box::new(import.clone());
  // Use None for the justification importer since justifications always come with blocks
  // Therefore, they're never imported after the fact, which is what mandates an importer
  let queue = || BasicQueue::new(import.clone(), boxed.clone(), None, spawner, registry);

  *futures::executor::block_on(import.queue.write()) = Some(queue());
  (import.clone(), queue())
}
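// Hedged usage sketch, not part of this diff: a node's service builder could wire the queue in
// roughly as follows. `TendermintClientConfig` continues the hypothetical type from the earlier
// sketch; `task_manager`, `client`, and `prometheus_registry` stand in for the node's actual
// service values.
//
// let (block_import, import_queue) = import_queue::<TendermintClientConfig>(
//   &task_manager.spawn_essential_handle(),
//   client.clone(),
//   prometheus_registry.as_ref(),
// );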
247 substrate/tendermint/client/src/tendermint.rs Normal file
@@ -0,0 +1,247 @@
use std::{
  sync::{Arc, RwLock},
  collections::HashSet,
};

use log::{debug, warn};

use tokio::sync::{Mutex, RwLock as AsyncRwLock};

use sp_core::Decode;
use sp_runtime::{
  traits::{Header, Block},
  Justification,
};
use sp_inherents::{InherentData, InherentDataProvider, CreateInherentDataProviders};
use sp_blockchain::HeaderBackend;
use sp_api::{BlockId, ProvideRuntimeApi};

use sp_consensus::Error;
use sc_consensus::{ForkChoiceStrategy, BlockImportParams};

use sc_block_builder::BlockBuilderApi;

use tendermint_machine::ext::{BlockError, Commit, Network};

use crate::{
  CONSENSUS_ID, TendermintClient, TendermintValidator, validators::TendermintValidators,
  TendermintImportQueue, authority::TendermintAuthority,
};

type InstantiatedTendermintImportQueue<T> = TendermintImportQueue<
  <T as TendermintClient>::Block,
  <T as TendermintClient>::BackendTransaction,
>;

/// Tendermint import handler.
pub struct TendermintImport<T: TendermintValidator> {
  // Lock ensuring only one block is imported at a time
  pub(crate) sync_lock: Arc<Mutex<()>>,

  pub(crate) validators: TendermintValidators<T>,

  pub(crate) providers: Arc<AsyncRwLock<Option<T::CIDP>>>,
  pub(crate) importing_block: Arc<RwLock<Option<<T::Block as Block>::Hash>>>,

  // A set of blocks which we're willing to recheck
  // We reject blocks with invalid inherents, yet inherents can be fatally flawed or solely
  // perceived as flawed
  // If we solely perceive them as flawed, we mark them as eligible for being checked again. Then,
  // if they're proposed again, we see if our perception has changed
  pub(crate) recheck: Arc<RwLock<HashSet<<T::Block as Block>::Hash>>>,

  pub(crate) client: Arc<T::Client>,
  pub(crate) queue: Arc<AsyncRwLock<Option<InstantiatedTendermintImportQueue<T>>>>,
}

impl<T: TendermintValidator> Clone for TendermintImport<T> {
  fn clone(&self) -> Self {
    TendermintImport {
      sync_lock: self.sync_lock.clone(),

      validators: self.validators.clone(),

      providers: self.providers.clone(),
      importing_block: self.importing_block.clone(),
      recheck: self.recheck.clone(),

      client: self.client.clone(),
      queue: self.queue.clone(),
    }
  }
}

impl<T: TendermintValidator> TendermintImport<T> {
  pub(crate) fn new(client: Arc<T::Client>) -> TendermintImport<T> {
    TendermintImport {
      sync_lock: Arc::new(Mutex::new(())),

      validators: TendermintValidators::new(client.clone()),

      providers: Arc::new(AsyncRwLock::new(None)),
      importing_block: Arc::new(RwLock::new(None)),
      recheck: Arc::new(RwLock::new(HashSet::new())),

      client,
      queue: Arc::new(AsyncRwLock::new(None)),
    }
  }

  pub(crate) async fn inherent_data(&self, parent: <T::Block as Block>::Hash) -> InherentData {
    match self
      .providers
      .read()
      .await
      .as_ref()
      .unwrap()
      .create_inherent_data_providers(parent, ())
      .await
    {
      Ok(providers) => match providers.create_inherent_data() {
        Ok(data) => Some(data),
        Err(err) => {
          warn!(target: "tendermint", "Failed to create inherent data: {}", err);
          None
        }
      },
      Err(err) => {
        warn!(target: "tendermint", "Failed to create inherent data providers: {}", err);
        None
      }
    }
    .unwrap_or_else(InherentData::new)
  }

  async fn check_inherents(
    &self,
    hash: <T::Block as Block>::Hash,
    block: T::Block,
  ) -> Result<(), Error> {
    let inherent_data = self.inherent_data(*block.header().parent_hash()).await;
    let err = self
      .client
      .runtime_api()
      .check_inherents(&BlockId::Hash(self.client.info().finalized_hash), block, inherent_data)
      .map_err(|_| Error::Other(BlockError::Fatal.into()))?;

    if err.ok() {
      self.recheck.write().unwrap().remove(&hash);
      Ok(())
    } else if err.fatal_error() {
      Err(Error::Other(BlockError::Fatal.into()))
    } else {
      debug!(target: "tendermint", "Proposed block has temporally wrong inherents");
      self.recheck.write().unwrap().insert(hash);
      Err(Error::Other(BlockError::Temporal.into()))
    }
  }

  // Ensure this is part of a sequential import
  pub(crate) fn verify_order(
    &self,
    parent: <T::Block as Block>::Hash,
    number: <<T::Block as Block>::Header as Header>::Number,
  ) -> Result<(), Error> {
    let info = self.client.info();
    if (info.finalized_hash != parent) || ((info.finalized_number + 1u16.into()) != number) {
      Err(Error::Other("non-sequential import".into()))?;
    }
    Ok(())
  }

  // Do not allow blocks from the traditional network to be broadcast
  // Only allow blocks from Tendermint
  // Tendermint's propose message could be rewritten as a seal OR Tendermint could produce blocks
  // which this checks the proposer slot for, and then tells the Tendermint machine
  // While those would be more seamless with Substrate, there's no actual benefit to doing so
  fn verify_origin(&self, hash: <T::Block as Block>::Hash) -> Result<(), Error> {
    if let Some(tm_hash) = *self.importing_block.read().unwrap() {
      if hash == tm_hash {
        return Ok(());
      }
    }
    Err(Error::Other("block created outside of tendermint".into()))
  }

  // Errors if the justification isn't valid
  pub(crate) fn verify_justification(
    &self,
    hash: <T::Block as Block>::Hash,
    justification: &Justification,
  ) -> Result<(), Error> {
    if justification.0 != CONSENSUS_ID {
      Err(Error::InvalidJustification)?;
    }

    let commit: Commit<TendermintValidators<T>> =
      Commit::decode(&mut justification.1.as_ref()).map_err(|_| Error::InvalidJustification)?;
    // Create a stubbed TendermintAuthority so we can verify the commit
    if !TendermintAuthority::stub(self.clone()).verify_commit(hash, &commit) {
      Err(Error::InvalidJustification)?;
    }
    Ok(())
  }
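  // Hedged sketch of the justification shape this expects (illustrative, not from this diff):
  // Substrate's Justification is a (ConsensusEngineId, Vec<u8>) pair, so a valid Tendermint
  // justification pairs CONSENSUS_ID with a SCALE-encoded Commit over the block hash, e.g.
  //
  //   use sp_core::Encode;
  //   let justification: Justification = (CONSENSUS_ID, commit.encode());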
  // Verifies the justifications aren't malformed, not that the block is justified
  // Errors if justifications is neither empty nor a single Tendermint justification
  // If the block does have a justification, finalized will be set to true
  fn verify_justifications<BT>(
    &self,
    block: &mut BlockImportParams<T::Block, BT>,
  ) -> Result<(), Error> {
    if !block.finalized {
      if let Some(justifications) = &block.justifications {
        let mut iter = justifications.iter();
        let next = iter.next();
        if next.is_none() || iter.next().is_some() {
          Err(Error::InvalidJustification)?;
        }
        self.verify_justification(block.header.hash(), next.unwrap())?;
        block.finalized = true;
      }
    }
    Ok(())
  }

  pub(crate) async fn check<BT>(
    &self,
    block: &mut BlockImportParams<T::Block, BT>,
  ) -> Result<(), Error> {
    if block.finalized {
      if block.fork_choice != Some(ForkChoiceStrategy::Custom(false)) {
        // Since we always set the fork choice, this means something else marked the block as
        // finalized, which shouldn't be possible. Ensuring nothing else is setting blocks as
        // finalized helps ensure our security
        panic!("block was finalized despite not setting the fork choice");
      }
      return Ok(());
    }

    // Set the block as a worse choice
    block.fork_choice = Some(ForkChoiceStrategy::Custom(false));

    self.verify_order(*block.header.parent_hash(), *block.header.number())?;
    self.verify_justifications(block)?;

    // If the block wasn't finalized, verify the origin and validity of its inherents
    if !block.finalized {
      let hash = block.header.hash();
      self.verify_origin(hash)?;
      self
        .check_inherents(hash, T::Block::new(block.header.clone(), block.body.clone().unwrap()))
        .await?;
    }

    // Additionally check these fields are empty
    // They *should* be unused, so requiring their emptiness prevents malleability and ensures
    // nothing slips through
    if !block.post_digests.is_empty() {
      Err(Error::Other("post-digests included".into()))?;
    }
    if !block.auxiliary.is_empty() {
      Err(Error::Other("auxiliary included".into()))?;
    }

    Ok(())
  }
}
190 substrate/tendermint/client/src/validators.rs Normal file
@@ -0,0 +1,190 @@
use core::ops::Deref;
use std::sync::{Arc, RwLock};

use async_trait::async_trait;

use sp_core::Decode;
use sp_application_crypto::{
  RuntimePublic as PublicTrait,
  sr25519::{Public, Signature},
};
use sp_keystore::CryptoStore;

use sp_staking::SessionIndex;
use sp_api::{BlockId, ProvideRuntimeApi};

use sc_client_api::HeaderBackend;

use tendermint_machine::ext::{BlockNumber, RoundNumber, Weights, Signer, SignatureScheme};

use sp_tendermint::TendermintApi;

use crate::{KEY_TYPE_ID, TendermintClient};

struct TendermintValidatorsStruct {
  session: SessionIndex,

  total_weight: u64,
  weights: Vec<u64>,

  lookup: Vec<Public>,
}

impl TendermintValidatorsStruct {
  fn from_module<T: TendermintClient>(client: &Arc<T::Client>) -> Self {
    let last = client.info().finalized_hash;
    let api = client.runtime_api();
    let session = api.current_session(&BlockId::Hash(last)).unwrap();
    let validators = api.validators(&BlockId::Hash(last)).unwrap();

    Self {
      session,

      // TODO
      total_weight: validators.len().try_into().unwrap(),
      weights: vec![1; validators.len()],

      lookup: validators,
    }
  }
}

// Wrap every access of the validators struct in something which forces calling refresh
struct Refresh<T: TendermintClient> {
  client: Arc<T::Client>,
  _refresh: Arc<RwLock<TendermintValidatorsStruct>>,
}

impl<T: TendermintClient> Refresh<T> {
  // If the session has changed, re-create the struct with the data on it
  fn refresh(&self) {
    let session = self._refresh.read().unwrap().session;
    let current_block = BlockId::Hash(self.client.info().finalized_hash);
    if session != self.client.runtime_api().current_session(&current_block).unwrap() {
      *self._refresh.write().unwrap() = TendermintValidatorsStruct::from_module::<T>(&self.client);
    }
  }
}

impl<T: TendermintClient> Deref for Refresh<T> {
  type Target = RwLock<TendermintValidatorsStruct>;
  fn deref(&self) -> &RwLock<TendermintValidatorsStruct> {
    self.refresh();
    &self._refresh
  }
}

/// Tendermint validators observer, providing data on the active validators.
pub struct TendermintValidators<T: TendermintClient>(Refresh<T>);
impl<T: TendermintClient> Clone for TendermintValidators<T> {
  fn clone(&self) -> Self {
    Self(Refresh { _refresh: self.0._refresh.clone(), client: self.0.client.clone() })
  }
}

impl<T: TendermintClient> TendermintValidators<T> {
  pub(crate) fn new(client: Arc<T::Client>) -> TendermintValidators<T> {
    TendermintValidators(Refresh {
      _refresh: Arc::new(RwLock::new(TendermintValidatorsStruct::from_module::<T>(&client))),
      client,
    })
  }
}

pub struct TendermintSigner<T: TendermintClient>(
  pub(crate) Arc<dyn CryptoStore>,
  pub(crate) TendermintValidators<T>,
);

impl<T: TendermintClient> Clone for TendermintSigner<T> {
  fn clone(&self) -> Self {
    Self(self.0.clone(), self.1.clone())
  }
}

impl<T: TendermintClient> TendermintSigner<T> {
  async fn get_public_key(&self) -> Public {
    let pubs = self.0.sr25519_public_keys(KEY_TYPE_ID).await;
    if pubs.is_empty() {
      self.0.sr25519_generate_new(KEY_TYPE_ID, None).await.unwrap()
    } else {
      pubs[0]
    }
  }
}

#[async_trait]
impl<T: TendermintClient> Signer for TendermintSigner<T> {
  type ValidatorId = u16;
  type Signature = Signature;

  async fn validator_id(&self) -> Option<u16> {
    let key = self.get_public_key().await;
    for (i, k) in (*self.1 .0).read().unwrap().lookup.iter().enumerate() {
      if k == &key {
        return Some(u16::try_from(i).unwrap());
      }
    }
    None
  }

  async fn sign(&self, msg: &[u8]) -> Signature {
    Signature::decode(
      &mut self
        .0
        .sign_with(KEY_TYPE_ID, &self.get_public_key().await.into(), msg)
        .await
        .unwrap()
        .unwrap()
        .as_ref(),
    )
    .unwrap()
  }
}

impl<T: TendermintClient> SignatureScheme for TendermintValidators<T> {
  type ValidatorId = u16;
  type Signature = Signature;
  type AggregateSignature = Vec<Signature>;
  type Signer = TendermintSigner<T>;

  fn verify(&self, validator: u16, msg: &[u8], sig: &Signature) -> bool {
    self.0.read().unwrap().lookup[usize::try_from(validator).unwrap()].verify(&msg, sig)
  }

  fn aggregate(sigs: &[Signature]) -> Vec<Signature> {
    sigs.to_vec()
  }

  fn verify_aggregate(&self, validators: &[u16], msg: &[u8], sigs: &Vec<Signature>) -> bool {
    if validators.len() != sigs.len() {
      return false;
    }
    for (v, sig) in validators.iter().zip(sigs.iter()) {
      if !self.verify(*v, msg, sig) {
        return false;
      }
    }
    true
  }
}

impl<T: TendermintClient> Weights for TendermintValidators<T> {
  type ValidatorId = u16;

  fn total_weight(&self) -> u64 {
    self.0.read().unwrap().total_weight
  }

  fn weight(&self, id: u16) -> u64 {
    self.0.read().unwrap().weights[usize::try_from(id).unwrap()]
  }

  // TODO: https://github.com/serai-dex/serai/issues/159
  fn proposer(&self, number: BlockNumber, round: RoundNumber) -> u16 {
    u16::try_from(
      (number.0 + u64::from(round.0)) % u64::try_from(self.0.read().unwrap().lookup.len()).unwrap(),
    )
    .unwrap()
  }
}
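// Illustrative only, not part of this diff: with equally weighted validators, the placeholder
// proposer selection above is a simple round-robin, advancing by one for every block and for
// every failed round within a block. The values below are examples (4 validators).
#[cfg(test)]
#[test]
fn proposer_rotation_example() {
  let validators = 4u64;
  // BlockNumber(10), RoundNumber(0) selects validator (10 + 0) % 4 = 2
  assert_eq!((10 + 0) % validators, 2);
  // A failed round at the same height moves on to validator (10 + 1) % 4 = 3
  assert_eq!((10 + 1) % validators, 3);
}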