Move develop to patch-polkadot-sdk (#678)

* Update `build-dependencies` CI action

* Update `develop` to `patch-polkadot-sdk`

Allows us to finally remove the old `serai-dex/substrate` repository _and_
should have CI pass without issue on `develop` again.

The changes made here should be trivial and maintain all prior
behavior/functionality. The most notable changes are to `chain_spec.rs`, in
order to keep using a SCALE-encoded `GenesisConfig` (avoiding `serde_json`).
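
As a rough sketch of that approach (the `GenesisConfig` type and its field
below are hypothetical stand-ins, not the runtime's actual definitions), the
genesis data is SCALE-encoded with `parity-scale-codec` rather than serialized
through `serde_json`:

```rust
use parity_scale_codec::{Encode, Decode};

// Hypothetical stand-in for the runtime's actual genesis configuration.
#[derive(Encode, Decode)]
struct GenesisConfig {
  validators: Vec<[u8; 32]>,
}

// SCALE-encode the genesis config for direct inclusion in the chain spec,
// avoiding any `serde_json` round-trip.
fn encoded_genesis(config: &GenesisConfig) -> Vec<u8> {
  config.encode()
}
```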

* CI fixes

* Add `/usr/local/opt/llvm/lib` to paths on macOS hosts

* Attempt to use `LD_LIBRARY_PATH` in macOS GitHub CI

* Use `libp2p 0.56` in `serai-node`

* Correct Windows build dependencies

* Correct `llvm/lib` path on macOS

* Handle macOS 13 and 14 having different Homebrew paths

* Use `sw_vers` instead of `uname` on macOS

Yields the macOS version instead of the kernel's version.

* Replace hard-coded path with the intended env variable to fix macOS 13

* Add `libclang-dev` as dependency to the Debian Dockerfile

* Set the `CODE` storage slot

* Update to a version of substrate without `wasmtimer`

Turns out `wasmtimer` is WASM-only. This should restore the node's functioning
in non-WASM environments.

* Restore `clang` as a dependency in the Debian Dockerfile, as we require a C++ compiler

* Move from Debian bookworm to trixie

* Restore `chain_getBlockBin` to the RPC

* Always generate a new key for the P2P network

* Mention every account on-chain before they publish a transaction

`CheckNonce` requires accounts to have a provider in order for their nonce to
even be considered. This shims that by claiming every account which signs a
transaction has a provider at the start of the block.

The actual execution could presumably diverge between block building (which
sets the provider before each transaction) and execution (which sets the
providers at the start of the block). It doesn't diverge in our current
configuration and it won't be propagated to `next` (which doesn't use
`CheckNonce`).

Also uses explicit indexes for the `serai_abi::{Call, Event}` `enum`s.
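
For illustration, explicit indexing pins each variant's SCALE discriminant so
reordering or inserting variants can't silently change the ABI's encoding
(variant names here are made up; the real ones live in `serai_abi`):

```rust
use parity_scale_codec::{Encode, Decode};

// Illustrative variants only; `serai_abi::Call` defines the real ones.
#[derive(Encode, Decode)]
enum Call {
  #[codec(index = 0)]
  TransferCoins { to: [u8; 32], amount: u64 },
  #[codec(index = 1)]
  SetKeys { keys: Vec<u8> },
}
```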

* Adopt `patch-polkadot-sdk` with fixed peering

* Manually insert the authority discovery key into the keystore

I did try pulling in `pallet-authority-discovery` for this, updating
`SessionKeys`, but that was insufficient for whatever reason.

* Update to latest `substrate-wasm-builder`

* Fix timeline for incrementing providers

e1671dd71b incremented the providers for every
single transaction's sender before execution, noting the solution was fragile
but it worked for us at this time. It did not work for us at this time.

The new solution replaces `inc_providers` with direct access to the `Account`
`StorageMap` to increment the providers, achieving the desired goal, _without_
emitting an event (which is ordered, and the disparate order between building
and execution was causing mismatches of the state root).

This solution is also fragile and may also be insufficient. None of this code
exists anymore on `next` however. It just has to work sufficiently for now.
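
A minimal sketch of that direct-storage approach, assuming a standard
`frame_system` runtime (this is an approximation, not the commit's exact
code):

```rust
use frame_system::Config;

// Bump `who`'s provider count by mutating the `Account` StorageMap directly.
// Unlike `frame_system::Pallet::inc_providers`, this deposits no event, so it
// cannot cause the event-order mismatches described above.
fn force_provider<T: Config>(who: &T::AccountId) {
  frame_system::Account::<T>::mutate(who, |account| {
    account.providers = account.providers.saturating_add(1);
  });
}
```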

* clippy
Author: Luke Parker
Date: 2025-10-05 10:58:08 -04:00
Committed by: GitHub
Parent: 55ed33d2d1
Commit: 7d49366373

107 changed files with 4253 additions and 3623 deletions


@@ -8,12 +8,11 @@ use sp_consensus_babe::{SlotDuration, inherents::InherentDataProvider as BabeInh
 use sp_io::SubstrateHostFunctions;
 use sc_executor::{sp_wasm_interface::ExtendedHostFunctions, WasmExecutor};
-use sc_network_common::sync::warp::WarpSyncParams;
-use sc_network::{Event, NetworkEventStream};
+use sc_network::{Event, NetworkEventStream, NetworkBackend};
 use sc_service::{error::Error as ServiceError, Configuration, TaskManager, TFullClient};
 use sc_transaction_pool_api::OffchainTransactionPoolFactory;
-use sc_client_api::{BlockBackend, Backend};
+use sc_client_api::BlockBackend;
 use sc_telemetry::{Telemetry, TelemetryWorker};
@@ -40,8 +39,8 @@ type PartialComponents = sc_service::PartialComponents<
   FullClient,
   FullBackend,
   SelectChain,
-  sc_consensus::DefaultImportQueue<Block, FullClient>,
-  sc_transaction_pool::FullPool<Block, FullClient>,
+  sc_consensus::DefaultImportQueue<Block>,
+  sc_transaction_pool::TransactionPoolWrapper<Block, FullClient>,
   (
     BabeBlockImport,
     sc_consensus_babe::BabeLink<Block>,
@@ -74,11 +73,11 @@ pub fn new_partial(
   #[allow(deprecated)]
   let executor = Executor::new(
-    config.wasm_method,
-    config.default_heap_pages,
-    config.max_runtime_instances,
+    config.executor.wasm_method,
+    config.executor.default_heap_pages,
+    config.executor.max_runtime_instances,
     None,
-    config.runtime_cache_size,
+    config.executor.runtime_cache_size,
   );
   let (client, backend, keystore_container, task_manager) =
@@ -103,16 +102,19 @@ pub fn new_partial(
   let select_chain = sc_consensus::LongestChain::new(backend.clone());
-  let transaction_pool = sc_transaction_pool::BasicPool::new_full(
-    config.transaction_pool.clone(),
-    config.role.is_authority().into(),
-    config.prometheus_registry(),
+  let transaction_pool = sc_transaction_pool::Builder::new(
     task_manager.spawn_essential_handle(),
     client.clone(),
-  );
+    config.role.is_authority().into(),
+  )
+  .with_options(config.transaction_pool.clone())
+  .with_prometheus(config.prometheus_registry())
+  .build();
+  let transaction_pool = Arc::new(transaction_pool);
   let (grandpa_block_import, grandpa_link) = grandpa::block_import(
     client.clone(),
+    u32::MAX,
     &client,
     select_chain.clone(),
     telemetry.as_ref().map(Telemetry::handle),
@@ -181,22 +183,26 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
   config.network.listen_addresses =
     vec!["/ip4/0.0.0.0/tcp/30333".parse().unwrap(), "/ip6/::/tcp/30333".parse().unwrap()];
-  let mut net_config = sc_network::config::FullNetworkConfiguration::new(&config.network);
+  type N = sc_network::service::NetworkWorker<Block, <Block as sp_runtime::traits::Block>::Hash>;
+  let mut net_config = sc_network::config::FullNetworkConfiguration::<_, _, N>::new(
+    &config.network,
+    config.prometheus_registry().cloned(),
+  );
+  let metrics = N::register_notification_metrics(config.prometheus_registry());
   let grandpa_protocol_name =
     grandpa::protocol_standard_name(&client.block_hash(0).unwrap().unwrap(), &config.chain_spec);
-  net_config.add_notification_protocol(sc_consensus_grandpa::grandpa_peers_set_config(
-    grandpa_protocol_name.clone(),
-  ));
+  let (grandpa_protocol_config, grandpa_notification_service) =
+    sc_consensus_grandpa::grandpa_peers_set_config::<Block, N>(
+      grandpa_protocol_name.clone(),
+      metrics.clone(),
+      net_config.peer_store_handle(),
+    );
+  net_config.add_notification_protocol(grandpa_protocol_config);
   let publish_non_global_ips = config.network.allow_non_globals_in_dht;
-  let warp_sync = Arc::new(grandpa::warp_proof::NetworkProvider::new(
-    backend.clone(),
-    grandpa_link.shared_authority_set().clone(),
-    vec![],
-  ));
-  let (network, system_rpc_tx, tx_handler_controller, network_starter, sync_service) =
+  let (network, system_rpc_tx, tx_handler_controller, sync_service) =
     sc_service::build_network(sc_service::BuildNetworkParams {
       config: &config,
       net_config,
@@ -205,7 +211,9 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
       spawn_handle: task_manager.spawn_handle(),
       import_queue,
       block_announce_validator_builder: None,
-      warp_sync_params: Some(WarpSyncParams::WithProvider(warp_sync)),
+      metrics,
+      block_relay: None,
+      warp_sync_config: None,
     })?;
   task_manager.spawn_handle().spawn("bootnodes", "bootnodes", {
@@ -217,7 +225,15 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
     // While the PeerIds *should* be known in advance and hardcoded, that data wasn't collected in
     // time and this fine for a testnet
     let bootnodes = || async {
-      use libp2p::{Transport as TransportTrait, tcp::tokio::Transport, noise::Config};
+      use libp2p::{
+        core::{
+          Endpoint,
+          transport::{PortUse, DialOpts},
+        },
+        Transport as TransportTrait,
+        tcp::tokio::Transport,
+        noise::Config,
+      };
       let bootnode_multiaddrs = crate::chain_spec::bootnode_multiaddrs(&id);
@@ -231,9 +247,17 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
         .upgrade(libp2p::core::upgrade::Version::V1)
         .authenticate(noise)
         .multiplex(libp2p::yamux::Config::default());
-      let Ok(transport) = transport.dial(multiaddr.clone()) else { None? };
+      let Ok(transport) = transport.dial(
+        multiaddr.clone(),
+        DialOpts { role: Endpoint::Dialer, port_use: PortUse::Reuse },
+      ) else {
+        None?
+      };
       let Ok((peer_id, _)) = transport.await else { None? };
-      Some(sc_network::config::MultiaddrWithPeerId { multiaddr, peer_id })
+      Some(sc_network::config::MultiaddrWithPeerId {
+        multiaddr: multiaddr.into(),
+        peer_id: peer_id.into(),
+      })
     }),
   ));
   }
@@ -261,26 +285,12 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
     }
   });
-  if config.offchain_worker.enabled {
-    task_manager.spawn_handle().spawn(
-      "offchain-workers-runner",
-      "offchain-worker",
-      sc_offchain::OffchainWorkers::new(sc_offchain::OffchainWorkerOptions {
-        runtime_api_provider: client.clone(),
-        is_validator: config.role.is_authority(),
-        keystore: Some(keystore_container.clone()),
-        offchain_db: backend.offchain_storage(),
-        transaction_pool: Some(OffchainTransactionPoolFactory::new(transaction_pool.clone())),
-        network_provider: network.clone(),
-        enable_http_requests: true,
-        custom_extensions: |_| vec![],
-      })
-      .run(client.clone(), task_manager.spawn_handle()),
-    );
-  }
-  let role = config.role.clone();
+  let role = config.role;
   let keystore = keystore_container;
+  if let Some(seed) = config.dev_key_seed.as_ref() {
+    let _ =
+      keystore.sr25519_generate_new(sp_core::crypto::key_types::AUTHORITY_DISCOVERY, Some(seed));
+  }
   let prometheus_registry = config.prometheus_registry().cloned();
   // TODO: Ensure we're considered as an authority is a validator of an external network
@@ -294,7 +304,7 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
         worker
       },
       client.clone(),
-      network.clone(),
+      Arc::new(network.clone()),
       Box::pin(network.event_stream("authority-discovery").filter_map(|e| async move {
         match e {
           Event::Dht(e) => Some(e),
@@ -303,6 +313,7 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
       })),
       sc_authority_discovery::Role::PublishAndDiscover(keystore.clone()),
       prometheus_registry.clone(),
+      task_manager.spawn_handle(),
     );
     task_manager.spawn_handle().spawn(
       "authority-discovery-worker",
@@ -320,12 +331,11 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
     let client = client.clone();
     let pool = transaction_pool.clone();
-    Box::new(move |deny_unsafe, _| {
+    Box::new(move |_| {
       crate::rpc::create_full(crate::rpc::FullDeps {
         id: id.clone(),
         client: client.clone(),
         pool: pool.clone(),
-        deny_unsafe,
         authority_discovery: authority_discovery.clone(),
       })
       .map_err(Into::into)
@@ -392,7 +402,7 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
       grandpa::run_grandpa_voter(grandpa::GrandpaParams {
         config: grandpa::Config {
           gossip_duration: std::time::Duration::from_millis(333),
-          justification_period: 512,
+          justification_generation_period: 512,
           name: Some(name),
           observer_enabled: false,
           keystore: if role.is_authority() { Some(keystore) } else { None },
@@ -408,10 +418,10 @@ pub fn new_full(mut config: Configuration) -> Result<TaskManager, ServiceError>
         prometheus_registry,
         shared_voter_state,
         offchain_tx_pool_factory: OffchainTransactionPoolFactory::new(transaction_pool),
+        notification_service: grandpa_notification_service,
       })?,
     );
   }
-  network_starter.start_network();
   Ok(task_manager)
 }