feat(polka-storage-provider): submit PoSt on the pipeline (#617)
th7nder authored Dec 4, 2024
1 parent d562a49 commit 8728c61
Showing 32 changed files with 549 additions and 131 deletions.
31 changes: 22 additions & 9 deletions docs/src/getting-started/storage-provider.md
@@ -7,11 +7,10 @@
Setting up the Storage Provider isn't rocket science, but it isn't automatic either!
In this guide, we'll cover how to get up and running with the Storage Provider.

## Generating the PoRep Parameters

First and foremost, to allow the Storage Provider to generate [PoRep](https://docs.filecoin.io/basics/the-blockchain/proofs#proof-of-replication-porep) proofs,
we need to first generate their parameters, we do that with the following command:
## Generating the Proofs Parameters

To allow the Storage Provider to generate [PoRep](https://docs.filecoin.io/basics/the-blockchain/proofs#proof-of-replication-porep)
and [PoSt](https://docs.filecoin.io/basics/the-blockchain/proofs#proof-of-spacetime-post) proofs, we first generate the PoRep parameters:
```bash
$ polka-storage-provider-client proofs porep-params
Generating params for 2KiB sectors... It can take a couple of minutes ⌛
@@ -20,12 +19,24 @@ Generated parameters:
/home/user/polka-storage/2KiB.porep.vk
/home/user/polka-storage/2KiB.porep.vk.scale
```
Then, we generate the PoSt parameters:
```bash
$ polka-storage-provider-client proofs post-params
Generating PoSt params for 2KiB sectors... It can take a few secs ⌛
Generated parameters:
/home/user/polka-storage/2KiB.post.params
/home/user/polka-storage/2KiB.post.vk
/home/user/polka-storage/2KiB.post.vk.scale
```

As advertised, the command has generated the following files:
As advertised, the commands have generated the following files:

* `2KiB.porep.params` — The PoRep parameters
* `2KiB.porep.vk` — The verifying key
* `2KiB.porep.vk.scale` — The verifying key, encoded in SCALE format
* `2KiB.porep.vk` — The PoRep verifying key
* `2KiB.porep.vk.scale` — The PoRep verifying key, encoded in SCALE format
* `2KiB.post.params` — The PoSt parameters
* `2KiB.post.vk` — The PoSt verifying key
* `2KiB.post.vk.scale` — The PoSt verifying key, encoded in SCALE format

## Registering the Storage Provider

@@ -75,6 +86,7 @@ polka-storage-provider-server \
--seal-proof 2KiB \
--post-proof 2KiB \
--porep-parameters <POREP-PARAMS> \
--post-parameters <POST-PARAMS> \
--X-key <KEY>
```
@@ -84,13 +96,14 @@
--seal-proof 2KiB \
--post-proof 2KiB \
--porep-parameters "2KiB.porep.params" \
--post-parameters "2KiB.post.params" \
--sr25519-key "//Charlie"
```

Note that currently, `--seal-proof` and `--post-proof` only support `2KiB`.

`<POREP-PARAMS>` is the resulting `*.porep.params` file from the [first steps](#generating-the-porep-parameters),
in this case, `2KiB.porep.params`.
`<POREP-PARAMS>`/`<POST-PARAMS>` is the resulting `*.porep.params`/`*.post.params` file from the [first steps](#generating-the-proofs-parameters),
in this case, `2KiB.porep.params`/`2KiB.post.params`.

When run like this, the server will assume a random directory for the database and the storage; however,
you can change that through the `--database-directory` and `--storage-directory`, respectively,
3 changes: 2 additions & 1 deletion examples/rpc_publish.sh
@@ -1,4 +1,5 @@
#!/usr/bin/env bash
set -e

if [ "$#" -ne 1 ]; then
echo "$0: input file required"
@@ -58,7 +59,7 @@ DEAL_JSON=$(
)
SIGNED_DEAL_JSON="$(RUST_LOG=error target/release/polka-storage-provider-client sign-deal --sr25519-key "$CLIENT" "$DEAL_JSON")"

(RUST_LOG=debug target/release/polka-storage-provider-server --sr25519-key "$PROVIDER" --seal-proof "2KiB" --post-proof "2KiB" --porep-parameters 2KiB.porep.params) &
(RUST_LOG=debug target/release/polka-storage-provider-server --sr25519-key "$PROVIDER" --seal-proof "2KiB" --post-proof "2KiB" --porep-parameters 2KiB.porep.params --post-parameters 2KiB.post.params) &
sleep 5 # gives time for the server to start

DEAL_CID="$(RUST_LOG=error target/release/polka-storage-provider-client propose-deal "$DEAL_JSON")"
5 changes: 4 additions & 1 deletion lib/polka-storage-proofs/src/lib.rs
@@ -3,11 +3,14 @@
#![cfg_attr(not(feature = "std"), no_std)]

mod groth16;
pub mod post;

pub mod types;

pub use groth16::*;

#[cfg(feature = "std")]
pub mod post;

#[cfg(feature = "std")]
pub mod porep;

8 changes: 3 additions & 5 deletions lib/polka-storage-proofs/src/post/mod.rs
@@ -1,5 +1,3 @@
#![cfg(feature = "std")]

use std::{
collections::BTreeMap,
path::{Path, PathBuf},
@@ -23,6 +21,8 @@ use crate::{
types::{Commitment, ProverId, Ticket},
};

pub type PoStParameters = groth16::MappedParameters<Bls12>;

/// Generates parameters for proving and verifying PoSt.
/// It should be called once and then reused across provers and the verifier.
The verifying key is only needed for verification (no_std); the rest of the params are required for proving (std).
@@ -44,9 +44,7 @@ pub fn generate_random_groth16_parameters(

/// Loads Groth16 parameters from the specified path.
Parameters need to have been serialized with [`groth16::Parameters::<Bls12>::write_bytes`].
pub fn load_groth16_parameters(
path: std::path::PathBuf,
) -> Result<groth16::MappedParameters<Bls12>, PoStError> {
pub fn load_groth16_parameters(path: std::path::PathBuf) -> Result<PoStParameters, PoStError> {
groth16::Parameters::<Bls12>::build_mapped_parameters(path.clone(), false)
.map_err(|e| PoStError::FailedToLoadGrothParameters(path, e))
}
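The hunk above introduces a `PoStParameters` alias for `groth16::MappedParameters<Bls12>` and flattens the loader's signature onto one line. A minimal, std-only sketch of the same alias-plus-loader pattern — the `MappedParameters` stand-in, the error payload, and the `fs::metadata` probe are all hypothetical, not the crate's real types:

```rust
use std::path::PathBuf;

// Hypothetical stand-in for groth16::MappedParameters<Bls12>.
pub struct MappedParameters;

// The alias from the diff: one name for the proving-side parameters.
pub type PoStParameters = MappedParameters;

#[derive(Debug)]
pub enum PoStError {
    // Carries the offending path so the caller can report *which* file failed.
    FailedToLoadGrothParameters(PathBuf, String),
}

/// Sketch of `load_groth16_parameters`: probe the path and map any low-level
/// failure into a path-carrying `PoStError`, mirroring the diff's shape.
pub fn load_groth16_parameters(path: PathBuf) -> Result<PoStParameters, PoStError> {
    std::fs::metadata(&path)
        .map(|_| MappedParameters)
        .map_err(|e| PoStError::FailedToLoadGrothParameters(path, e.to_string()))
}
```

Keeping the path inside the error variant is the point of the `map_err` in the real loader: a missing `2KiB.post.params` file then produces an actionable message instead of a bare I/O error.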
20 changes: 8 additions & 12 deletions maat/tests/real_world.rs
@@ -6,19 +6,17 @@ use primitives::sector::SectorSize;
use storagext::{
clients::ProofsClientExt,
runtime::runtime_types::{
bounded_collections::bounded_vec::BoundedVec,
pallet_market::pallet::DealState,
pallet_storage_provider::{proofs::SubmitWindowedPoStParams, sector::ProveCommitResult},
pallet_market::pallet::DealState, pallet_storage_provider::sector::ProveCommitResult,
},
types::{
market::DealProposal,
proofs::VerifyingKey,
storage_provider::{
FaultDeclaration, ProveCommitSector, RecoveryDeclaration, SectorPreCommitInfo,
SubmitWindowedPoStParams,
},
},
IntoBoundedByteVec, MarketClientExt, PolkaStorageConfig, StorageProviderClientExt,
SystemClientExt,
MarketClientExt, PolkaStorageConfig, StorageProviderClientExt, SystemClientExt,
};
use subxt::ext::sp_core::sr25519::Pair as Sr25519Pair;
use zombienet_sdk::NetworkConfigExt;
@@ -326,13 +324,11 @@
charlie,
SubmitWindowedPoStParams {
deadline: 0,
partitions: BoundedVec(vec![0]),
proof:
storagext::runtime::runtime_types::pallet_storage_provider::proofs::PoStProof {
post_proof:
primitives::proofs::RegisteredPoStProof::StackedDRGWindow2KiBV1P1,
proof_bytes: "beef".to_string().into_bounded_byte_vec(),
},
partitions: vec![0],
proof: storagext::types::storage_provider::PoStProof {
post_proof: primitives::proofs::RegisteredPoStProof::StackedDRGWindow2KiBV1P1,
proof_bytes: "beef".as_bytes().to_vec(),
},
},
true,
)
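The `real_world.rs` hunk swaps the runtime's `BoundedVec` wrappers for plain `Vec`s when building `SubmitWindowedPoStParams`. A sketch of the simplified shape the test now constructs — these struct definitions are hypothetical stand-ins for the `storagext` types, trimmed down for illustration:

```rust
// Hypothetical stand-ins for storagext::types::storage_provider types.
#[derive(Debug, PartialEq)]
pub enum RegisteredPoStProof {
    StackedDRGWindow2KiBV1P1,
}

#[derive(Debug, PartialEq)]
pub struct PoStProof {
    pub post_proof: RegisteredPoStProof,
    pub proof_bytes: Vec<u8>,
}

#[derive(Debug, PartialEq)]
pub struct SubmitWindowedPoStParams {
    pub deadline: u64,
    // Plain Vec instead of BoundedVec(vec![0]) — no wrapper constructor needed.
    pub partitions: Vec<u32>,
    pub proof: PoStProof,
}

/// Builds the same params the maat test submits for deadline 0, partition 0.
pub fn example_params() -> SubmitWindowedPoStParams {
    SubmitWindowedPoStParams {
        deadline: 0,
        partitions: vec![0],
        proof: PoStProof {
            post_proof: RegisteredPoStProof::StackedDRGWindow2KiBV1P1,
            // Plain bytes instead of into_bounded_byte_vec().
            proof_bytes: "beef".as_bytes().to_vec(),
        },
    }
}
```

The presumable design choice: client-facing types accept unbounded `Vec`s and defer bounds enforcement to the runtime conversion, so test code no longer imports `BoundedVec` or `IntoBoundedByteVec`.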
18 changes: 8 additions & 10 deletions pallets/storage-provider/src/deadline.rs
@@ -4,14 +4,14 @@ use alloc::{collections::BTreeSet, vec::Vec};

use codec::{Decode, Encode};
use frame_support::{pallet_prelude::*, sp_runtime::BoundedBTreeMap};
use primitives::sector::SectorNumber;
use primitives::{sector::SectorNumber, PartitionNumber, MAX_PARTITIONS_PER_DEADLINE, MAX_SECTORS};
use scale_info::{prelude::cmp, TypeInfo};

use crate::{
error::GeneralPalletError,
expiration_queue::ExpirationSet,
partition::{Partition, PartitionNumber, TerminationResult, MAX_PARTITIONS_PER_DEADLINE},
sector::{SectorOnChainInfo, MAX_SECTORS},
partition::{Partition, TerminationResult},
sector::SectorOnChainInfo,
sector_map::PartitionMap,
};

@@ -791,16 +791,14 @@ mod tests {
use alloc::collections::{BTreeMap, BTreeSet};

use frame_support::{pallet_prelude::*, sp_runtime::BoundedBTreeSet};
use primitives::{sector::SectorNumber, MAX_SECTORS, MAX_TERMINATIONS_PER_CALL};
use primitives::{
sector::SectorNumber, PartitionNumber, MAX_SECTORS, MAX_TERMINATIONS_PER_CALL,
};
use rstest::rstest;

use crate::{
deadline::Deadline,
error::GeneralPalletError,
partition::{PartitionNumber, TerminationResult},
sector::SectorOnChainInfo,
sector_map::PartitionMap,
tests::sector_set,
deadline::Deadline, error::GeneralPalletError, partition::TerminationResult,
sector::SectorOnChainInfo, sector_map::PartitionMap, tests::sector_set,
};

const PARTITION_SIZE: u64 = 4;
7 changes: 2 additions & 5 deletions pallets/storage-provider/src/expiration_queue.rs
@@ -6,15 +6,12 @@ use alloc::{
use core::ops::Not;

use codec::{Decode, Encode};
use primitives::sector::SectorNumber;
use primitives::{sector::SectorNumber, MAX_SECTORS};
use scale_info::TypeInfo;
use sp_core::{ConstU32, RuntimeDebug};
use sp_runtime::{BoundedBTreeMap, BoundedBTreeSet};

use crate::{
error::GeneralPalletError,
sector::{SectorOnChainInfo, MAX_SECTORS},
};
use crate::{error::GeneralPalletError, sector::SectorOnChainInfo};

const LOG_TARGET: &'static str = "runtime::storage_provider::expiration_queue";

4 changes: 2 additions & 2 deletions pallets/storage-provider/src/fault.rs
@@ -1,7 +1,7 @@
use frame_support::{pallet_prelude::*, sp_runtime::BoundedBTreeSet};
use primitives::{sector::SectorNumber, MAX_TERMINATIONS_PER_CALL};
use primitives::{sector::SectorNumber, PartitionNumber, MAX_TERMINATIONS_PER_CALL};

use crate::{pallet::DECLARATIONS_MAX, partition::PartitionNumber};
use crate::pallet::DECLARATIONS_MAX;

/// Used by the storage provider to indicate a fault.
#[derive(Clone, RuntimeDebug, Decode, Encode, PartialEq, TypeInfo)]
80 changes: 73 additions & 7 deletions pallets/storage-provider/src/lib.rs
@@ -58,12 +58,14 @@ pub mod pallet {
use primitives::{
commitment::{CommD, CommR, Commitment},
pallets::{
CurrentDeadline, Market, ProofVerification, Randomness, StorageProviderValidation,
DeadlineInfo as ExternalDeadlineInfo, Market, ProofVerification, Randomness,
StorageProviderValidation,
},
proofs::{derive_prover_id, PublicReplicaInfo, RegisteredPoStProof},
randomness::{draw_randomness, DomainSeparationTag},
sector::SectorNumber,
MAX_SEAL_PROOF_BYTES, MAX_SECTORS_PER_CALL,
PartitionNumber, MAX_PARTITIONS_PER_DEADLINE, MAX_SEAL_PROOF_BYTES, MAX_SECTORS,
MAX_SECTORS_PER_CALL,
};
use scale_info::TypeInfo;
use sp_arithmetic::traits::Zero;
@@ -74,12 +76,10 @@ pub mod pallet {
DeclareFaultsParams, DeclareFaultsRecoveredParams, FaultDeclaration,
RecoveryDeclaration,
},
partition::{PartitionNumber, MAX_PARTITIONS_PER_DEADLINE},
proofs::{assign_proving_period_offset, SubmitWindowedPoStParams},
sector::{
ProveCommitResult, ProveCommitSector, SectorOnChainInfo, SectorPreCommitInfo,
SectorPreCommitOnChainInfo, TerminateSectorsParams, TerminationDeclaration,
MAX_SECTORS,
},
sector_map::DeadlineSectorMap,
storage_provider::{
@@ -1072,10 +1072,10 @@ pub mod pallet {
///
/// If there is no Storage Provider of given AccountId returns [`Option::None`].
/// May exceptionally return [`Option::None`] when
/// conversion between BlockNumbers fails, but technically should not ever happen.
/// conversion between BlockNumbers fails, but technically should never happen.
pub fn current_deadline(
storage_provider: &T::AccountId,
) -> Option<CurrentDeadline<BlockNumberFor<T>>> {
) -> Option<ExternalDeadlineInfo<BlockNumberFor<T>>> {
let sp = StorageProviders::<T>::try_get(storage_provider).ok()?;
let current_block = <frame_system::Pallet<T>>::block_number();

@@ -1090,14 +1090,80 @@
)
.ok()?;

Some(CurrentDeadline {
Some(ExternalDeadlineInfo {
deadline_index: deadline.idx,
open: deadline.is_open(),
challenge_block: deadline.challenge,
start: deadline.open_at,
})
}

Gets the deadline of the storage provider at the given index.
///
/// If there is no Storage Provider of given AccountId returns [`Option::None`].
/// May exceptionally return [`Option::None`] when
/// conversion between BlockNumbers fails, but technically should never happen.
pub fn deadline_info(
storage_provider: &T::AccountId,
deadline_index: u64,
) -> Option<ExternalDeadlineInfo<BlockNumberFor<T>>> {
let sp = StorageProviders::<T>::try_get(storage_provider).ok()?;
let current_block = <frame_system::Pallet<T>>::block_number();

let deadline = DeadlineInfo::new(
current_block,
sp.proving_period_start,
deadline_index,
T::WPoStPeriodDeadlines::get(),
T::WPoStProvingPeriod::get(),
T::WPoStChallengeWindow::get(),
T::WPoStChallengeLookBack::get(),
T::FaultDeclarationCutoff::get(),
)
.ok()?;

Some(ExternalDeadlineInfo {
deadline_index: deadline.idx,
open: deadline.is_open(),
challenge_block: deadline.challenge,
start: deadline.open_at,
})
}

Returns a snapshot of the deadline, i.e. which sectors are assigned to which partitions.
Until the deadline opens (at deadline_start - WPoStChallengeWindow), this snapshot can still change!
pub fn deadline_state(
storage_provider: &T::AccountId,
deadline_index: u64,
) -> Option<primitives::pallets::DeadlineState> {
let sp = StorageProviders::<T>::try_get(storage_provider).ok()?;
let deadline_index: usize = deadline_index.try_into().ok()?;

if deadline_index >= sp.deadlines.due.len() {
log::warn!(
"tried to get non existing deadline: {}/{}",
deadline_index,
sp.deadlines.due.len()
);
return None;
}

let deadline = &sp.deadlines.due[deadline_index];
let mut partitions = BoundedBTreeMap::new();
for (partition_number, partition) in deadline.partitions.iter() {
partitions
.try_insert(
*partition_number,
primitives::pallets::PartitionState {
sectors: partition.sectors.clone(),
},
)
.ok()?;
}

Some(primitives::pallets::DeadlineState { partitions })
}

fn validate_expiration(
curr_block: BlockNumberFor<T>,
activation: BlockNumberFor<T>,
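The new `deadline_state` extrinsic-side query above follows a common pallet pattern: bounds-check the caller-supplied index, then copy each partition's sector set into a read-only snapshot type. A substrate-free sketch of that pattern using std collections — the types are simplified stand-ins, and the real code additionally logs a warning on an out-of-range index, which is omitted here:

```rust
use std::collections::BTreeMap;

// Simplified, substrate-free stand-ins for the pallet's bounded types.
pub struct Partition {
    pub sectors: Vec<u64>,
}

pub struct Deadline {
    pub partitions: BTreeMap<u32, Partition>,
}

pub struct PartitionState {
    pub sectors: Vec<u64>,
}

pub struct DeadlineState {
    pub partitions: BTreeMap<u32, PartitionState>,
}

/// Sketch of `deadline_state`: reject out-of-range indices with `None`,
/// otherwise snapshot each partition's sector assignment.
pub fn deadline_state(due: &[Deadline], deadline_index: usize) -> Option<DeadlineState> {
    // `get` performs the bounds check the pallet does explicitly before indexing.
    let deadline = due.get(deadline_index)?;
    let partitions = deadline
        .partitions
        .iter()
        .map(|(number, partition)| {
            (*number, PartitionState { sectors: partition.sectors.clone() })
        })
        .collect();
    Some(DeadlineState { partitions })
}
```

Returning `Option` rather than panicking matches the pallet's convention for queries on possibly-absent storage providers and indices.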
8 changes: 2 additions & 6 deletions pallets/storage-provider/src/partition.rs
@@ -8,20 +8,16 @@ use core::ops::AddAssign;

use codec::{Decode, Encode};
use frame_support::{pallet_prelude::*, sp_runtime::BoundedBTreeSet};
use primitives::sector::SectorNumber;
use primitives::{sector::SectorNumber, MAX_SECTORS};
use scale_info::TypeInfo;

use crate::{
error::GeneralPalletError,
expiration_queue::{ExpirationQueue, ExpirationSet},
sector::{SectorOnChainInfo, MAX_SECTORS},
sector::SectorOnChainInfo,
};

/// Max amount of partitions per deadline.
/// ref: <https://github.com/filecoin-project/builtin-actors/blob/82d02e58f9ef456aeaf2a6c737562ac97b22b244/runtime/src/runtime/policy.rs#L283>
pub const MAX_PARTITIONS_PER_DEADLINE: u32 = 3000;
const LOG_TARGET: &'static str = "runtime::storage_provider::partition";
pub type PartitionNumber = u32;

#[derive(Clone, RuntimeDebug, Decode, Encode, PartialEq, TypeInfo)]
pub struct Partition<BlockNumber>
6 changes: 3 additions & 3 deletions pallets/storage-provider/src/proofs.rs
@@ -3,12 +3,12 @@ use frame_support::{
pallet_prelude::{ConstU32, RuntimeDebug},
sp_runtime::BoundedVec,
};
use primitives::{proofs::RegisteredPoStProof, MAX_POST_PROOF_BYTES};
use primitives::{
proofs::RegisteredPoStProof, PartitionNumber, MAX_PARTITIONS_PER_DEADLINE, MAX_POST_PROOF_BYTES,
};
use scale_info::TypeInfo;
use sp_core::blake2_64;

use crate::partition::{PartitionNumber, MAX_PARTITIONS_PER_DEADLINE};

/// Proof of Spacetime data stored on chain.
#[derive(RuntimeDebug, Decode, Encode, TypeInfo, PartialEq, Eq, Clone)]
pub struct PoStProof {