From 0ce3b677c48663aef09a9b854b3d0fc797433ba2 Mon Sep 17 00:00:00 2001
From: dtuzi <114731726+dtuzi@users.noreply.github.com>
Date: Fri, 22 Nov 2024 10:50:57 +0200
Subject: [PATCH] docs: cleanup (#598)
---
README.md | 26 ++++-------
docs/src/SUMMARY.md | 1 -
docs/src/background.md | 93 ----------------------------------------
docs/src/introduction.md | 14 +++---
4 files changed, 15 insertions(+), 119 deletions(-)
delete mode 100644 docs/src/background.md
diff --git a/README.md b/README.md
index 8c080ff57..654c74f0e 100644
--- a/README.md
+++ b/README.md
@@ -2,28 +2,18 @@
Welcome to the Polka Storage project repo!
-This project aims to deliver a Polkadot-native system parachain for data storage.
+This project aims to build a native storage network for Polkadot.
-You can find our in-depth documentation at .
+Check out the book at .
-* If you're looking to test the project, check out [Getting Started guide](https://eigerco.github.io/polka-storage/getting-started/index.html).
-* If you're looking to get your hands dirty and help with development, take a look at [`DEVELOPMENT.md`](./DEVELOPMENT.md) to get started.
+- If you'd like to contribute, take a look at [`DEVELOPMENT.md`](./DEVELOPMENT.md) to get started.
-More information available about the project's genesis in:
+- Weekly dev updates: [Polkadot Forum thread](https://forum.polkadot.network/t/polkadot-native-storage-updates/7021)
-- OpenGov Referendum - Part 1 —
-- OpenGov Referendum - Part 2 —
-- Research Report —
-- Polkadot Forum News Post —
----
+## About Eiger
-
-
-
-
-
+We are engineers. We contribute to various ecosystems. On Polkadot, [we integrated Move into Substrate](https://x.com/Polkadot/status/1816109501394637034), allowing builders to integrate the Move VM and execute Move smart contracts. We are building Polka Storage (this project) and [Strawberry](https://github.com/eigerco/strawberry), our JAM client, the first serious project to go up for [review](https://github.com/w3f/jam-milestone-delivery/pull/6) for reaching Milestone 1.
+
+Contact us at hello@eiger.co
\ No newline at end of file
diff --git a/docs/src/SUMMARY.md b/docs/src/SUMMARY.md
index 29800303e..999cad574 100644
--- a/docs/src/SUMMARY.md
+++ b/docs/src/SUMMARY.md
@@ -1,7 +1,6 @@
# Summary
[Introduction](./introduction.md)
-[Background](./background.md)
# Guides
diff --git a/docs/src/background.md b/docs/src/background.md
deleted file mode 100644
index e31c21331..000000000
--- a/docs/src/background.md
+++ /dev/null
@@ -1,93 +0,0 @@
-# Background
-
-This page provides a quick background behind our approach to the storage challenge.
-
-## Why another storage system?
-
-In short, to be fully native to Polkadot (and as a bonus — no trusted execution environment is required).
-
-For a longer explanation, our vision is fully embedded in the Polkadot vision of the ubiquitous supercomputer; and as you know, a computer requires storage.
-
-Achieving storage native to Polkadot means using DOT as *the* token for our solution, instead of creating a parallel economy relying on yet another token, further fragmenting the space.
-
-## Why not X?
-
-We've decided to build our storage solution based on the ideas behind Filecoin, not because it's different from the other available networks in Polkadot, but rather because Filecoin does a lot of things right, and we want to bring them to Polkadot.
-
-For those unfamiliar with Filecoin, it is a blockchain network that provides a *file* storage marketplace. In a nutshell, people provide storage space (a storage provider), creating several offers (distinguished by price, reputation, etc) and clients can pick one of these storage providers to store their data for them.
-
-This raises the question — *How does the network know that the storage provider has my data?*
-
-Therein lies the crux of Filecoin! You can solve this in multiple ways, each with different levels of flexibility, but we will outline two.
-
-### Merkle Trees and Proofs
-
-> If you’re not familiar with Merkle trees, you can read the explainer from BitPanda —
->
-
-Consider that you build a Merkle tree out of a file, the root of the Merkle tree is derived of several layers of hashes meaning that if you change one of those layers, such as a leaf, the resulting tree will have a different root — as illustrated below, `D3` was changed to `D7` which cascades into a different final hash.
-
-
-
-Merkle proofs are similar, but instead of sharing the whole file, the tree alone can be shared, if the verifier and the provider have different trees, it means somewhere down the tree there’s a difference!
-
-With this in mind, you can challenge storage providers holding a given file by selecting a random data leaf (i.e. a piece of the file) and sending them a random number — as illustrated in the figure below, the random number `R` is concatenated to the sector `D3` — the idea is, the storage provider cannot guess the random number, so, if they’re able to generate a tree with the random number, it must mean they have the file!
-
-
-
-At the same time you, the challenger, must also build a Merkle tree on your end, you do this by receiving the random data leaf that you selected and building the tree, in the end, if they do not match, the storage provider is cheating you!
-
-This is illustrated in the following picture, you — the verifier — just received the leaf `D3` and the random number `R`, you concatenate them together (`D3||R`) and you’re ready to recompute the tree. In red, you find the nodes directly affected by the change, these are the ones that **must** be recomputed — of course, you can always compute the tree from scratch; using `h(D1), h(D2), ...` but that is wasteful as the nodes marked in blue and their children did not change, and as such, you can just reuse them.
-
-
-
-This approach is great for small batches of data, it does not require any special hardware and can be implemented with different kinds of hashes, providing flexibility and lowering entry requirements for storage providers.
-
-However, this approach does not scale well because the challenger is required to receive both the Merkle tree *and* the random data leaf that is being challenged — as illustrated below.
-
-
-
-Now, consider that you need to do this over and over while you keep the file, over time, all those data blocks start accumulating and you end up transferring a lot of data over the network! Furthermore, you can't just request any size of data; if it is too small, the storage provider may cheat and brute force a solution; if it is too big, the transfer may take too long to be practical.
-
-### Zero Knowledge Proofs
-
-So, we've established that we can't transfer much data through the network, but it cannot be possible to brute force a solution. That's where Filecoin's solution comes in, they had the brilliant insight that you can use the random challenges along with zero-knowledge proofs for that. Filecoin's zero-knowledge proofs are constant in size, easy to verify and hard to fake.
-
-The generation of these proofs do not require special hardware features like trusted platform modules — you can generate a proof using your CPU, however, you will need a GPU if you want to generate proofs for larger files in practical time.
-
-In Filecoin, uploaded files (or deals) are combined into sectors, which the zero knowledge proofs are based on (and [directed acyclic graphs](https://www.youtube.com/watch?v=8_9ONpyRZEI), but we’re not covering that here). At this point Merkle trees are built over the original file and the final sector, both in its unsealed and sealed state. The root of each tree is then used when constructing the replication proof, similarly, the root of the sealed sector is used for the proofs of storage over time.
-
-Proof validation requires fewer resources than the generation step, enabling us to verify the storage proofs inside the Polkadot runtime. Their small size translates to less stress on the network; for example, Filecoin proofs can go from 192 bytes to a few KB in size, in comparison, a 1080p video frame, encoded using H.264 will be between 100 and 500 KB — note that video will usually be streamed at 24 frames per second or higher, the proof size pales in comparison!
-
-#### Relevant Links
-
-*
-*
diff --git a/docs/src/introduction.md b/docs/src/introduction.md
index b44c0b0f1..2532ce57b 100644
--- a/docs/src/introduction.md
+++ b/docs/src/introduction.md
@@ -1,12 +1,12 @@
# Introduction
-Welcome to the Polka Storage project!
+Welcome to the Polka Storage project book. This book is a work in progress and is updated continuously.
-This project aims to deliver a Polkadot-native system parachain for data storage.
+This project aims to build a native storage network for Polkadot.
-Since the Referendum approval, we've been busy developing the deliverables for Phase 2.
+We've now completed Phase 2 and have started work on Phase 3.
-**For [**Phase 2**](https://polkadot.polkassembly.io/referenda/1150), we have implemented:**
+**During [Phase 2](https://polkadot.polkassembly.io/referenda/1150), we implemented:**
- Storage Provider Pallet
- [`terminate_sectors`](./architecture/pallets/storage-provider.md#terminate_sectors)
@@ -37,7 +37,7 @@ Dedicated CLIs
- [`mater-cli`](./mater-cli/index.md) to convert or extract CARv2 files.
- [`storagext-cli`](./storagext-cli/index.md) to interact **directly** with the parachain — watch out, this is a low-level tool!
-Filecoin actor ports:
+Pallets:
- [Storage Provider](./architecture/pallets/storage-provider.md)
- [Market](./architecture/pallets/market.md)
@@ -50,7 +50,7 @@ Filecoin actor ports:
alt="Polka Storage Client Upload">
-**As a refresher, for [Phase 1](https://polkadot.polkassembly.io/referenda/494), we implemented the following:**
+**During [Phase 1](https://polkadot.polkassembly.io/referenda/494), we implemented the following:**
- Keeping track of [Storage Providers](./glossary.md#storage-provider),
- [Publishing](./architecture/pallets/market.md#publish_storage_deals) Market Deals on-chain,
@@ -69,7 +69,7 @@ We present a demo on how to [store a file](./getting-started/demo-file-store.md)
- OpenGov Referendum - Part 1 —
- OpenGov Referendum - Part 2 —
- Research Report —
-- Polkadot Forum News Post —
+- Weekly dev updates — https://forum.polkadot.network/t/polkadot-native-storage-updates/7021
---