
Commit cc5724d: Typos and grammar fixes

fulldecent committed Feb 16, 2018
1 parent 03b5be8
Showing 1 changed file with 7 additions and 7 deletions.

Paper.tex (14 changes: 7 additions & 7 deletions)
@@ -503,7 +503,7 @@ \subsubsection{Block Header Validity}
H'_{\mathrm{i}} &\equiv \max(H_{\mathrm{i}} - 3000000, 0)
\end{align}

- Note that $\mindifficulty$ is the difficulty of the genesis block. The \textit{Homestead} difficulty parameter, $\homesteadmod$, is used to affect a dynamic homeostasis of time between blocks, as the time between blocks varies, as discussed below, as implemented in EIP-2 by \cite{EIP-2}. In the Homestead release, the exponential difficulty symbol, $\expdiffsymb$ causes the difficulty to slowly increase (every 100,000 blocks) at an exponential rate, and thus increasing the block time difference, and putting time pressure on transitioning to proof-of-stake. This effect, known as the "difficulty bomb", or "ice age", was explained in EIP-649 by \cite{EIP-649} and delayed and implemented earlier in EIP-2. $\homesteadmod$ was also modified in EIP-100 with the use of $x$, the adjustment factor above, and the demoninator 9, in order to target the mean block time including uncle blocks by \cite{EIP-100}. Finally, in the Byzantium release, with EIP-649, the ice age was delayed by creating a fake block number, $H'_{\mathrm{i}}$, which is obtained by substracting three million from the actual block number, which in other words reduced $\expdiffsymb$ and the time difference between blocks, in order to allow more time to develop proof-of-stake and preventing the network from "freezing" up.
+ Note that $\mindifficulty$ is the difficulty of the genesis block. The \textit{Homestead} difficulty parameter, $\homesteadmod$, is used to affect a dynamic homeostasis of time between blocks, as the time between blocks varies, as discussed below, as implemented in EIP-2 by \cite{EIP-2}. In the Homestead release, the exponential difficulty symbol, $\expdiffsymb$ causes the difficulty to slowly increase (every 100,000 blocks) at an exponential rate, and thus increasing the block time difference, and putting time pressure on transitioning to proof-of-stake. This effect, known as the "difficulty bomb", or "ice age", was explained in EIP-649 by \cite{EIP-649} and delayed and implemented earlier in EIP-2. $\homesteadmod$ was also modified in EIP-100 with the use of $x$, the adjustment factor above, and the denominator 9, in order to target the mean block time including uncle blocks by \cite{EIP-100}. Finally, in the Byzantium release, with EIP-649, the ice age was delayed by creating a fake block number, $H'_{\mathrm{i}}$, which is obtained by subtracting three million from the actual block number, which in other words reduced $\expdiffsymb$ and the time difference between blocks, in order to allow more time to develop proof-of-stake and preventing the network from "freezing" up.

@jamesray1 (Contributor), Feb 16, 2018:

Ah, I can't even tell what was changed.

@nicksavers (Contributor), Feb 16, 2018:

"substracting" to "subtracting"

@jamesray1 (Contributor), Feb 16, 2018:

Whoops, that was my bad.
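For readers puzzling over that paragraph, here is a minimal Python sketch of the EIP-649 delay it describes; the function names are illustrative, and the exponential term assumes the Yellow Paper's $\lfloor 2^{\lfloor H'_{\mathrm{i}} \div 100000 \rfloor - 2} \rfloor$ form:

```python
# Illustrative sketch only; names are not the Yellow Paper's notation.

def fake_block_number(block_number: int) -> int:
    # H'_i = max(H_i - 3_000_000, 0): EIP-649 shifts the block number
    # seen by the exponential term back by three million blocks.
    return max(block_number - 3_000_000, 0)

def exponential_difficulty_term(block_number: int) -> int:
    # Doubles every 100,000 (fake) blocks; floor(2^negative) is 0, so
    # the term only kicks in once H'_i reaches 200,000.
    exponent = fake_block_number(block_number) // 100_000 - 2
    return 2 ** exponent if exponent >= 0 else 0
```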


\hypertarget{block_gas_limit_H__l}{}The canonical gas limit $H_{\mathrm{l}}$ of a block of header $H$ must fulfil the relation:
\begin{eqnarray}
@@ -525,7 +525,7 @@ \subsubsection{Block Header Validity}
\end{equation}
with $(n, m) = \mathtt{PoW}(H_{\hcancel{n}}, H_{\mathrm{n}}, \mathbf{d})$.

- \hypertarget{block_header_without_nonce_and_mixmash_h__cancel_n}{}\linkdest{H_cancel_n}Where $H_{\hcancel{n}}$ is the new block's header $H$, but \textit{without} the nonce and mix-hash components, $\mathbf{d}$ being the current DAG, a large data set needed to compute the mix-hash, and $\mathtt{PoW}$ is the proof-of-work function (see section \ref{ch:pow}): this evaluates to an array with the first item being the mix-hash, to proof that a correct DAG has been used, and the second item being a pseudo-random number cryptographically dependent on $H$ and $\mathbf{d}$. Given an approximately uniform distribution in the range $[0, 2^{64})$, the expected time to find a solution is proportional to the difficulty, $H_{\mathrm{d}}$.
+ \hypertarget{block_header_without_nonce_and_mixmash_h__cancel_n}{}\linkdest{H_cancel_n}Where $H_{\hcancel{n}}$ is the new block's header $H$, but \textit{without} the nonce and mix-hash components, $\mathbf{d}$ being the current DAG, a large data set needed to compute the mix-hash, and $\mathtt{PoW}$ is the proof-of-work function (see section \ref{ch:pow}): this evaluates to an array with the first item being the mix-hash, to prove that a correct DAG has been used, and the second item being a pseudo-random number cryptographically dependent on $H$ and $\mathbf{d}$. Given an approximately uniform distribution in the range $[0, 2^{64})$, the expected time to find a solution is proportional to the difficulty, $H_{\mathrm{d}}$.

@jamesray1 (Contributor), Feb 16, 2018:

Same here, can't spot the diff.

@nicksavers (Contributor), Feb 16, 2018:

"to proof" to "to prove"

@jamesray1 (Contributor), Feb 16, 2018:

That's weird, this was already merged in #381, but somehow it wasn't. I wonder how that happened.


This is the foundation of the security of the blockchain and is the fundamental reason why a malicious node cannot propagate newly created blocks that would otherwise overwrite (``rewrite'') history. Because the nonce must satisfy this requirement, and because its satisfaction depends on the contents of the block and in turn its composed transactions, creating new, valid, blocks is difficult and, over time, requires approximately the total compute power of the trustworthy portion of the mining peers.
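As a hedged sketch of that validity check, assuming a `pow_fn` with the same interface as $\mathtt{PoW}$ (returning the mix-hash and the pseudo-random number $n$):

```python
# Sketch only: pow_fn stands in for the Ethash PoW function and is
# assumed to return (mix_hash, n) as PoW(H_cancel_n, H_n, d) does.

def seal_is_valid(header_without_nonce: bytes, nonce: int, dag,
                  difficulty: int, claimed_mix_hash: bytes, pow_fn) -> bool:
    mix_hash, n = pow_fn(header_without_nonce, nonce, dag)
    # The mix-hash proves a correct DAG was used; n must fall under the
    # difficulty-derived target, so the expected search time grows in
    # proportion to the difficulty H_d.
    return mix_hash == claimed_mix_hash and n <= 2**256 // difficulty
```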

@@ -563,7 +563,7 @@ \section{Transaction Execution} \label{ch:transactions}
\item The transaction is well-formed RLP, with no additional trailing bytes;
\item the transaction signature is valid;
\item the \hyperlink{transaction_nonce}{transaction nonce} is valid (equivalent to the \hyperlink{account_nonce}{sender account's current nonce});
- \item the gas limit is no smaller than the intrinsic gas, $g_0$, used by the transaction;
+ \item the gas limit is no smaller than the intrinsic gas, $g_0$, used by the transaction; and
\item the sender account balance contains at least the cost, $v_0$, required in up-front payment.
\end{enumerate}
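A hedged Python sketch of the checklist above; `tx`, `sender`, and `intrinsic_gas` are hypothetical stand-ins, RLP well-formedness is assumed to be checked at decode time, and the up-front cost follows $v_0 = T_{\mathrm{g}} T_{\mathrm{p}} + T_{\mathrm{v}}$:

```python
# Sketch only: `tx` and `sender` are hypothetical decoded objects.

def transaction_is_valid(tx, sender, intrinsic_gas) -> bool:
    upfront_cost = tx.gas_limit * tx.gas_price + tx.value   # v_0
    return (tx.signature_is_valid()                         # (ii)
            and tx.nonce == sender.nonce                    # (iii)
            and tx.gas_limit >= intrinsic_gas(tx)           # (iv) g_0
            and sender.balance >= upfront_cost)             # (v)
```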

@@ -1223,7 +1223,7 @@ \subsection{Data Feeds}
The general pattern involves a single contract within Ethereum which, when given a message call, replies with some timely information concerning an external phenomenon. An example might be the local temperature of New York City. This would be implemented as a contract that returned that value of some known point in storage. Of course this point in storage must be maintained with the correct such temperature, and thus the second part of the pattern would be for an external server to run an Ethereum node, and immediately on discovery of a new block, creates a new valid transaction, sent to the contract, updating said value in storage. The contract's code would accept such updates only from the identity contained on said server.
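For illustration only, a sketch of that server-side loop; `eth`, `send_transaction`, and `encode_update` are hypothetical stand-ins rather than any real client API:

```python
FEED_CONTRACT = "0x..."   # hypothetical feed contract address
SERVER_KEY = "..."        # key for the identity the contract accepts

def encode_update(value: int) -> bytes:
    # Hypothetical encoding of an "update(value)" message call.
    return b"update" + value.to_bytes(32, "big", signed=True)

def on_new_block(eth, read_temperature):
    # Immediately on discovery of a new block, send the contract a
    # transaction carrying the fresh reading.
    reading = read_temperature()              # e.g. NYC temperature
    eth.send_transaction(key=SERVER_KEY, to=FEED_CONTRACT,
                         data=encode_update(reading))
```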

\subsection{Random Numbers}
- Providing random numbers within a deterministic system is, naturally, an impossible task. However, we can approximate with pseudo-random numbers by utilising data which is generally unknowable at the time of transacting. Such data might include the block's hash, the block's timestamp and the block's beneficiary address. In order to make it hard for malicious miner to control those values, one should use the {\small BLOCKHASH} operation in order to use hashes of the previous 256 blocks as pseudo-random numbers. For a series of such numbers, a trivial solution would be to add some constant amount and hashing the result.
+ Providing random numbers within a deterministic system is, naturally, an impossible task. However, we can approximate with pseudo-random numbers by utilising data which is generally unknowable at the time of transacting. Such data might include the block's hash, the block's timestamp and the block's beneficiary address. In order to make it hard for malicious miners to control those values, one should use the {\small BLOCKHASH} operation in order to use hashes of the previous 256 blocks as pseudo-random numbers. For a series of such numbers, a trivial solution would be to add some constant amount and hashing the result.
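A minimal sketch of such a series; note that Python's `sha3_256` is not Ethereum's Keccak-256 (the padding rules differ), and an incrementing counter plays the role of the added constant:

```python
import hashlib

def pseudo_random_series(block_hash: bytes, count: int):
    # Derive a series of pseudo-random numbers from one BLOCKHASH
    # value by adding a counter and hashing the result.
    for i in range(count):
        digest = hashlib.sha3_256(block_hash + i.to_bytes(32, "big")).digest()
        yield int.from_bytes(digest, "big")
```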

\section{Future Directions} \label{ch:future}

@@ -1382,7 +1382,7 @@ \section{Hex-Prefix Encoding}\label{app:hexprefix}
Thus the high nibble of the first byte contains two flags; the lowest bit encoding the oddness of the length and the second-lowest encoding the flag $t$. The low nibble of the first byte is zero in the case of an even number of nibbles and the first nibble in the case of an odd number. All remaining nibbles (now an even number) fit properly into the remaining bytes.
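A minimal sketch of that packing, assuming `nibbles` is a list of values in the range $[0, 16)$ and `t` is the flag:

```python
def hex_prefix_encode(nibbles, t: bool) -> bytes:
    # High nibble of the first byte: bit 1 carries the flag t, bit 0
    # the oddness of the nibble count.
    odd = len(nibbles) % 2
    first = (2 * int(t) + odd) << 4
    if odd:
        first |= nibbles[0]          # odd count: first nibble rides along
        nibbles = nibbles[1:]
    body = bytes(16 * nibbles[i] + nibbles[i + 1]
                 for i in range(0, len(nibbles), 2))
    return bytes([first]) + body

# Example: hex_prefix_encode([1, 2, 3], t=True) == b'\x31\x23'
```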

\section{Modified Merkle Patricia Tree}\label{app:trie}\hypertarget{trie}{}
- The modified Merkle Patricia tree (trie) provides a persistent data structure to map between arbitrary-length binary data (byte arrays). It is defined in terms of a mutable data structure to map between 256-bit binary fragments and arbitrary-length binary data, typically implemented as a database. The core of the trie, and its sole requirement in terms of the protocol specification is to provide a single value that identifies a given set of key-value pairs, which may be either a 32 byte sequence or the empty byte sequence. It is left as an implementation consideration to store and maintain the structure of the trie in a manner that allows effective and efficient realisation of the protocol.
+ The modified Merkle Patricia tree (trie) provides a persistent data structure to map between arbitrary-length binary data (byte arrays). It is defined in terms of a mutable data structure to map between 256-bit binary fragments and arbitrary-length binary data, typically implemented as a database. The core of the trie, and its sole requirement in terms of the protocol specification is to provide a single value that identifies a given set of key-value pairs, which may be either a 32-byte sequence or the empty byte sequence. It is left as an implementation consideration to store and maintain the structure of the trie in a manner that allows effective and efficient realisation of the protocol.

@jamesray1 (Contributor), Feb 16, 2018:

I can't tell here.

@nicksavers (Contributor), Feb 16, 2018:

"32 byte" to "32-byte"


Formally, we assume the input value $\mathfrak{I}$, a set containing pairs of byte sequences:
\begin{equation}
@@ -2335,7 +2335,7 @@ \section{Genesis Block}\label{app:genesis}\hypertarget{Genesis_Block}{}
\big( \big( 0_{256}, \mathtt{\tiny KEC}\big(\mathtt{\tiny RLP}\big( () \big)\big), 0_{160}, stateRoot, 0, 0, 0_{2048}, 2^{17}, 0, 0, 3141592, time, 0, 0_{256}, \mathtt{\tiny KEC}\big( (42) \big) \big), (), () \big)
\end{equation}

- Where $0_{256}$ refers to the parent hash, a 256-bit hash which is all zeroes; $0_{160}$ refers to the beneficiary address, a 160-bit hash which is all zeroes; $0_{2048}$ refers to the log bloom, 2048-bit of all zeros; $2^{17}$ refers to the difficulty; the transaction trie root, receipt trie root, gas used, block number and extradata are both $0$, being equivalent to the empty byte array. The sequences of both ommers and transactions are empty and represented by $()$. $\mathtt{\tiny KEC}\big( (42) \big)$ refers to the Keccak hash of a byte array of length one whose first and only byte is of value 42, used for the nonce. $\mathtt{\tiny KEC}\big(\mathtt{\tiny RLP}\big( () \big)\big)$ value refers to the hash of the ommer lists in RLP, both empty lists.
+ Where $0_{256}$ refers to the parent hash, a 256-bit hash which is all zeroes; $0_{160}$ refers to the beneficiary address, a 160-bit hash which is all zeroes; $0_{2048}$ refers to the log bloom, 2048-bit of all zeros; $2^{17}$ refers to the difficulty; the transaction trie root, receipt trie root, gas used, block number and extradata are both $0$, being equivalent to the empty byte array. The sequences of both ommers and transactions are empty and represented by $()$. $\mathtt{\tiny KEC}\big( (42) \big)$ refers to the Keccak hash of a byte array of length one whose first and only byte is of value 42, used for the nonce. $\mathtt{\tiny KEC}\big(\mathtt{\tiny RLP}\big( () \big)\big)$ value refers to the hash of the ommer list in RLP, both empty lists.

@jamesray1 (Contributor), Feb 16, 2018:

Again, I can't spot the diff.

@nicksavers (Contributor), Feb 16, 2018:

The hash in a block header refers to a single ommer list.

@fulldecent (Author, Contributor), Feb 16, 2018:

@nicksavers Now you're just showing off. Are you good at those cross-your-eyes-and-look-at-the-poster things too?

@jamesray1 (Contributor), Feb 16, 2018:

Haha, OK, OK. I didn't spend much time on each one, I was hoping someone would tell me instead. ;)

@jamesray1 (Contributor), Feb 16, 2018:

GitHub isn't great at picking up diffs within long paragraphs.


The proof-of-concept series include a development premine, making the state root hash some value $stateRoot$. Also $time$ will be set to the initial timestamp of the genesis block. The latest documentation should be consulted for those values.
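Putting the tuple and the prose together, a sketch of the genesis header as a mapping; the field names are descriptive stand-ins, the `keccak` helper substitutes Python's SHA3-256 for Ethereum's Keccak-256 purely so the sketch runs, and `STATE_ROOT` and `TIME` are the premine-dependent placeholders just mentioned:

```python
import hashlib

def keccak(data: bytes) -> bytes:
    # Stand-in only: SHA3-256 is NOT Ethereum's Keccak-256 (the padding
    # differs); used here just so the sketch executes.
    return hashlib.sha3_256(data).digest()

RLP_EMPTY_LIST = bytes([0xC0])    # RLP encoding of the empty list ()
STATE_ROOT = b"\x00" * 32         # placeholder premine state root
TIME = 0                          # placeholder initial timestamp

GENESIS_HEADER = {
    "parent_hash":       b"\x00" * 32,            # 0_256
    "ommers_hash":       keccak(RLP_EMPTY_LIST),  # KEC(RLP(()))
    "beneficiary":       b"\x00" * 20,            # 0_160
    "state_root":        STATE_ROOT,
    "transactions_root": b"",                     # 0: empty byte array
    "receipts_root":     b"",
    "logs_bloom":        b"\x00" * 256,           # 0_2048
    "difficulty":        2**17,
    "number":            0,
    "gas_limit":         3141592,
    "gas_used":          0,
    "timestamp":         TIME,
    "extra_data":        b"",                     # 0: empty byte array
    "mix_hash":          b"\x00" * 32,            # 0_256
    "nonce":             keccak(bytes([42])),     # KEC((42))
}
GENESIS_BLOCK = (GENESIS_HEADER, (), ())  # no transactions, no ommers
```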

@@ -2413,7 +2413,7 @@ \subsubsection{Cache}
\begin{equation}
n = \left\lfloor\frac{c_{size}}{J_{hashbytes}}\right\rfloor
\end{equation}
- The cache is calculated by performing $J_{cacherounds}$ rounds of the RandMemoHash algorithm to the inital cache $\mathbf{c'}$:
+ The cache is calculated by performing $J_{cacherounds}$ rounds of the RandMemoHash algorithm to the initial cache $\mathbf{c'}$:
\begin{equation}
\mathbf{c} = E_{cacherounds}(\mathbf{c'}, J_{cacherounds})
\end{equation}
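A hedged sketch of those two equations; `keccak512` is a stand-in hash, and the inner loop follows the RandMemoHash pattern of mixing each item with its predecessor and a pseudo-randomly chosen partner:

```python
def cache_rounds(cache, rounds, keccak512):
    # c = E_cacherounds(c', J_cacherounds): apply `rounds` passes of a
    # RandMemoHash-style round to the n-item initial cache c'.
    n = len(cache)   # n = floor(c_size / J_hashbytes) items
    for _ in range(rounds):
        for i in range(n):
            # Partner index is drawn pseudo-randomly from item i itself.
            v = int.from_bytes(cache[i][:4], "little") % n
            mixed = bytes(a ^ b for a, b in zip(cache[(i - 1) % n], cache[v]))
            cache[i] = keccak512(mixed)
    return cache
```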
