Update classical_shadows.ipynb
PeilinZHENG committed Aug 9, 2023
1 parent d7281c1 commit b18c046
Showing 1 changed file with 10 additions and 10 deletions.
20 changes: 10 additions & 10 deletions docs/source/tutorials/classical_shadows.ipynb
@@ -21,7 +21,7 @@
{
"cell_type": "markdown",
"source": [
"[Classical shadows](https://www.nature.com/articles/s41567-020-0932-7) formalism is an efficient method to estimate multiple observables. In this tutorial, we will show how to use the `shadows` module in `TensorCircuit` to implement classic shadows in Pauli basis."
"[Classical shadows](https://www.nature.com/articles/s41567-020-0932-7) formalism is an efficient method to estimate multiple observables. In this tutorial, we will show how to use the ``shadows`` module in ``TensorCircuit`` to implement classic shadows in Pauli basis."
],
"metadata": {
"collapsed": false
@@ -134,14 +134,14 @@
{
"cell_type": "markdown",
"source": [
"We first set the number of qubits $n$ and the number of repeated measurements $r$ on each Pauli string. Then from the target observable Pauli strings $\\{O_i|i=1,\\cdots,M\\}$ (0, 1, 2, and 3 correspond to $\\mathbb{I}$, $X$, $Y$, and $Z$, respectively), the error $\\epsilon$ and the rate of failure $\\delta$, we can use `shadow_bound` function to get the total number of snapshots $N$ and the number of equal parts $K$ to split the shadow snapshot states to compute the median of means:\n",
"We first set the number of qubits $n$ and the number of repeated measurements $r$ on each Pauli string. Then from the target observable Pauli strings $\\{O_i|i=1,\\cdots,M\\}$ (0, 1, 2, and 3 correspond to $\\mathbb{I}$, $X$, $Y$, and $Z$, respectively), the error $\\epsilon$ and the rate of failure $\\delta$, we can use ``shadow_bound`` function to get the total number of snapshots $N$ and the number of equal parts $K$ to split the shadow snapshot states to compute the median of means:\n",
"$$\n",
"\\begin{eqnarray}\n",
" K&=&2\\log(2M/\\delta),\\\\\n",
" N&=&K\\frac{34}{\\epsilon^2}\\max_{1\\le i\\le M}\\left\\|O_i-\\frac{\\text{Tr}(O_i)}{2^n}\\mathbb{I}\\right\\|^2_{\\text{shadow}}=K\\frac{34}{\\epsilon^2}3^{\\max_{1\\le i\\le M}k_i},\n",
"\\end{eqnarray}\n",
"$$\n",
"where $k_i$ is the number of nontrivial Pauli matrices in $O_i$. Please refer to the Theorem S1 and Lemma S3 in [Huang, Kueng and Preskill (2020)](https://www.nature.com/articles/s41567-020-0932-7) for the details of proof. It should be noted that `shadow_bound` has a certain degree of overestimation of $N$, and so many measurements are not really needed in practice. And `shadow_bound` is not jitable and no need to jit."
"where $k_i$ is the number of nontrivial Pauli matrices in $O_i$. Please refer to the Theorem S1 and Lemma S3 in [Huang, Kueng and Preskill (2020)](https://www.nature.com/articles/s41567-020-0932-7) for the details of proof. It should be noted that ``shadow_bound`` has a certain degree of overestimation of $N$, and so many measurements are not really needed in practice. Moreover, ``shadow_bound`` is not jitable and no need to jit."
],
"metadata": {
"collapsed": false
@@ -226,7 +226,7 @@
{
"cell_type": "markdown",
"source": [
"We randomly generate Pauli strings. Since the function after just-in-time (jit) compilation does not support random sampling, we need to generate all random states in advance, that is, variable `status`."
"We randomly generate Pauli strings. Since the function after just-in-time (jit) compilation does not support random sampling, we need to generate all random states in advance, that is, variable ``status``."
],
"metadata": {
"collapsed": false
@@ -251,7 +251,7 @@
{
"cell_type": "markdown",
"source": [
"If `measurement_only`=True, the outputs of `shadow_snapshots` are snapshot bit strings $b=s_1\\cdots s_n,\\ s_j\\in\\{0,1\\}$, otherwise the outputs are snapshot states $\\{u_{j}^{\\dagger}|s_j\\rangle\\langle s_j| u_j\\ |j=1,\\cdots,n\\}$. If you only need to generate one batch of snapshots or generate multiple batches of snapshots with different `nps` or `r`, jit cannot provide speedup. JIT will only accelerate when the same shape of snapshots are generated multiple times."
"If ``measurement_only=True`` (default ``False``), the outputs of ``shadow_snapshots`` are snapshot bit strings $b=s_1\\cdots s_n,\\ s_j\\in\\{0,1\\}$, otherwise the outputs are snapshot states $\\{u_{j}^{\\dagger}|s_j\\rangle\\langle s_j| u_j\\ |j=1,\\cdots,n\\}$. If you only need to generate one batch of snapshots or generate multiple batches of snapshots with different ``nps`` or ``r``, jit cannot provide speedup. JIT will only accelerate when the same shape of snapshots are generated multiple times."
],
"metadata": {
"collapsed": false
@@ -300,7 +300,7 @@
{
"cell_type": "markdown",
"source": [
"Since the operation of taking the median is not jitable, the outputs of `expectation_ps_shadows` have $K$ values, and we need to take the median of them."
"Since the operation of taking the median is not jitable, the outputs of ``expectation_ps_shadows`` have $K$ values, and we need to take the median of them."
],
"metadata": {
"collapsed": false
@@ -328,7 +328,7 @@
{
"cell_type": "markdown",
"source": [
"It can be seen from the running time that every time the number of Pauli strings changes, `shadow_expec` will be recompiled, but for the same number of Pauli strings but different observables, `shadow_expec` will only be compiled once. In the end, the absolute errors given by classical shadows are much smaller than the $\\epsilon=0.1$ we set, so `shadow_bound` gives a very loose upper bound."
"It can be seen from the running time that every time the number of Pauli strings changes, ``shadow_expec`` will be recompiled, but for the same number of Pauli strings but different observables, ``shadow_expec`` will only be compiled once. In the end, the absolute errors given by classical shadows are much smaller than the $\\epsilon=0.1$ we set, so ``shadow_bound`` gives a very loose upper bound."
],
"metadata": {
"collapsed": false
@@ -518,7 +518,7 @@
{
"cell_type": "markdown",
"source": [
"We can also use classical shadows to calculate entanglement entropy. `entropy_shadow` first reconstructs the reduced density matrix, then solves the eigenvalues and finally calculates the entanglement entropy from non-negative eigenvalues. Since the time and space complexity of reconstructing the density matrix is exponential with respect to the system size, this method is only efficient when the reduced system size is constant. `entropy_shadow` is jitable, but it will only accelerate when the reduced sub systems have the same shape."
"We can also use classical shadows to calculate entanglement entropy. ``entropy_shadow`` first reconstructs the reduced density matrix, then solves the eigenvalues and finally calculates the entanglement entropy from non-negative eigenvalues. Since the time and space complexity of reconstructing the density matrix is exponential with respect to the system size, this method is only efficient when the reduced system size is constant. ``entropy_shadow`` is jitable, but it will only accelerate when the reduced sub systems have the same shape."
],
"metadata": {
"collapsed": false
@@ -594,7 +594,7 @@
" \\text{Tr}(\\rho_A^2)&=&2^k\\sum_{b,b'\\in\\{0,1\\}^k}(-2)^{-H(b,b')}\\overline{P(b)P(b')},\n",
"\\end{eqnarray}\n",
"$$\n",
"where $A$ is the $k$-d reduced system, $H(b,b')$ is the Hamming distance between $b$ and $b'$, $P(b)$ is the probability for measuring $\\rho_A$ and obtaining the outcomes $b$ thus we need a larger $r$ to obtain a good enough priori probability, and the overline means the average on all random selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use `renyi_entropy_2` to implement this method, but it is not jitable because we need to build the dictionary based on the bit strings obtained by measurements, which is a dynamical process. Compared with `entropy_shadow`, it cannot filter out non-negative eigenvalues, so the accuracy is slightly worse."
"where $A$ is the $k$-d reduced system, $H(b,b')$ is the Hamming distance between $b$ and $b'$, $P(b)$ is the probability for measuring $\\rho_A$ and obtaining the outcomes $b$ thus we need a larger $r$ to obtain a good enough priori probability, and the overline means the average on all random selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use ``renyi_entropy_2`` to implement this method, but it is not jitable because we need to build the dictionary based on the bit strings obtained by measurements, which is a dynamical process. Compared with ``entropy_shadow``, it cannot filter out non-negative eigenvalues, so the accuracy is slightly worse."
],
"metadata": {
"collapsed": false
@@ -657,7 +657,7 @@
{
"cell_type": "markdown",
"source": [
"We can use `global_shadow_state`, `global_shadow_state1` or `global_shadow_state2` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same. All functions are jitable, but since we only use each of them once here, they are not wrapped. In terms of implementation details, `global_shadow_state` uses `kron` and is recommended, the other two use `einsum`."
"We can use ``global_shadow_state``, ``global_shadow_state1`` or ``global_shadow_state2`` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same. All functions are jitable, but since we only use each of them once here, they are not wrapped. In terms of implementation details, ``global_shadow_state`` uses ``kron`` and is recommended, the other two use ``einsum``."
],
"metadata": {
"collapsed": false