Make QuantumState.sample_counts faster, O(1) rather than O(N) #8535
Comments

`StateVector.sample_counts`

TL;DR: There is a single numpy function that does what we want. We should use it.

A discrete distribution defined by a density (a vector of outcome probabilities) is sometimes called a generalized Bernoulli distribution. Just as drawing many samples from a Bernoulli distribution is equivalent to drawing one from the associated binomial distribution (i.e. the coin-flipping example mentioned in the issue body), drawing many samples from a generalized Bernoulli distribution is equivalent to drawing a single sample from a multinomial distribution. numpy has a function for this. I didn't look at the code at first, but it is easy to see that it uses an efficient algorithm. Here is the same example I gave in the issue body:

```python
In [1]: from numpy.random import default_rng

In [2]: import numpy as np

In [3]: probs = np.array([.1, .2, .4, .3])

In [4]: rng = default_rng()

In [5]: rng.multinomial(100_000_000_000, probs)
Out[5]: array([10000046586, 20000067875, 40000025644, 29999859895])

In [6]: %timeit rng.multinomial(100_000_000_000, probs)
992 ns ± 10.6 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
```

EDIT: It is exactly the same algorithm as the one I described in the issue body. It is pure C, but there is a bit of overhead somewhere (or maybe the rng implementation is limiting):

```c
void random_multinomial(bitgen_t *bitgen_state, RAND_INT_TYPE n,
                        RAND_INT_TYPE *mnix, double *pix, npy_intp d,
                        binomial_t *binomial) {
  double remaining_p = 1.0;
  npy_intp j;
  RAND_INT_TYPE dn = n;
  for (j = 0; j < (d - 1); j++) {
    mnix[j] = random_binomial(bitgen_state, pix[j] / remaining_p, dn, binomial);
    dn = dn - mnix[j];
    if (dn <= 0) {
      break;
    }
    remaining_p -= pix[j];
  }
  if (dn > 0) {
    mnix[d - 1] = dn;
  }
}
```

Note the check to see whether any samples remain to be drawn; I neglected to do that in the Julia version.

```julia
julia> const multd = Multinomial(10^12, [0.1, 0.2, 0.4, 0.3]);

julia> print(rand(multd))
[99999679158, 200000111995, 400000751567, 299999457280]

julia> @btime rand(multd)
  186.036 ns (1 allocation: 96 bytes)
```

numpy uses PCG64 by default and Julia uses Xoshiro256++. Casual comments online indicate that the latter is faster; it's not yet in numpy. A test with a probability vector of length 100 rather than 4 shows numpy only about 20% slower than Julia. This scaling in numpy performance is very common: it usually indicates that the worse performance with smaller arrays is due to overhead in the Python/numpy interface.
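For `sample_counts` specifically, the counts dict can be built from a single `multinomial` draw. A minimal sketch (the helper name is illustrative, not Qiskit's actual implementation):

```python
import numpy as np

def sample_counts_multinomial(probs, shots, rng=None):
    """Build a {index: count} map with one multinomial draw.

    Cost is O(len(probs)), independent of `shots`.
    (Hypothetical helper, not Qiskit's actual code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = rng.multinomial(shots, probs)
    # Keep only outcomes that actually occurred, mirroring a counts map.
    return {i: int(c) for i, c in enumerate(counts) if c > 0}

probs = np.array([0.1, 0.2, 0.4, 0.3])
counts = sample_counts_multinomial(probs, 100_000)
```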
EDIT: See #8618 for a more informed take on this issue.

EDIT: I didn't realize when writing this that it has already been discussed in various places in Qiskit, and even implemented (elsewhere, not in `QuantumState`); for example, #8137.

The problem

The current implementation of sampling $N$ shot "counts" from `QuantumState` is $O(N)$, but there is a method that is $O(1)$. Specifically, one should sample directly from the distribution of the number of counts for each index, rather than sampling from the distribution of indices and then building a count map. The current method is $O(N)$ in both time and space (although the latter is a constraint due to Python, not the algorithm itself). The method proposed here is $O(1)$ in both time and space.
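A minimal numpy sketch of this index-sampling approach (illustrative only; Qiskit's actual code path differs):

```python
from collections import Counter

import numpy as np

rng = np.random.default_rng(seed=1234)

# Probabilities from the state-vector amplitudes.
state = np.array([0.5, 0.5j, -0.5, 0.5])  # an example normalized state vector
probs = np.abs(state) ** 2

# Draw N individual samples -- this is the O(N) part.
N = 10_000
samples = rng.choice(len(probs), size=N, p=probs)

# Build the count map from the list of sampled indices.
counts = Counter(samples.tolist())
```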
Currently, to build a count map of `N` samples of measuring the state vector we:

1. Compute the probabilities from the `Statevector`. (This step varies depending on the subclass of `QuantumState`.)
2. Draw `N` samples from this discrete distribution.
3. Build a dict `counts` where each key is an integer index `i`, and `counts[i]` is the number of times `i` occurred in the list generated in step 2.

The solution
EDIT: See the comment above (`StateVector.sample_counts`) for a numpy builtin function that implements this solution.
A similar but simpler problem is sampling the number of heads in $N$ coin tosses. You could simulate $N$ coin tosses and count the number of heads. But it is faster to instead sample once from the binomial distribution.
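The coin-toss equivalence can be checked directly with numpy (an illustrative sketch; variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
N, p = 1_000_000, 0.5

# Simulating every toss and counting heads: O(N) work.
heads_slow = int((rng.random(N) < p).sum())

# A single draw from Binomial(N, p): O(1) in N.
heads_fast = int(rng.binomial(N, p))
```

Both numbers are draws from the same Binomial(N, p) distribution; only the cost of producing them differs.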
For our sampling problem, the analogous approach is to draw the count for each index from a binomial distribution conditioned on the samples and probability mass not yet assigned, rather than drawing the indices one at a time.
I didn't code this in Python. But the Julia code below can be easily translated.
Here is the code:
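A Python sketch of the conditional-binomial solution described above (illustrative, not the author's original Julia listing; numpy's `Generator.multinomial` implements the same scheme in C):

```python
import numpy as np

def multinomial_by_conditional_binomials(n, probs, rng=None):
    """Sample multinomial counts via successive conditional binomial draws.

    Each count is binomial in the draws remaining, with its probability
    renormalized by the probability mass not yet assigned.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = np.zeros(len(probs), dtype=np.int64)
    remaining_p = 1.0
    remaining_n = n
    for j in range(len(probs) - 1):
        counts[j] = rng.binomial(remaining_n, probs[j] / remaining_p)
        remaining_n -= counts[j]
        if remaining_n <= 0:  # the check the C version also makes
            return counts
        remaining_p -= probs[j]
    counts[-1] = remaining_n  # any leftover draws fall in the last category
    return counts

counts = multinomial_by_conditional_binomials(10**12, [0.1, 0.2, 0.4, 0.3])
```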