
Recursive Loop Torontonian for faster computation #332

Merged: 26 commits merged into XanaduAI:master on Mar 7, 2022

Conversation

GregoryMorse
Contributor

Context:

The current loop Torontonian computation is less efficient than the state of the art. The improvement is based on discussions with @nquesada and @jakeffbulmer.

Description of the Change:

A recursive loop Torontonian is implemented, with a Numba-compiled version for optimized code.

Based on the loop Torontonian definition in https://arxiv.org/pdf/2202.04600.pdf:
"Threshold detection statistics of bosonic states", J. F. F. Bulmer, S. Paesani, R. S. Chadwick, and N. Quesada

Combined with https://arxiv.org/pdf/2109.04528.pdf:
"Polynomial speedup in Torontonian calculation by a scalable recursive algorithm", Ágoston Kaposi, Zoltán Kolarovszki, Tamás Kozsik, Zoltán Zimborás, and Péter Rakyta

Benefits:

Recursive computation improves the scaling from $n^{2.7355}\,2^n$ to $n^{1.0695}\,2^n$, a significant improvement in the polynomial-time part of the computation.

Possible Drawbacks:

None, beyond making the library slightly more complicated by introducing a new default value to use the new faster computation method.

Related GitHub Issues:

@GregoryMorse
Contributor Author

Although the matrices are Hermitian, all Torontonian computations use a determinant (computed recursively via Cholesky) that can pick up imaginary components, and in the Cholesky factor these imaginary components can end up on the main diagonal. In the regular Torontonian the imaginary part can be disregarded, but in the loop Torontonian it could be significant. The simplest fix is to keep only the stable, real component of the determinant, and to correct the main diagonal of the Cholesky factor in the same way. The imaginary components are extremely small and eventually cause underflows, which can lead to overflows and NaNs in the matrix. For stability, all Torontonian implementations need to apply this fix consistently, which is why the test is currently failing.
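
For illustration, here is a minimal sketch of that stabilization, assuming a plain NumPy Cholesky (the recursive Cholesky update used in this PR is different, so this only demonstrates the real-part correction itself):

```python
import numpy as np

def stabilized_cholesky_det(A):
    """Determinant of a (numerically) Hermitian positive-definite matrix,
    keeping only the real part of the Cholesky diagonal so that round-off
    imaginary components cannot propagate into the result."""
    L = np.linalg.cholesky(A)
    diag = np.real(np.diag(L))  # imaginary parts here are pure round-off noise
    return np.prod(diag) ** 2   # det(A) = prod(diag(L))**2 since A = L @ L.conj().T
```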

@nquesada
Collaborator

Hey @GregoryMorse --- Thanks for this super nice addition. For the cases of interest where the torontonians correspond to physical covariance matrices, one can guarantee that all the determinants appearing in the torontonian are non-negative.

@GregoryMorse
Contributor Author

GregoryMorse commented Feb 13, 2022

> Hey @GregoryMorse --- Thanks for this super nice addition. For the cases of interest where the torontonians correspond to physical covariance matrices, one can guarantee that all the determinants appearing in the torontonian are non-negative.

@nquesada That makes sense. However, the imaginary components must be eliminated from the determinants and Cholesky diagonals to preserve numeric stability, as far as I can tell. A simple `np.real(value) + 0j` should eliminate any absurdly small imaginary components.

I think we can make a nice test formula giving powers of 2 for the loop Torontonian using the simple $\exp(2 n \pi i)$, where we can range $n$ over an interval for proper testing of the gamma parameter. We could probably back-port that test to the non-loop Torontonian functions as well. Actually, on further thought, we won't be able to use Euler's identity, as it seems the exponent is always real and the full computation is as well. So we should just get rid of all the imaginary components altogether; they are making all 4 implementations too unstable. For testing, maybe we can find a case where the exponent is $\ln x$.

@GregoryMorse
Contributor Author

It is unclear to me whether the (loop) Torontonian is defined only for a domain-specific input or in a general mathematical sense. Though I realize the library has a clear domain, the permanent and (loop) Hafnian functions can certainly be defined in a very general mathematical sense, even though the library considers the real/complex domains with the necessary symmetric restriction on the Hafnian. If the domain of the Torontonian input is any Hermitian matrix, I actually think that is still too broad. The paper here has probably the most correct definition: https://arxiv.org/pdf/1807.01639.pdf. Mathematically, the computation can be done on matrices beyond the scope of this definition, though not on arbitrary Hermitian matrices. If we use the paper's definition, then all the test code we have written to date in test cases, tutorials, etc., might very well violate the criterion.

This needs to be clarified. I would prefer the broadest mathematical domain, but it's not necessary; it just might be elegant.

Until it's clear, resolving the imaginary-component instability is really just adding more domain restrictions, which means the domains of the recursive and non-recursive functions will differ. I did not realize I was going to have to get into this; most programmers would try to avoid these situations, but I think it has to be (or at least should be) confronted now, as it gives the library much better documentation and stability.

@nquesada
Collaborator

Hey @GregoryMorse --- at least in the original derivation of the torontonian and loop torontonian, these functions are only meant to be evaluated on matrices O derived from valid xp-covariance matrices V that satisfy V + i \Omega >= 0, with \Omega = [[0, I], [-I, 0]] the symplectic form. The connection between V and O is spelled out in https://arxiv.org/pdf/1807.01639.pdf.

For these matrices, it is guaranteed that every submatrix O_k appearing in the evaluation of a torontonian or loop torontonian is such that I-O_k is positive semidefinite.
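
As a quick numerical sanity check of that condition, one could verify the uncertainty relation along these lines (a sketch only; the helper name is made up, and it uses the convention stated above, V + i\Omega >= 0):

```python
import numpy as np

def is_valid_xp_covariance(V, atol=1e-10):
    """Check V + i*Omega >= 0 with Omega = [[0, I], [-I, 0]],
    i.e. that V is a valid xp-covariance matrix in this convention."""
    n = V.shape[0] // 2
    zero, ident = np.zeros((n, n)), np.identity(n)
    omega = np.block([[zero, ident], [-ident, zero]])
    # V + i*Omega is Hermitian, so eigvalsh gives its (real) eigenvalues
    return bool(np.all(np.linalg.eigvalsh(V + 1j * omega) > -atol))
```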

@GregoryMorse
Contributor Author

> Hey @GregoryMorse --- at least in the original derivation of the torontonian and loop torontonian, these functions are only meant to be evaluated on matrices O derived from valid xp-covariance matrices V that satisfy V + i \Omega >= 0, with \Omega = [[0, I], [-I, 0]] the symplectic form. The connection between V and O is spelled out in https://arxiv.org/pdf/1807.01639.pdf.
>
> For these matrices, it is guaranteed that every submatrix O_k appearing in the evaluation of a torontonian or loop torontonian is such that I - O_k is positive semidefinite.

@nquesada Yes. Although anything with only non-zero determinants might make the non-recursive versions more general (just precluding division by zero), I think we should stick to your paper's definition. I have revisited the definitions from the Torontonian and loop Torontonian papers as well as the recursive Torontonian paper. This will all be resolved shortly; I have been looking at it too broadly in a mathematical sense, mainly because the permanent, Hafnian, and loop Hafnian have important interpretations on graphs, etc. For the Torontonian, I think its application makes sense exactly the way the paper defines it, and even the gamma values are well defined. Until a realistic application outside our context exists, it is probably a mistake, and just unnecessarily complicating, to take such a broad perspective. So the simplest numerical-stability adjustments based on the properties you mentioned will be used.

@codecov

codecov bot commented Feb 14, 2022

Codecov Report

Merging #332 (be5c5f6) into master (cf3f887) will not change coverage.
The diff coverage is 100.00%.

@@            Coverage Diff            @@
##            master      #332   +/-   ##
=========================================
  Coverage   100.00%   100.00%           
=========================================
  Files           24        24           
  Lines         1723      1726    +3     
=========================================
+ Hits          1723      1726    +3     
Impacted Files Coverage Δ
thewalrus/__init__.py 100.00% <ø> (ø)
thewalrus/_torontonian.py 100.00% <100.00%> (ø)

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@GregoryMorse
Contributor Author

CodeFactor seems to be looking at an outdated `__init__.py`; the numeric-stability work has been done.

The remaining issue has nothing to do with that: passing a matrix of float64 and a gamma vector of complex128 won't compile. I think the data types should be better defined; ltor could at least check that they match. numba_ltor always converts to complex128, which isn't in the spirit of preserving user types as done elsewhere in the library.
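
A caller-side workaround for that mismatch is to promote both inputs to a common dtype before the call (a sketch of the idea only; the library could equally well do this promotion internally):

```python
import numpy as np

A = np.random.rand(4, 4)          # float64 matrix
gamma = np.random.rand(4) + 0j    # complex128 vector

# Promote both arguments to one shared dtype so the Numba-compiled kernel
# only ever sees a single, consistent signature.
common = np.promote_types(A.dtype, gamma.dtype)   # complex128 in this case
A, gamma = A.astype(common), gamma.astype(common)
```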

Contributor

@thisac left a comment

Thanks @GregoryMorse for this contribution! Looks great on the side of code quality. Just had a few very minor comments. 💯

Review comments (resolved) on .github/CHANGELOG.md and thewalrus/_torontonian.py
GregoryMorse and others added 5 commits February 15, 2022 16:51
Co-authored-by: Theodor <theodor.isacsson@gmail.com>
@GregoryMorse
Contributor Author

I had to remove the numerator being made real, as it was not done consistently in the recursive version. I think it's a question for @nquesada: can the `exp(0.5 * gamma * Az * gamma^dagger)` term have a significant imaginary component? numba_vac_prob for vacuum probabilities already takes real components, but delays doing so until after the square root. It seems pretty clear that we do not yet have a consistent practical method for numeric stability. The ideas are here, but for a library like thewalrus we should aim for the highest-quality, reference-like implementation. My opinion is to remove the imaginary component as soon as possible, i.e. before the square root: we know the real part can never be negative, and the imaginary part might start corrupting the accuracy of the square-root computation.
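
To make the proposed ordering concrete (a sketch; `value` stands for whatever determinant-like quantity feeds the square root):

```python
import numpy as np

def stable_sqrt(value):
    """Square root of a quantity that is real and non-negative in exact
    arithmetic: discard the round-off imaginary part before taking the
    root, rather than after, so it cannot skew the computation."""
    return np.sqrt(np.real(value))
```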

Do we have any examples already of realistic alpha values so we can generate realistic gamma values and do proper testing?

Also, I feel the documentation should be improved with formulas, more details, etc. Specifically here: https://github.com/XanaduAI/thewalrus/blob/master/docs/gbs.rst - notice on this page that "We first define the following useful quantities: \bm{X} &= ". Apparently it is not that useful there, as X is never mentioned again after being defined :). It seems many things were removed from this page; it was more informative in past versions, if I recall correctly.

@nquesada
Collaborator

One very minor comment: could we add a test checking that the recursive and non-recursive versions of ltor agree?

@GregoryMorse
Contributor Author

GregoryMorse commented Feb 19, 2022

> One very minor comment: could we add a test checking that the recursive and non-recursive versions of ltor agree?

@nquesada We are already doing this, albeit a little indirectly, in `test_tor_and_threshold_displacement_prob_agree(n_modes)`:

```python
import numpy as np
from thewalrus.random import random_covariance

def test_tor_and_threshold_displacement_prob_agree(n_modes):
    """Tests that threshold_detection_prob, ltor and the usual tor expression all agree
    when displacements are zero"""
    cv = random_covariance(n_modes)
    mu = np.zeros([2 * n_modes])
    # ... the rest of the test compares the three implementations
```

The comparison is done based on division by what is effectively a constant.

The real problem is that `mu = np.zeros([2 * n_modes])`. The paper says mu is calculated from the input matrix times a vector $\alpha$. The question is what the domain of $\alpha$ is, so that we can generate random vectors and properly verify the loop Torontonian. I can't find any generic example of this; I don't know whether $\alpha$ is real or complex, positive or negative, normalized or not, etc.

@nquesada
Collaborator

Any vector of even length where the first half of the elements is the conjugate of the second half is OK.
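
For concreteness, a vector with that structure can be generated like so (an illustrative helper, not part of the library):

```python
import numpy as np

def random_conjugate_pair_vector(n, seed=None):
    """Random vector of length 2*n whose first half is the complex
    conjugate of its second half, as described above."""
    rng = np.random.default_rng(seed)
    half = rng.normal(size=n) + 1j * rng.normal(size=n)
    return np.concatenate([np.conj(half), half])
```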

@GregoryMorse
Contributor Author

GregoryMorse commented Feb 19, 2022

> Any vector of even length where the first half of the elements is the conjugate of the second half is OK.

@nquesada Thank you, very useful, makes sense! :) Also fixed an important bug with conjugating gamma before solving the triangular system. I'm again thinking we should take only the real component of $v A^{-1} v^T$ in the non-recursive version before doing `np.exp(0.5 * value)`; it would make the implementations very consistent.

@nquesada
Collaborator

Other than fixing issues with the docs, this PR is ready to go.

@GregoryMorse
Contributor Author

> Other than fixing issues with the docs, this PR is ready to go.

Technically, my testing does not construct the gamma vector correctly. I realized we should use the inverse of the covariance matrix, but we already have the identity matrix minus it; plus, we should conjugate it. For numeric testing this is not really an issue, it only matters for giving a good, realistic usage example. In the paper it's even more complicated, as the covariance matrix should be constructed from the alpha vector, which itself comes from creation and annihilation operators. So the test code is not a valid practical-use example; if that's an issue I can fix it easily, but a practical-use example sounds more like a documentation issue.

@GregoryMorse
Contributor Author

@thisac Can we get your approval or a re-review?

Contributor

@thisac left a comment

Thanks @GregoryMorse, and sorry for the delay. Just had one very minor comment on the reference. The additions look great! I can override the codefactor issue and merge it when needed. 💯

Review comments (resolved) on docs/references.bib
@nquesada self-requested a review March 5, 2022 02:18
@nquesada
Collaborator

nquesada commented Mar 5, 2022

The CodeFactor bot is acting up. This is ready to be merged. Thanks @GregoryMorse for this contribution!

@thisac merged commit 606079a into XanaduAI:master Mar 7, 2022