
add gauss function for general weight functions #41

Merged 1 commit into master on Dec 5, 2019
Conversation

@stevengj (Member) commented Dec 5, 2019

This PR generalizes the QuadGK.gauss function to accept a function parameter W that specifies an arbitrary weight function W(x), as well as arbitrary endpoints (a,b).

This is useful for precomputing quadrature weights for integrating functions with known singularities and/or infinite intervals.

The FastGaussQuadrature package by @dlfivefifty computes Gaussian quadrature weights much more quickly for all of the "standard" weight functions, but it is also nice to have a Julia routine that works for arbitrary weights.

(e.g. in my field it is common to integrate functions weighted by the solar spectrum, which is an extremely messy weight function derived from tabulated data, and this lets me obtain a quadrature rule for that.)
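The PR itself is Julia, but the underlying construction (build the three-term recurrence for the W-orthogonal polynomials, then apply Golub–Welsch) can be sketched in a few lines of Python. `gauss_weighted` and its `nquad` parameter are illustrative names, not the PR's API, and the monomial-free Stieltjes step here is one of several possible stable variants:

```python
import numpy as np

def gauss_weighted(W, n, a, b, nquad=2000):
    """Sketch of an n-point Gaussian rule for the weight W(x) on (a, b)."""
    # Inner products <f,g> = integral of f*g*W, approximated on a fine
    # Gauss-Legendre grid mapped to (a, b).
    t, u = np.polynomial.legendre.leggauss(nquad)
    x = 0.5 * (b - a) * t + 0.5 * (a + b)
    w = 0.5 * (b - a) * u * W(x)
    dot = lambda f, g: np.sum(w * f * g)

    # Stieltjes procedure for the monic recurrence
    #   p_{k+1} = (x - alpha_k) p_k - beta_k p_{k-1},
    # working with grid values of p_k (never monomial coefficients).
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    nrm_prev, nrm = 1.0, dot(p, p)      # nrm = mu_0 = integral of W
    mu0 = nrm
    alpha, beta = np.empty(n), np.empty(n - 1)
    for k in range(n):
        alpha[k] = dot(x * p, p) / nrm
        if k == n - 1:
            break
        bk = nrm / nrm_prev if k > 0 else 0.0
        p_prev, p = p, (x - alpha[k]) * p - bk * p_prev
        nrm_prev, nrm = nrm, dot(p, p)
        beta[k] = nrm / nrm_prev        # this is beta_{k+1}

    # Golub-Welsch: nodes are eigenvalues of the Jacobi matrix,
    # weights come from the first eigenvector components.
    J = np.diag(alpha) + np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, V = np.linalg.eigh(J)
    return nodes, mu0 * V[0, :] ** 2
```

With `W(x) = 1` on `(-1, 1)` this reproduces the ordinary Gauss–Legendre rule, which is a convenient sanity check.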

@dlfivefifty (Member)

For the record FastGaussQuadrature.jl was developed by @ajt60gaibb

@stevengj stevengj merged commit 7d8ec13 into master Dec 5, 2019
@stevengj stevengj deleted the weightedgauss branch December 5, 2019 18:48
@stevengj (Member, Author) commented Dec 7, 2019

Note that the current "textbook" method in this PR is numerically unstable (for an arbitrary quadrature order) because it works with polynomials in coefficient form. It is nice to have as a baseline and was fun to implement, but I should fix it in order to make this more practical; otherwise you can't go much beyond order 10.

This weekend, I'll rewrite it to perform the Lanczos procedure on polynomials represented in the Chebyshev basis instead, which I think will fix the instability. This will also eliminate the dependency on Polynomials.jl — I only need to write a couple of 5-line functions to evaluate Chebyshev polynomials (by a Clenshaw recurrence) and to multiply them by x (for which there is also a simple recurrence), and I should also be able to make it faster.
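Those two helpers really are tiny. As an illustration (in Python rather than the planned Julia, with hypothetical names `clenshaw` and `chebmulx`):

```python
import numpy as np

def clenshaw(c, x):
    # Evaluate sum_k c[k] * T_k(x) by the Clenshaw recurrence
    # b_k = c_k + 2*x*b_{k+1} - b_{k+2}, result = c_0 + x*b_1 - b_2.
    b1 = b2 = 0.0
    for ck in c[:0:-1]:
        b1, b2 = ck + 2 * x * b1 - b2, b1
    return c[0] + x * b1 - b2

def chebmulx(c):
    # Chebyshev coefficients of x * (series with coefficients c), using
    # x*T_0 = T_1 and x*T_k = (T_{k+1} + T_{k-1}) / 2 for k >= 1.
    d = np.zeros(len(c) + 1)
    d[1] = c[0]
    d[2:] += c[1:] / 2
    d[:len(c) - 1] += c[1:] / 2
    return d
```

Both agree with the reference routines in `numpy.polynomial.chebyshev` (`chebval`, `chebmulx`).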

@dlfivefifty (Member)

For reference there's an implementation in ApproxFun.jl

https://github.com/JuliaApproximation/ApproxFun.jl/blob/master/src/Extras/lanczos.jl

Not particularly fast though.

@stevengj (Member, Author) commented Dec 7, 2019

Nice, thanks, that will be useful for comparison! For this kind of application speed is not too important — you would only use it to precompute weights if you have either an extremely expensive integrand or are computing zillions of integrals.

(e.g. in our applications our integrand involves solving Maxwell's equations)

@stevengj (Member, Author) commented Dec 13, 2019

@dlfivefifty, do you have a reference for the simple idea of applying the Golub–Welsch 3-term-recurrence (Lanczos) algorithm in the basis of Chebyshev polynomials à la ApproxFun (which is what QuadGK now does as well)?

The methods I'm finding described in the literature seem to be way more complicated: https://www.sciencedirect.com/science/article/pii/S0022407300000261, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2674649/, https://link.springer.com/article/10.1007/BF02437506, https://dl.acm.org/citation.cfm?id=174605 (ORTHPOL). The ORTHPOL paper describes one algorithm that seems closely related, where you first calculate the "moments" of the weight function with Chebyshev polynomials, and then use these moments to build up the Jacobi matrix, but I don't see any advantage of this approach vs. just doing Golub–Welsch directly and representing the polynomials as Chebyshev series. Am I missing something?
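For reference, the recurrence being constructed in all of these schemes is the standard Stieltjes/Lanczos one, and nothing in it depends on how the polynomials are represented:

```latex
p_{k+1}(x) = (x - \alpha_k)\,p_k(x) - \beta_k\,p_{k-1}(x),
\qquad
\alpha_k = \frac{\langle x\,p_k,\,p_k\rangle_W}{\langle p_k,\,p_k\rangle_W},
\qquad
\beta_k = \frac{\langle p_k,\,p_k\rangle_W}{\langle p_{k-1},\,p_{k-1}\rangle_W},
```

where $\langle f, g\rangle_W = \int_a^b f(x)\,g(x)\,W(x)\,dx$. Representing each $p_k$ by its Chebyshev coefficients just makes the inner products computable stably, in contrast to the monomial-coefficient version.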

@dlfivefifty (Member)

I haven’t actually looked into it: I wrote that Lanczos code because Alan asked if it was possible. I suspect you are right about the trade-offs.

@dlfivefifty (Member)

PS one nice thing about the ApproxFun approach is that it automatically works with other types like BigFloat. For standard quadrature-based Lanczos one would need to calculate the quadrature rule using BigFloat, which normally involves computing eigenvalues of a SymTridiagonal. This is doable for BigFloat using GenericLinearAlgebra.jl, but I’m not sure it’s as general as what ApproxFun’s Lanczos can do.

@stevengj (Member, Author)

> one nice thing about the ApproxFun approach is it automatically works with other types like BigFloat

I don't follow you — the Lanczos procedure gives you the entries of the tridiagonal Jacobi matrix, but once you have that matrix what do you do? If you want the points and weights of the resulting weighted Gaussian quadrature rule you still need to solve the eigenproblem, no?

> This is doable for BigFloat using GenericLinearAlgebra.jl but I’m not sure it’s as general as what ApproxFun’s lanczos can do.

The QuadGK package already includes routines for solving the symmetric tridiagonal eigenvalue problem with BigFloat, and can already do the Lanczos quadrature to arbitrary precision (the quadgk function supports arbitrary precision).

(However, currently its BigFloat symmetric tridiagonal eigensolver routine is specialized for tridiagonal matrices with zero diagonals, as arise for symmetric weights, so I'll need to go back and generalize it slightly before we can use it to construct weighted Gaussian quadrature rules.)

@stevengj (Member, Author)

@alanedelman, see also the references I linked above. You mentioned that you had worked on Gaussian quadrature for arbitrary weight functions, with the Lanczos procedure executed numerically. The literature I'm finding on this either used numerically unstable methods or implemented much more complicated procedures — I'm not yet finding any publications that simply use the textbook Golub–Welsch algorithm but with polynomials represented in the Chebyshev basis or similar. If you have other work I should be aware of, please let me know.

@dlfivefifty (Member)

Sorry, I meant just calculating the recurrence coefficients. You’d still have the same problem of needing a general eigenvalue solver. Though according to @ajt60gaibb it’s faster to just do rootfinding, using the recurrence to evaluate, provided one has an accurate enough initial guess.

@stevengj (Member, Author) commented Dec 14, 2019

> Though according to @ajt60gaibb it’s faster to just do rootfinding using the recurrence to evaluate, provided one has an accurate enough initial guess

That's precisely what QuadGK already does, dating back to my initial implementation in 2013 — it obtains the initial guess from the double-precision eigenvalues, and then performs a BigFloat Newton iteration to polish the roots. This works because quadrature roots (for any order where you could conceivably use BigFloat arithmetic) are always well separated enough for double precision to give good initial guesses.
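In pure double precision the polishing step looks like the following (a Python sketch with an illustrative `polish` name; QuadGK's actual version runs the Newton iteration in BigFloat starting from Float64 eigenvalues):

```python
import numpy as np

def legendre_and_deriv(n, x):
    # Evaluate P_n and P_n' at the points x via the three-term recurrence
    # (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, then the identity
    # P_n'(x) = n (x P_n - P_{n-1}) / (x^2 - 1).
    p_prev, p = np.ones_like(x), x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p, n * (x * p - p_prev) / (x * x - 1)

def polish(n, x0, iters=5):
    # Newton-polish approximate Gauss-Legendre nodes: x <- x - P_n / P_n'.
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        p, dp = legendre_and_deriv(n, x)
        x -= p / dp
    return x
```

Starting from guesses perturbed at the 1e-6 level, a few Newton steps recover the true nodes to machine precision, since the nodes are well separated at these orders.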

@dlfivefifty (Member)

Cool! Probably could work for interval arithmetic too using interval Newton method

@ajt60gaibb

Yep, for general-weight Gauss quadrature, the fastest thing I've seen is to get some initial guesses (somehow) and then do Newton using the three-term recurrence. Typically, you need n in the hundreds of thousands before nodes in double precision are in danger of coalescing. If you do run into trouble, you can do everything in theta, where x = cos(theta). (This change of variables often removes the clustering near ±1.)
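The clustering, and why working in theta helps, is easy to see numerically; a small Python check:

```python
import numpy as np

# Gauss-Legendre nodes cluster like O(1/n^2) near x = +/-1, but the
# transformed angles theta = arccos(x) stay near-equispaced (~ pi/n),
# so neighboring nodes are much better separated in theta than in x.
n = 50
x, _ = np.polynomial.legendre.leggauss(n)   # nodes, increasing in (-1, 1)
theta = np.arccos(x)[::-1]                  # angles, increasing in (0, pi)
print("smallest gap in x:    ", np.diff(x).min())
print("smallest gap in theta:", np.diff(theta).min())
```

For n = 50 the smallest gap in theta is already an order of magnitude larger than the smallest gap in x, and the disparity grows with n.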
