add gauss function for general weight functions #41
Conversation
For the record, FastGaussQuadrature.jl was developed by @ajt60gaibb.
Note that the current "textbook" method in this PR is numerically unstable (for an arbitrary quadrature order) because it works with polynomials in coefficient (monomial) form. It is nice to have as a baseline and was fun to implement, but I should fix it to make this more practical; otherwise you can't go much beyond order 10. This weekend, I'll rewrite it to perform the Lanczos procedure on polynomials represented in the Chebyshev basis instead, which I think will fix the instability. This will also eliminate the dependency on Polynomials.jl: I only need to write a couple of 5-line functions to evaluate Chebyshev polynomials (by a Clenshaw recurrence) and multiply them by …
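For concreteness, the Clenshaw recurrence mentioned above really is only a few lines. Here is a NumPy sketch (Python for illustration, since the PR itself is Julia; `clenshaw_cheb` is a hypothetical name, not a QuadGK function):

```python
import numpy as np

def clenshaw_cheb(c, x):
    """Evaluate the Chebyshev series sum_k c[k]*T_k(x) by the Clenshaw recurrence."""
    b1 = b2 = 0.0
    for ck in c[:0:-1]:              # c[n], ..., c[1]
        b1, b2 = ck + 2*x*b1 - b2, b1
    return c[0] + x*b1 - b2

# sanity check against NumPy's reference evaluator
print(clenshaw_cheb([1.0, 2.0, 3.0], 0.5))  # 1 + 2*T_1(0.5) + 3*T_2(0.5) = 0.5
```

Clenshaw evaluates the series backwards through the recurrence, which is numerically stable, unlike expanding into monomial coefficients.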
For reference, there's an implementation in ApproxFun.jl: https://github.com/JuliaApproximation/ApproxFun.jl/blob/master/src/Extras/lanczos.jl It's not particularly fast, though.
Nice, thanks, that will be useful for comparison! For this kind of application, speed is not too important: you would only use it to precompute weights if you have either an extremely expensive integrand or are computing zillions of integrals. (E.g., in our applications the integrand involves solving Maxwell's equations.)
@dlfivefifty, do you have a reference for the simple idea of applying the Golub–Welsch 3-term recurrence (Lanczos) algorithm in the basis of Chebyshev polynomials à la ApproxFun (which is what QuadGK now does as well)? The methods I'm finding described in the literature seem to be way more complicated: https://www.sciencedirect.com/science/article/pii/S0022407300000261, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2674649/, https://link.springer.com/article/10.1007/BF02437506, https://dl.acm.org/citation.cfm?id=174605 (ORTHPOL). The ORTHPOL paper describes one closely related algorithm, in which you first calculate the "moments" of the weight function against Chebyshev polynomials and then use these moments to build up the Jacobi matrix, but I don't see any advantage of this approach over just doing Golub–Welsch directly with the polynomials represented as Chebyshev series. Am I missing something?
I haven't actually looked into it: I wrote that Lanczos code because Alan asked if it was possible. I suspect you are right about the trade-offs.
P.S. One nice thing about the ApproxFun approach is that it automatically works with other types, like …
I don't follow you: the Lanczos procedure gives you the entries of the tridiagonal Jacobi matrix, but once you have that matrix, what do you do? If you want the points and weights of the resulting weighted Gaussian quadrature rule, you still need to solve the eigenproblem, no?
The QuadGK package already includes routines for solving the symmetric tridiagonal eigenvalue problem with BigFloats, and it can already do the Lanczos quadrature to arbitrary precision. (However, its bigfloat symmetric tridiagonal eigensolver is currently specialized for tridiagonal matrices with zero diagonals, as arise for symmetric weights, so I'll need to go back and generalize it slightly before we can use it to construct weighted Gaussian quadrature rules.)
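For context on the Golub–Welsch step under discussion: the quadrature nodes are the eigenvalues of the tridiagonal Jacobi matrix, and the weights come from the squared first components of the normalized eigenvectors, scaled by the integral of the weight function. A minimal NumPy sketch for the Legendre case (illustrative only; `gauss_legendre` is a made-up name, not QuadGK's implementation):

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch: n-point Gauss-Legendre nodes/weights from the Jacobi matrix.
    For the Legendre weight w(x)=1 on [-1,1]: alpha_k = 0, beta_k = k^2/(4k^2-1),
    and mu0 = integral of w = 2."""
    k = np.arange(1, n)
    offdiag = k / np.sqrt(4.0 * k**2 - 1.0)     # sqrt(beta_k)
    J = np.diag(offdiag, -1) + np.diag(offdiag, 1)
    x, V = np.linalg.eigh(J)                    # eigenvalues are the nodes
    w = 2.0 * V[0, :]**2                        # weights from first eigenvector rows
    return x, w
```

The same recipe applies to any weight once the recurrence coefficients (the Jacobi matrix entries) are known; only those coefficients depend on the weight function.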
@alanedelman, see also the references I linked above. You mentioned that you had worked on Gaussian quadrature for arbitrary weight functions, with the Lanczos procedure executed numerically. The literature I'm finding on this either uses numerically unstable methods or implements much more complicated procedures; I'm not yet finding any publications that simply use the textbook Golub–Welsch algorithm but with the polynomials represented in the Chebyshev basis or similar. If you have other work I should be aware of, please let me know.
Sorry, I meant just calculating the recurrence coefficients. You'd still have the same problem of needing a general eigenvalue solver. Though, according to @ajt60gaibb, it's faster to just do rootfinding, using the recurrence to evaluate, provided one has an accurate enough initial guess.
That's precisely what QuadGK already does, dating back to my initial implementation in 2013: it obtains the initial guess from the double-precision eigenvalues and then performs a …
Cool! It could probably work for interval arithmetic too, using the interval Newton method.
Yep, for general-weight Gauss quadrature, the fastest thing I've seen is to get some initial guesses (somehow) and then do Newton using the three-term recurrence. Typically, you need to get …
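The Newton-polishing idea described in the last few comments can be sketched as follows for the Legendre case (Python for illustration; `legendre_and_deriv` and `polish` are hypothetical helper names, not QuadGK functions):

```python
import numpy as np

def legendre_and_deriv(n, x):
    """P_n(x) and P_n'(x) via the three-term recurrence, avoiding coefficient form.
    Uses the identity (x^2 - 1) P_n'(x) = n (x P_n(x) - P_{n-1}(x))."""
    p0, p1 = 1.0, x
    for k in range(1, n):
        p0, p1 = p1, ((2*k + 1) * x * p1 - k * p0) / (k + 1)
    dp = n * (x * p1 - p0) / (x**2 - 1.0)   # valid at interior nodes (|x| < 1)
    return p1, dp

def polish(n, x0, iters=3):
    """Newton-polish an approximate Gauss-Legendre node x0
    (e.g. taken from a double-precision eigensolve)."""
    x = x0
    for _ in range(iters):
        p, dp = legendre_and_deriv(n, x)
        x -= p / dp
    return x
```

Since Newton converges quadratically near a simple root, a double-precision initial guess reaches extended precision in very few iterations; the same structure carries over to higher-precision or interval arithmetic.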
This PR generalizes the `QuadGK.gauss` function to accept a function parameter `W` that specifies an arbitrary weight function `W(x)`, as well as arbitrary endpoints `(a,b)`. This is useful for precomputing quadrature weights for integrating functions with known singularities and/or infinite intervals.
The FastGaussQuadrature package by @dlfivefifty computes Gaussian quadrature weights much more quickly for all of the "standard" weight functions, but it is also nice to have a Julia routine that works for arbitrary weights.
(E.g., in my field it is common to integrate functions weighted by the solar spectrum, an extremely messy weight function derived from tabulated data, and this lets me obtain a quadrature rule for it.)
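To illustrate the overall approach for an arbitrary weight, here is a minimal NumPy sketch of a discretized Stieltjes procedure (computing the three-term recurrence coefficients from discrete inner products) followed by Golub–Welsch. This is a hypothetical helper for illustration, not the actual QuadGK implementation, and it assumes `W` is smooth enough to be resolved by the fine auxiliary grid:

```python
import numpy as np

def gauss_weight(W, n, a=-1.0, b=1.0, m=400):
    """n-point Gaussian quadrature rule for integrals of f(x)*W(x) over (a,b)."""
    # Fine Gauss-Legendre grid approximating integrals against W(x) dx on (a,b).
    t, u = np.polynomial.legendre.leggauss(m)
    t = 0.5 * (b - a) * t + 0.5 * (b + a)
    u = 0.5 * (b - a) * u * W(t)
    # Stieltjes procedure: recurrence coefficients of the monic W-orthogonal
    # polynomials pi_{k+1} = (t - alpha_k) pi_k - beta_k pi_{k-1}.
    alpha, beta = np.zeros(n), np.zeros(n)
    p_prev, p = np.zeros(m), np.ones(m)
    nrm = u @ p**2                    # (pi_0, pi_0) = mu0 = integral of W
    beta[0] = nrm
    for k in range(n):
        alpha[k] = (u @ (t * p**2)) / nrm
        p_next = (t - alpha[k]) * p - (beta[k] if k > 0 else 0.0) * p_prev
        if k + 1 < n:
            nrm_next = u @ p_next**2
            beta[k + 1] = nrm_next / nrm
            p_prev, p, nrm = p, p_next, nrm_next
    # Golub-Welsch on the resulting Jacobi matrix.
    J = np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1) + np.diag(np.sqrt(beta[1:]), -1)
    x, V = np.linalg.eigh(J)
    w = beta[0] * V[0, :]**2
    return x, w
```

The resulting `n`-point rule is exact (up to the discretization of the inner products) for polynomials of degree up to `2n - 1` against the weight `W`, which is the point of precomputing such a rule for a messy tabulated weight.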