Gaussian quadrature #141

Merged: 76 commits, Apr 19, 2023

Commits
317f2ab
basic version of gauss-legendre
elastufka Apr 7, 2022
ba7d1e7
fstrings for my sanity
elastufka Apr 7, 2022
4bc447b
fstrings for my sanity
elastufka Apr 7, 2022
345da2c
weights and points multidimensional
elastufka Apr 7, 2022
170ae2a
transform xi,wi correctly
elastufka Apr 8, 2022
1d27e08
basic version of gauss-legendre
elastufka Apr 7, 2022
4547646
fstrings for my sanity
elastufka Apr 7, 2022
2fa40b0
fstrings for my sanity
elastufka Apr 7, 2022
399234c
weights and points multidimensional
elastufka Apr 7, 2022
0c8aef3
transform xi,wi correctly
elastufka Apr 8, 2022
7bf8fa0
let function to integrate accept args, c.f. scipy.nquad
elastufka Apr 8, 2022
d630a4f
Merge branch 'main' of https://github.com/elastufka/torchquad into ga…
elastufka Apr 8, 2022
83da697
let function to integrate accept args, c.f. scipy.nquad
elastufka Apr 8, 2022
b5cca10
any edits
elastufka Apr 13, 2022
fc37852
add numpy import
elastufka Apr 13, 2022
2d79c48
autoray
elastufka Apr 13, 2022
444aaef
add Gaussian quadrature methods
elastufka Apr 21, 2022
9e196d4
fix import
elastufka Apr 22, 2022
c08ddfa
change anp.inf to numpy.inf
elastufka Apr 25, 2022
25b0747
fix interval transformation and clean up
elastufka Apr 26, 2022
adae7d7
make sure tensors are on same device
elastufka Apr 26, 2022
24357bc
make sure tensors are on same devicepart 2
elastufka Apr 26, 2022
5c458a4
make sure tensors are on same devicepart 3
elastufka Apr 26, 2022
58d7de7
make sure tensors are on same devicepart 4
elastufka Apr 26, 2022
684a1b5
make sure tensors are on same devicepart 5
elastufka Apr 26, 2022
3ad979c
add special import
elastufka May 12, 2022
9a94a55
Merge remote-tracking branch 'esa/develop' into gaussian-quadrature
elastufka May 16, 2022
6d42ab8
add tests to /tests
elastufka May 16, 2022
3fd4a00
run autopep8, add docstring
elastufka May 30, 2022
50f3785
Merge branch 'multi_dim_integrand' into gaussian_quadrature
ilan-gold Jan 13, 2023
868c8f2
(feat): cache for roots.
ilan-gold Jan 13, 2023
5073116
(feat): refactor out grid integration procedure
ilan-gold Jan 13, 2023
5868263
(feat): gaussian integration refactored, some tests passing
ilan-gold Jan 13, 2023
2d4adf4
(fix): scaling constant
ilan-gold Jan 13, 2023
f0b0859
(chore): higher dim integrals testing
ilan-gold Jan 13, 2023
44f063b
(feat): weights correct for multi-dim integrands.
ilan-gold Jan 14, 2023
96f2af3
(fix): correct number of argument.
ilan-gold Jan 14, 2023
0c21d1e
(fix): remove non-legendre tests.
ilan-gold Jan 14, 2023
1bf58fb
Merge branch 'multi_dim_integrand' into gaussian_quadrature
ilan-gold Jan 14, 2023
1d2a53c
(fix): import GaussLegendre
ilan-gold Jan 14, 2023
c6a6a85
(fix): ensure grid and weights are correct type
ilan-gold Jan 14, 2023
b2ac2f3
(style): docstrings.
ilan-gold Jan 14, 2023
9a33275
(fix): default `grid_func`
ilan-gold Jan 14, 2023
f619485
(fix): `_cached_poi...` returns tuple, not ndarray
ilan-gold Jan 14, 2023
69c5354
(fix): propagate `backend` correctly.
ilan-gold Jan 14, 2023
5ab18ce
Merge branch 'develop' into gaussian-quadrature
ilan-gold Jan 16, 2023
738d18f
(chore): export base types
ilan-gold Jan 16, 2023
ce20c33
(feat): add jit for gausssian
ilan-gold Jan 17, 2023
40c5e58
(feat): backward diff
ilan-gold Jan 19, 2023
86c9c20
(fix): env issue
ilan-gold Jan 19, 2023
76c4a24
Fixed tests badge
gomezzz Mar 1, 2023
76fc60a
Merge branch 'main' into gaussian_quadrature
ilan-gold Mar 3, 2023
5e2668d
Merge remote-tracking branch 'trunk/develop' into gaussian_quadrature
ilan-gold Mar 3, 2023
215fc5f
(chore): cleanup
ilan-gold Mar 3, 2023
edfd308
(fix): `intergal` -> `integral`
ilan-gold Mar 3, 2023
e3827fa
(chore): add tutorial
ilan-gold Mar 3, 2023
3ff0d2b
(fix): change to `argnums` to work around decorator
ilan-gold Mar 6, 2023
09ecbd2
Merge branch 'gaussian-quadrature' of https://github.com/elastufka/to…
ilan-gold Mar 6, 2023
644d420
Merge branch 'develop' into gaussian_quadrature
ilan-gold Mar 8, 2023
0b9def6
(fix): add fix from other PR
ilan-gold Mar 8, 2023
c6f06de
,erge
ilan-gold Mar 8, 2023
88bd469
(feat): add (broken) tests for gauss jit
ilan-gold Mar 8, 2023
308a990
(chore): remove unused import
ilan-gold Mar 8, 2023
0b1a12a
(fix): use `item` for `N` when `jit` with `jax`
ilan-gold Mar 16, 2023
d732ca4
(fix): `domain` for jit gauss `calculate_result`
ilan-gold Mar 16, 2023
93ba243
(chore): `black`
ilan-gold Mar 16, 2023
c31ccae
(chore): erroneous diff
ilan-gold Mar 17, 2023
24b94b8
(chore): remove erroneous print
ilan-gold Mar 20, 2023
d44e42d
(fix): correct comment
ilan-gold Mar 20, 2023
b8c18b9
(fix): clean up gaussian tests
ilan-gold Mar 20, 2023
1582735
(chore): add comments.
ilan-gold Mar 20, 2023
491cce3
(chore): formatting
ilan-gold Mar 20, 2023
b3d442e
(fix): error of 1D integral
ilan-gold Mar 20, 2023
270de4d
Merge remote-tracking branch 'trunk/fixing-coverage-workflow' into ga…
ilan-gold Mar 27, 2023
7383d5d
(fix): increase bounds.
ilan-gold Apr 18, 2023
31441c6
Merge branch 'develop' into gaussian_quadrature
ilan-gold Apr 19, 2023
36 changes: 36 additions & 0 deletions docs/source/tutorial.rst
@@ -756,4 +756,40 @@ Now let's see how to do this a bit more simply, and in a way that provides signf

torch.all(torch.isclose(result_vectorized, result)) # True!

Custom Integrators
------------------

It is of course possible to extend our provided Integrators, perhaps for a special class of functions or for a new algorithm.

.. code:: ipython3

    import numpy
    import scipy.special
    from autoray import numpy as anp
    from torchquad.integration.gaussian import Gaussian

    class GaussHermite(Gaussian):
        """Gauss-Hermite quadrature rule in torch, for integrals of the form
        :math:`\\int_{-\\infty}^{+\\infty} e^{-x^{2}} f(x) dx`. It will correctly integrate
        polynomials of degree :math:`2n - 1` or less over the interval
        :math:`(-\\infty, \\infty)` with weight function :math:`w(x) = e^{-x^2}`.
        See https://en.wikipedia.org/wiki/Gauss%E2%80%93Hermite_quadrature
        """

        def __init__(self):
            super().__init__()
            self.name = "Gauss-Hermite"
            self.root_fn = scipy.special.roots_hermite
            self.default_integration_domain = [[-1 * numpy.inf, numpy.inf]]
            self.wrapper_func = None

        @staticmethod
        def _apply_composite_rule(cur_dim_areas, dim, hs, domain):
            """Apply the "composite" rule for Gaussian integrals.
            cur_dim_areas will contain the areas per dimension.
            Unlike on a finite domain, no rescaling by the interval width is needed here.
            """
            # We collapse dimension by dimension
            for _ in range(dim):
                cur_dim_areas = anp.sum(cur_dim_areas, axis=len(cur_dim_areas.shape) - 1)
            return cur_dim_areas

    gh = GaussHermite()
    integral = gh.integrate(lambda x: 1 - x, dim=1, N=200)  # integral from -inf to inf of exp(-x**2) * (1 - x)
    # Computed integral was 1.7724538509055168.
    # Analytic result: sqrt(pi) ≈ 1.772453850905516



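The same pattern extends to other classical rules. As a further, hypothetical illustration (this class is not part of torchquad or this PR; `scipy.special.roots_laguerre` supplies the nodes and weights), a Gauss-Laguerre rule for integrals of the form :math:`\int_{0}^{+\infty} e^{-x} f(x) dx` could look like:

.. code:: ipython3

    import numpy
    import scipy.special
    from autoray import numpy as anp
    from torchquad.integration.gaussian import Gaussian

    class GaussLaguerre(Gaussian):
        """Hypothetical Gauss-Laguerre rule for :math:`\\int_{0}^{+\\infty} e^{-x} f(x) dx`."""

        def __init__(self):
            super().__init__()
            self.name = "Gauss-Laguerre"
            self.root_fn = scipy.special.roots_laguerre
            self.default_integration_domain = [[0, numpy.inf]]
            self.wrapper_func = None

        @staticmethod
        def _apply_composite_rule(cur_dim_areas, dim, hs, domain):
            # As for Gauss-Hermite above: collapse dimensions by summing,
            # with no rescaling on the fixed semi-infinite interval.
            for _ in range(dim):
                cur_dim_areas = anp.sum(cur_dim_areas, axis=len(cur_dim_areas.shape) - 1)
            return cur_dim_areas

    gla = GaussLaguerre()
    integral = gla.integrate(lambda x: anp.sin(x), dim=1, N=100)
    # Analytic result: integral of exp(-x) * sin(x) over [0, inf) is 0.5
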
6 changes: 6 additions & 0 deletions torchquad/__init__.py
@@ -9,6 +9,9 @@
from .integration.simpson import Simpson
from .integration.boole import Boole
from .integration.vegas import VEGAS
from .integration.gaussian import GaussLegendre
from .integration.grid_integrator import GridIntegrator
from .integration.base_integrator import BaseIntegrator

from .integration.rng import RNG

@@ -22,12 +25,15 @@
from .utils.deployment_test import _deployment_test

__all__ = [
"GridIntegrator",
Collaborator: Do we want to expose these to the users? 🤔

Or is this with the idea of them also defining their own integrators based on them? Then it could be nice but we might want to either give an example or at least mention them in the docs 🤔 (if you prefer we can create a new issue for that)

Collaborator: I do want to expose these. I use them for a custom integrator. I can add something in the docs.
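For context, a minimal sketch of what exposing these names enables (the imports match this PR's `__init__.py`; the `BaseIntegrator` relationship is an assumption, since `GridIntegrator`'s own base class is not visible in this diff):

```python
from torchquad import BaseIntegrator, GridIntegrator, GaussLegendre

# Users can now build custom integrators on the exported base classes;
# in this PR, GaussLegendre itself derives from GridIntegrator.
print(issubclass(GaussLegendre, GridIntegrator))   # True (per this diff)
print(issubclass(GridIntegrator, BaseIntegrator))  # assumed True
```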

"BaseIntegrator",
"IntegrationGrid",
"MonteCarlo",
"Trapezoid",
"Simpson",
"Boole",
"VEGAS",
"GaussLegendre",
"RNG",
"plot_convergence",
"plot_runtime",
33 changes: 27 additions & 6 deletions torchquad/integration/base_integrator.py
@@ -29,30 +29,41 @@ def integrate(self):
NotImplementedError("This is an abstract base class. Should not be called.")
)

def _eval(self, points):
def _eval(self, points, weights=None, args=None):
"""Call evaluate_integrand to evaluate self._fn function at the passed points and update self._nr_of_evals

Args:
points (backend tensor): Integration points
weights (backend tensor, optional): Integration weights. Defaults to None.
args (list or tuple, optional): Any arguments required by the function. Defaults to None.
"""
result, num_points = self.evaluate_integrand(self._fn, points)
result, num_points = self.evaluate_integrand(
self._fn, points, weights=weights, args=args
)
self._nr_of_fevals += num_points
return result

@staticmethod
def evaluate_integrand(fn, points):
def evaluate_integrand(fn, points, weights=None, args=None):
"""Evaluate the integrand function at the passed points

Args:
fn (function): Integrand function
points (backend tensor): Integration points
weights (backend tensor, optional): Integration weights. Defaults to None.
args (list or tuple, optional): Any arguments required by the function. Defaults to None.

Returns:
backend tensor: Integrand function output
int: Number of evaluated points
"""
num_points = points.shape[0]
result = fn(points)

if args is None:
args = ()

result = fn(points, *args)

if infer_backend(result) != infer_backend(points):
warnings.warn(
"The passed function's return value has a different numerical backend than the passed points. Will try to convert. Note that this may be slow as it results in memory transfers between CPU and GPU, if torchquad uses the GPU."
@@ -67,17 +78,27 @@ def evaluate_integrand(fn, points):
f"where first dimension matches length of passed elements. "
)

if weights is not None:
if (
len(result.shape) > 1
): # if the integrand is multi-dimensional, we need to reshape/repeat weights so they can be broadcast in the *=
integrand_shape = anp.array(
result.shape[1:], like=infer_backend(points)
)
weights = anp.repeat(
anp.expand_dims(weights, axis=1), anp.prod(integrand_shape)
).reshape((weights.shape[0], *(integrand_shape)))
result *= weights

return result, num_points
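
A standalone numpy sketch of the weight broadcasting above, with illustrative shapes (not part of the PR):

```python
import numpy as np

n_points = 5
result = np.ones((n_points, 2, 3))      # stand-in for a multi-dimensional integrand output
weights = np.arange(1.0, n_points + 1)  # one quadrature weight per integration point

# Mirror the reshape/repeat in evaluate_integrand: (5,) -> (5, 2, 3)
integrand_shape = np.array(result.shape[1:])
w = np.repeat(np.expand_dims(weights, axis=1), np.prod(integrand_shape)).reshape(
    (weights.shape[0], *integrand_shape)
)
result *= w
assert np.allclose(result[3], 4.0)  # every component of point 3 scaled by its weight
```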

@staticmethod
def _check_inputs(dim=None, N=None, integration_domain=None):
"""Used to check input validity

Args:
dim (int, optional): Dimensionality of function to integrate. Defaults to None.
N (int, optional): Total number of integration points. Defaults to None.
integration_domain (list or backend tensor, optional): Integration domain, e.g. [[0,1],[1,2]]. Defaults to None.

Raises:
ValueError: if inputs are not compatible with each other.
"""
2 changes: 1 addition & 1 deletion torchquad/integration/boole.py
@@ -28,7 +28,7 @@ def integrate(self, fn, dim, N=None, integration_domain=None, backend=None):
return super().integrate(fn, dim, N, integration_domain, backend)

@staticmethod
def _apply_composite_rule(cur_dim_areas, dim, hs):
def _apply_composite_rule(cur_dim_areas, dim, hs, domain):
"""Apply composite Boole quadrature.
cur_dim_areas will contain the areas per dimension
"""
152 changes: 152 additions & 0 deletions torchquad/integration/gaussian.py
@@ -0,0 +1,152 @@
import numpy
from autoray import numpy as anp
from .grid_integrator import GridIntegrator


class Gaussian(GridIntegrator):
"""Gaussian quadrature methods inherit from this. Default behaviour is Gauss-Legendre quadrature on [-1,1]."""

def __init__(self):
super().__init__()
self.name = "Gauss-Legendre"
self.root_fn = numpy.polynomial.legendre.leggauss
self.root_args = ()
self.default_integration_domain = [[-1, 1]]
self.transform_interval = True
self._cache = {}

def integrate(self, fn, dim, N=8, integration_domain=None, backend=None):
"""Integrates the passed function on the passed domain using Simpson's rule.

Args:
fn (func): The function to integrate over.
dim (int): Dimensionality of the integration domain.
N (int, optional): Total number of sample points to use for the integration. Defaults to 8.
integration_domain (list or backend tensor, optional): Integration domain, e.g. [[-1,1],[0,1]]. Defaults to [-1,1]^dim. It also determines the numerical backend if possible.
backend (string, optional): Numerical backend. This argument is ignored if the backend can be inferred from integration_domain. Defaults to the backend from the latest call to set_up_backend or "torch" for backwards compatibility.

Returns:
backend-specific number: Integral value
"""
return super().integrate(fn, dim, N, integration_domain, backend)
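
As a usage sketch (assuming a backend has been set up, e.g. via `torchquad.set_up_backend("torch")`), the base class already performs Gauss-Legendre on the default domain:

```python
from torchquad.integration.gaussian import Gaussian

g = Gaussian()  # default rule: Gauss-Legendre on [-1, 1]
# The integral of x^2 over [-1, 1] is 2/3; 8 nodes integrate a degree-2
# polynomial exactly, so the result should match to machine precision.
result = g.integrate(lambda x: x**2, dim=1, N=8)
```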

def _weights(self, N, dim, backend, requires_grad=False):
"""return the weights, broadcast across the dimensions, generated from the polynomial of choice

Args:
N (int): number of nodes
dim (int): number of dimensions
backend (string): which backend array to return

Returns:
backend tensor: the weights
"""
weights = anp.array(self._cached_points_and_weights(N)[1], like=backend)
if backend == "torch":
weights.requires_grad = requires_grad
return anp.prod(
anp.array(
anp.stack(
list(anp.meshgrid(*([weights] * dim))), like=backend, dim=0
)
),
axis=0,
).ravel()
else:
return anp.prod(
anp.meshgrid(*([weights] * dim), like=backend), axis=0
).ravel()
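
A standalone numpy sketch of the tensor-product weights built here (numpy only, no backend logic):

```python
import numpy as np

_, w = np.polynomial.legendre.leggauss(3)  # 3-node Gauss-Legendre weights; they sum to 2

# In dim=2 the weight of grid point (i, j) is w_i * w_j, raveled to shape (9,)
# so it lines up with the flattened integration grid.
w2d = np.prod(np.meshgrid(w, w), axis=0).ravel()
assert np.isclose(w2d.sum(), 4.0)  # (sum of w)^2 = area of [-1, 1]^2
```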

def _roots(self, N, backend, requires_grad=False):
"""return the roots generated from the polynomial of choice

Args:
N (int): number of nodes
backend (string): which backend array to return

Returns:
backend tensor: the roots
"""
roots = anp.array(self._cached_points_and_weights(N)[0], like=backend)
if requires_grad:
roots.requires_grad = True
return roots

@property
def _grid_func(self):
"""
Function for generating a grid to be integrated over, i.e. the polynomial roots, resized to the domain.
"""

def f(a, b, N, requires_grad, backend=None):
return self._resize_roots(a, b, self._roots(N, backend, requires_grad))

return f

def _resize_roots(self, a, b, roots): # default: no rescaling; subclasses scale from [-1,1] to [a,b]
"""resize the roots based on domain of [a,b]

Args:
a (backend tensor): lower bound
b (backend tensor): upper bound
roots (backend tensor): polynomial nodes

Returns:
backend tensor: rescaled roots
"""
return roots

# credit for the idea https://github.com/scipy/scipy/blob/dde50595862a4f9cede24b5d1c86935c30f1f88a/scipy/integrate/_quadrature.py#L72
def _cached_points_and_weights(self, N):
"""wrap the calls to get weights/roots in a cache

Args:
N (int): number of nodes to return

Returns:
tuple: nodes and weights
"""
root_args = (N, *self.root_args)
if not isinstance(N, int):
if hasattr(N, "item"):
root_args = (N.item(), *self.root_args)
else:
raise NotImplementedError(
f"N {N} is not an int and lacks an `item` method"
)
if root_args in self._cache:
return self._cache[root_args]
self._cache[root_args] = self.root_fn(*root_args)
return self._cache[root_args]
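
A quick sketch of the cache in action (assuming the default Gauss-Legendre root function):

```python
import torchquad

gl = torchquad.GaussLegendre()
r1 = gl._cached_points_and_weights(50)
r2 = gl._cached_points_and_weights(50)
# The second call returns the cached (roots, weights) tuple
# instead of recomputing the Legendre roots.
assert r1 is r2
```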

@staticmethod
def _apply_composite_rule(cur_dim_areas, dim, hs, domain):
"""Apply "composite" rule for gaussian integrals

cur_dim_areas will contain the areas per dimension
"""
# We collapse dimension by dimension
for cur_dim in range(dim):
cur_dim_areas = (
0.5
* (domain[cur_dim][1] - domain[cur_dim][0])
* anp.sum(cur_dim_areas, axis=len(cur_dim_areas.shape) - 1)
)
return cur_dim_areas
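
For reference, the `0.5 * (domain[cur_dim][1] - domain[cur_dim][0])` factor is the Jacobian of the affine map implemented in `GaussLegendre._resize_roots` below:

```latex
% Substituting x = \frac{b-a}{2}\,t + \frac{a+b}{2}, so dx = \frac{b-a}{2}\,dt:
\int_a^b f(x)\,dx
  = \frac{b-a}{2} \int_{-1}^{1} f\!\left(\frac{b-a}{2}\,t + \frac{a+b}{2}\right) dt
  \approx \frac{b-a}{2} \sum_{i=1}^{N} w_i\, f\!\left(\frac{b-a}{2}\,t_i + \frac{a+b}{2}\right).
```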


class GaussLegendre(Gaussian):
"""Gauss Legendre quadrature rule in torch. See https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss%E2%80%93Legendre_quadrature.

Examples
--------
>>> gl = torchquad.GaussLegendre()
>>> integral = gl.integrate(lambda x: np.sin(x), dim=1, N=101, integration_domain=[[0, 5]])  # integral from 0 to 5 of np.sin(x)
|TQ-INFO| Computed integral was 0.7163378000259399  # analytic result = 1 - np.cos(5)"""

def __init__(self):
super().__init__()

def _resize_roots(self, a, b, roots): # scale from [-1,1] to [a,b]
return ((b - a) / 2) * roots + ((a + b) / 2)
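
A runnable version of the docstring example, checked against the analytic antiderivative (a sketch; `anp.sin` is used so the integrand dispatches to the active backend):

```python
import math
from autoray import numpy as anp
import torchquad

gl = torchquad.GaussLegendre()
integral = gl.integrate(
    lambda x: anp.sin(x), dim=1, N=101, integration_domain=[[0, 5]]
)
# Analytic result: 1 - cos(5) ≈ 0.7163378
print(float(integral), 1 - math.cos(5))
```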