notes from 2022-02-01 meeting with Khachik #70

ghost opened this issue Feb 8, 2022 · 0 comments

notes

sparse quadrature

can different sampling methods per variable improve the accuracy / efficiency of the surrogate?

  • sparse quadrature
    • build with projection instead of regression
    • “quadrature usually doesn’t work for physical models because of noise or whatever”

here's an example of a sparse sampling method that the authors call "Smolyak sparse grid from the extrema of Chebyshev polynomials (Clenshaw-Curtis quadrature)"
[figure: Smolyak sparse grid from the extrema of Chebyshev polynomials (Clenshaw-Curtis quadrature)]
from https://www.sciencedirect.com/science/article/pii/S0098135419310026
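
as a rough sketch of the 1-D building block (not the paper's or UQTk's implementation; the function name and level convention are assumptions here), the nested Clenshaw-Curtis nodes can be generated like this:

```python
# Minimal sketch: nested Clenshaw-Curtis nodes, i.e. the extrema of
# Chebyshev polynomials. Level l uses n = 2**l + 1 points, and each
# level's node set contains the previous level's -- the nesting a
# Smolyak sparse grid exploits to reuse points across levels.
import numpy as np

def clenshaw_curtis_nodes(level: int) -> np.ndarray:
    """Chebyshev extrema on [-1, 1] for a given sparse-grid level."""
    if level == 0:
        return np.array([0.0])
    n = 2**level + 1
    j = np.arange(n)
    return np.cos(np.pi * j / (n - 1))

for level in range(4):
    pts = clenshaw_curtis_nodes(level)
    print(level, len(pts), np.round(pts, 3))
```

because the node sets are nested, the sparse grid grows much more slowly with dimension than the full tensor product of the same 1-D rules.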

methods and constraints

| method                      | regularization | penalty parameter? |
| --------------------------- | -------------- | ------------------ |
| Bayesian Compressed Sensing | L1             | ✔️                 |
| Lasso-Type Regularization   | L1             | ✔️                 |
| Bayesian Ridge              | L2             | ✔️                 |

| regularization | constraint |
| -------------- | ---------- |
| L1             | sparsity   |
| L2             | smoothness |
  • UQTk uses “Bayesian compressed sensing”
  • the “penalty parameter” needs tuning to get good results (see the sketch below)
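
a minimal sketch of the penalty-parameter tuning point, using scikit-learn's Lasso (L1) and BayesianRidge (L2) as stand-ins for the methods in the table (UQTk's Bayesian compressed sensing is not exercised here, and the data and alpha values are made up):

```python
# Illustrative only: synthetic data with a sparse ground truth.
import numpy as np
from sklearn.linear_model import Lasso, BayesianRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))          # 50 samples, 20 basis terms
w_true = np.zeros(20)
w_true[:3] = [1.5, -2.0, 0.7]          # only 3 terms actually matter
y = X @ w_true + 0.01 * rng.normal(size=50)

# The L1 penalty parameter (alpha) controls sparsity: too small keeps
# spurious terms, too large zeroes out real ones -- hence the tuning.
for alpha in (1e-4, 1e-2, 1.0):
    lasso = Lasso(alpha=alpha).fit(X, y)
    print(f"alpha={alpha:g}: {np.count_nonzero(lasso.coef_)} nonzero terms")

# BayesianRidge infers its L2 penalty (lambda_) from the data instead.
ridge = BayesianRidge().fit(X, y)
print("inferred L2 precision lambda_:", ridge.lambda_)
```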

polynomial order / count determination

the number of expanded polynomials M (excluding the zeroth-order term) can be found from

M + 1 = (N + P)! / (N! * P!)

where N is the number of variables and P is the highest polynomial order.

This is equation 5 from https://www.sciencedirect.com/science/article/pii/S0098135419310026.

Therefore, a 3rd-order polynomial in 4 variables expands to M = 34 polynomials plus the zeroth-order term, i.e. 35 basis terms in total.
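
a quick sanity check of that count, using Python's math.comb (pce_terms is just an illustrative helper name):

```python
# Check M + 1 = (N + P)! / (N! * P!) for N = 4 variables, order P = 3.
from math import comb

def pce_terms(n_vars: int, order: int) -> int:
    """Total number of basis terms, including the zeroth-order one."""
    return comb(n_vars + order, order)

print(pce_terms(4, 3))   # 35 = M + 1, so M = 34
```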

as a rule of thumb, with an L2 constraint the number of data points D should be approx

D ~= 10 * M

However, Lasso-Type Regularization and Bayesian Compressed Sensing can both fit a sparse subset of the M polynomials using substantially fewer than 10 * M data points.
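
plugging the example above into the rule of thumb (illustrative arithmetic only):

```python
# Rule-of-thumb sample count for the 4-variable, 3rd-order example.
from math import comb

M = comb(4 + 3, 3) - 1     # 34 non-constant polynomials
print(10 * M)              # ~340 data points for an L2 fit
```

how far below 10 * M an L1 method can go depends on how sparse the true coefficient vector actually is; there is no single formula for it.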

mesh perturbation

  • mesh size is a “hyperparameter”
    • numerical and not physical
    • Khachik is unsure if we can use this methodology to get sensitivity

next steps
