
[Bug]: Interpolant getting nans as input #4877

Open
brosaplanella opened this issue Feb 26, 2025 · 9 comments
Labels
bug Something isn't working

Comments

@brosaplanella
Member

PyBaMM Version

develop

Python Version

3.11

Describe the bug

When defining the diffusivity as an interpolant, the stoichiometry is passed in as NaN. As a result, the function gets evaluated at the midpoint of the interpolation range rather than at the actual stoichiometry value.

Steps to Reproduce

  • Set a debug breakpoint at line 270 of interpolant.py (i.e. inside _function_evaluate)
  • Run minimal_example_of_lookup_tables.py
  • Check that evaluated_children contains NaNs in its second entry (the stoichiometry). I believe the first evaluation is for shape checking only, so NaNs are fine there, but the subsequent evaluations happen in the solver.
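The check in the last step can be mimicked in isolation. This NumPy sketch (with illustrative array contents, not the real evaluated children) shows the symptom being described:

```python
import numpy as np

# Hypothetical stand-ins for evaluated_children: (temperature, stoichiometry).
# In the real code these come from evaluating the interpolant's children.
evaluated_children = [np.array([[298.15]]), np.array([[np.nan]])]

# The symptom from the last step: the second child (stoichiometry) is all NaN.
has_nan = [bool(np.isnan(child).any()) for child in evaluated_children]
print(has_nan)  # [False, True]
```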

Relevant log output

@brosaplanella brosaplanella added the bug Something isn't working label Feb 26, 2025
@DrSOKane
Contributor

Does this happen for any quantity defined as an interpolant, or only diffusivity?

@ahdezm

ahdezm commented Feb 27, 2025

Running the same minimal_example_of_lookup_tables.py calculation but interpolating the "Negative electrode OCP [V]" or "Electrolyte conductivity [S.m-1]" parameters gives similar results, with the concentration or stoichiometry being NaNs.

I was the person who initially noticed this bug and don't know the PyBaMM internals very well, but it seems like some of the parameters that sto depends on (e.g. "Negative particle concentration [mol.m-3]") are not numbers for some reason. Setting sto = pybamm.full_like([sto], 0.5) in D_s_n before creating the pybamm.Interpolant makes the issue disappear.

I have also noticed that running minimal_interp3d_example.py with an additional print statement for evaluated_children in line 270 of interpolant.py also gives NaNs on every variable, which I'm not sure how to explain.

@ahdezm

ahdezm commented Feb 27, 2025

I'm not entirely sure if this is expected, but the NaNs seem to be coming from the y StateVector, which is always all-NaN when an Interpolant is evaluated. This can be confirmed by adding the following snippet before line 130 in functions.py (inside the evaluate function) and running the example:

if type(self).__name__ == "Interpolant":
    print("all_nan?", np.isnan(y).all())

@martinjrobins
Contributor

If _function_evaluate() is being called, it is not part of the solve. Before the solve happens, the expression tree gets converted to a CasADi function, and that function is the only thing called as part of a solve, not the Python code. When an expression tree is evaluated to determine the shape of a symbol, NaNs are used for the state vector values, so this might be what you are seeing.
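For context, here is a minimal NumPy sketch (not PyBaMM's actual code) of the shape-evaluation trick described above: filling the state vector with NaNs still produces an output of the correct shape, even though every value is NaN:

```python
import numpy as np

# NaN placeholder standing in for the state vector during shape evaluation.
y = np.full((4, 1), np.nan)

# Any arithmetic expression on y has a well-defined shape...
out = 3.0 * y + 1.0
print(out.shape)  # (4, 1)

# ...but meaningless values, so all-NaN results at this stage are expected.
print(np.isnan(out).all())  # True
```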

Do you have a test case where the output of a simulation is affected?

@ahdezm

ahdezm commented Feb 27, 2025

I assume that at some point in this conversion the pybamm.Interpolant should be evaluated with non-NaN values in all children (see line 316 in interpolant.py) so that the resulting interpolated values can be used in the solve, but this doesn't seem to be happening.

If I understand correctly, any interpolated function which depends on the state (which will always be NaN during conversion) will always be evaluated at the same point (the midpoint), giving incorrect results.

@martinjrobins
Contributor

No, the solver will never evaluate any Python code; by that time the interpolant has been converted to a CasADi function. The only time the code in pybamm.Interpolant will be called is before or after the solve (unless you are using the SciPy solver with convert_to_format="python").

@ahdezm

ahdezm commented Feb 28, 2025

Thank you for explaining, that makes sense - the conversion for interpolants seems to happen in convert_to_casadi.py L143.

I suspect that this issue can be closed now, but just for my peace of mind: is there an easy way of extracting the interpolation inputs and outputs for diffusivity from the CasADi solution (I'm not sure how to match the input temperature and concentration with parts of the Solution object), or should I just run the calculation with the SciPy solver?

@martinjrobins
Contributor

Just add the inputs and outputs as additional variables of the model (i.e. add them to the model.variables attribute).
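A toy sketch of that suggestion, with plain callables standing in for PyBaMM expressions (the names and the dict here are hypothetical; in PyBaMM you would assign symbolic expressions to model.variables and read them back from the Solution object):

```python
import numpy as np

# Stand-ins for symbolic expressions: plain callables of time.
sto_expr = lambda t: 0.2 + 0.6 * t  # hypothetical stoichiometry
d_expr = lambda t: 1e-14 * np.interp(sto_expr(t), [0.0, 1.0], [1.0, 2.0])

# Register the interpolant's input and output as "model variables" so they
# are evaluated and stored alongside the solve, mirroring
# model.variables["..."] = expr in PyBaMM.
model_variables = {
    "Stoichiometry (interpolant input)": sto_expr,
    "Diffusivity [m2.s-1] (interpolant output)": d_expr,
}

t_eval = np.linspace(0.0, 1.0, 5)
solution = {name: np.array([f(t) for t in t_eval])
            for name, f in model_variables.items()}
print(solution["Diffusivity [m2.s-1] (interpolant output)"][0])  # 1.2e-14
```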

@ahdezm

ahdezm commented Feb 28, 2025

Thank you again for your help, the values for linear interpolation seem to match.

When I use the cubic method instead, however, the resulting diffusivity doesn't match what I expect (it's smaller and negative). I might be doing something wrong, but please find my testing script attached below.
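One possible explanation (an assumption, not a diagnosis of the attached script): cubic interpolation is not positivity-preserving, so it can undershoot below the data range even when every data point is nonnegative, whereas linear interpolation always stays within the data. A self-contained NumPy illustration:

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 0.0, 0.0, 1.0])  # nonnegative "diffusivity" data

# The unique cubic through these four points (degree-3 polyfit is exact here):
# p(x) = x (x - 1) (x - 2) / 6, which dips below zero between the knots.
coeffs = np.polyfit(x, y, 3)
cubic_val = np.polyval(coeffs, 1.5)
linear_val = np.interp(1.5, x, y)  # linear interpolation stays in [0, 1]

print(cubic_val)   # -0.0625
print(linear_val)  # 0.0
```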

pybamm_interpolant_test.txt
