
Kalman / covariance propagation solver #12

Closed
BradyPlanden opened this issue Jul 18, 2023 · 4 comments

@BradyPlanden
Member

BradyPlanden commented Jul 18, 2023

This issue is to decide the implementation language for the solver required for #3.

Options discussed:
- Pure Python
- JAX
- PyBaMM's IDAKLU (C++)

Currently, JAX is the leading candidate, as it offers compiled performance as well as code portability across CPU, GPU, and TPU.

@davidhowey
Member

Just to drop in, as an aside, that I do quite like the idea of the "halfway house" observer we discussed a few days ago, i.e., fixed feedback gain parameter K that is learnt as part of the optimisation. Any observer is better than none, and this might mean we can use existing solvers.
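The "halfway house" observer can be sketched roughly as below. This is a minimal illustration on a toy scalar model, assuming a fixed feedback gain `K` applied to the output error; none of the names here come from pybop's actual API.

```python
import numpy as np

# Hedged sketch of the fixed-gain observer idea: propagate the model and
# correct each step with a fixed gain K applied to the innovation. Because
# K is just another scalar parameter, it can be learnt alongside the
# physical parameters by any existing optimiser, as suggested above.

def simulate_observer(K, y_data, x0=0.0, a=0.9):
    """Run x_{k+1} = a*x_k with a fixed-gain correction K*(y_k - x_k)."""
    x = x0
    residuals = []
    for y in y_data:
        residuals.append(y - x)      # innovation against the measurement
        x = a * x + K * (y - x)      # one predict-and-correct step
    return np.array(residuals)

def observer_cost(K, y_data):
    """Sum of squared innovations; minimising over K learns the gain."""
    return float(np.sum(simulate_observer(K, y_data) ** 2))
```

An optimiser would then treat `K` exactly like any other parameter, which is why no new solver machinery is needed.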

@martinjrobins
Contributor

Is this still in scope for pybop? I notice that the BaseModel class currently handles the timestepping, i.e. timestepping (the way time is discretised) is considered part of the model, and data only enters through the problem. But the output of a KF (for parameter estimation, anyway) is the likelihood, which is a cost function; the timestepping is also handled by the KF (or any observer), and the data enters through the KF. So it's unclear where a KF would fit into the current design.

It would be great if this would be included still!
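To make the design question concrete: the point that a KF's output is a likelihood can be sketched as below, using a scalar filter on a stand-in linear model. The function name and the toy dynamics are illustrative assumptions, not pybop or PyBaMM code.

```python
import numpy as np

# Hedged sketch: a scalar Kalman filter that owns the timestepping,
# consumes the data directly, and returns the log-likelihood of the
# data -- i.e. the filter as a whole behaves like a cost function.
# Toy model: x_{k+1} = a*x_k + w (var q), y_k = x_k + v (var r).

def kf_log_likelihood(y_data, a=0.9, q=0.01, r=0.1, x0=0.0, p0=1.0):
    x, p = x0, p0
    log_lik = 0.0
    for y in y_data:
        x, p = a * x, a * a * p + q          # predict state and covariance
        v, s = y - x, p + r                  # innovation and its variance
        log_lik += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        k = p / s                            # Kalman gain
        x, p = x + k * v, (1.0 - k) * p      # measurement update
    return log_lik
```

Note that the data loop lives inside the filter, which is exactly why it sits awkwardly alongside a design where the model owns the timestepping.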

@BradyPlanden
Member Author

BradyPlanden commented Nov 29, 2023

Hi Martin,

Yes, I think everyone is still keen for this to be included, but I agree that it's not clear how it fits into the current design. I believe the fixed-gain observer could be implemented by inserting a fixed-gain optimisation parameter into the RHS (although for which state variable is the question) without requiring a change to the design.

For a true KF implementation (and without thinking too much about it), we might be able to formulate the KF as a cost function with methods to change the time step by changing the problem variables/methods. Is this something that you are interested in looking at?
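One possible shape for that formulation is sketched below: the filter wrapped as a callable cost whose value is the negative log-likelihood, with the timestep held (and changeable) on the cost object rather than the model. Every name here is hypothetical; this is not pybop's actual interface.

```python
import numpy as np

# Hedged sketch only: a KF-as-cost-function on a toy scalar model.
# The cost object owns the timestep, so it can be changed via a method
# on the problem/cost side, as proposed above.

class KalmanCost:
    def __init__(self, y_data, r=0.1, x0=0.0, p0=1.0, dt=1.0):
        self.y_data, self.r, self.x0, self.p0 = y_data, r, x0, p0
        self.dt = dt                      # the observer controls timestepping

    def set_timestep(self, dt):
        """Change the step without touching the model."""
        self.dt = dt

    def __call__(self, params):
        decay, q = params                 # e.g. a decay rate and process noise
        a = np.exp(-decay * self.dt)      # discretise the toy dynamics at dt
        x, p, nll = self.x0, self.p0, 0.0
        for y in self.y_data:
            x, p = a * x, a * a * p + q   # predict
            v, s = y - x, p + self.r      # innovation and variance
            nll += 0.5 * (np.log(2 * np.pi * s) + v * v / s)
            k = p / s
            x, p = x + k * v, (1.0 - k) * p
        return nll
```

An optimiser could then minimise `KalmanCost(data)` over `params` exactly as it would any other cost function.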

@martinjrobins
Contributor

Sure, I can look at this. I'm not exactly sure how to implement a KF for a general PyBaMM model, but I'll look into it.

martinjrobins self-assigned this Nov 29, 2023
martinjrobins added a commit that referenced this issue Dec 18, 2023
martinjrobins added a commit that referenced this issue Dec 18, 2023
martinjrobins added a commit that referenced this issue Dec 19, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Dec 20, 2023
martinjrobins added a commit that referenced this issue Jan 8, 2024
martinjrobins added a commit that referenced this issue Jan 8, 2024