
Separate learning of inertia; c.o.m. and mass #17

Open
mshalm opened this issue Mar 29, 2023 · 2 comments

Comments

@mshalm
Collaborator

mshalm commented Mar 29, 2023

Discussion suggests that the log-Cholesky formulation of Rucker and Wensing can be adapted into a smooth parameterization with decoupled parameters for the inertia tensor, c.o.m., and mass, with similar guarantees of physical feasibility.
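For context, a minimal sketch of the base log-Cholesky idea being adapted here: an unconstrained vector theta in R^10 maps through an upper-triangular factor with exponentiated diagonal to a positive-definite pseudo-inertia, from which mass, c.o.m., and rotational inertia are recovered. Function names and the exact theta ordering are illustrative, not dair_pll's API.

```python
import numpy as np

def pseudo_inertia_from_theta(theta):
    """Map unconstrained theta in R^10 to a positive-definite 4x4
    pseudo-inertia J, following the log-Cholesky parameterization of
    Rucker and Wensing. Assumed ordering (illustrative):
    theta = (alpha, d1, d2, d3, s12, s13, s23, t1, t2, t3)."""
    alpha, d1, d2, d3, s12, s13, s23, t1, t2, t3 = theta
    # Upper-triangular factor with exponentiated diagonal, so U is
    # always nonsingular and J = U @ U.T is positive definite --
    # this is the source of the physical-feasibility guarantee.
    U = np.exp(alpha) * np.array([
        [np.exp(d1), s12, s13, t1],
        [0.0, np.exp(d2), s23, t2],
        [0.0, 0.0, np.exp(d3), t3],
        [0.0, 0.0, 0.0, 1.0],
    ])
    return U @ U.T

def decompose(J):
    """Extract (mass, c.o.m., rotational inertia about the body
    origin) from a pseudo-inertia J = [[Sigma, m*c], [m*c^T, m]]."""
    mass = J[3, 3]
    com = J[:3, 3] / mass
    Sigma = J[:3, :3]  # second-moment (covariance-like) matrix
    inertia = np.trace(Sigma) * np.eye(3) - Sigma
    return mass, com, inertia

# theta = 0 gives U = I, hence unit mass, zero c.o.m., inertia = 2*I.
mass, com, inertia = decompose(pseudo_inertia_from_theta(np.zeros(10)))
```

Note that in this base form, mass, c.o.m., and inertia all depend jointly on theta; the decoupling proposed in this issue would split theta so each component can be fixed or learned independently.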

This should be implemented as the ability to learn any subset of these components for each body while fixing the others.

@ebianchi
Collaborator

I have a version of learning different subsets of inertial parameters in my dair_pll fork. It's a bit hacky because it doesn't actually replace the 10 theta parameters per rigid body with fewer; instead it converts the 10 theta parameters to pi format, overwrites any pi-format values we don't want to learn (thus zeroing the loss gradient with respect to them), then converts back to theta format. This does not decouple the inertia, CoM, and mass parameters, but it does make it possible to fix some parameters while learning others, and it might be a useful comparison for a different approach in PLL's future.

This is implemented in my fork's LagrangianTerms class, where calls to the attribute LagrangianTerms.inertial_parameters are replaced with calls to a new method LagrangianTerms.inertial_params() (implemented here), which does the following:

  1. Start with LagrangianTerms.inertial_parameters.
  2. Convert to pi format.
  3. Overwrite any of the pi-format parameters we don't want to learn (e.g. CoM, and/or inertial parameters, and/or masses).
  4. Convert back to theta format, and return the result.
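The four steps above can be sketched as follows. The theta/pi conversions below are identity placeholders standing in for dair_pll's real format conversions, and the mask layout (1 mass entry, 3 mass-weighted-c.o.m. entries, 6 inertia entries per body) is an assumption for illustration.

```python
import numpy as np

def theta_to_pi(theta):
    return theta.copy()   # placeholder: real code converts theta -> pi

def pi_to_theta(pi):
    return pi.copy()      # placeholder: real code converts pi -> theta

def inertial_params(theta, pi_reference, learn_mask):
    """Return theta-format parameters in which entries with
    learn_mask == False are pinned to fixed reference values, so the
    loss gradient through those entries is zero."""
    pi = theta_to_pi(theta)                        # step 2
    pi = np.where(learn_mask, pi, pi_reference)    # step 3: overwrite
    return pi_to_theta(pi)                         # step 4

theta = np.arange(10.0)   # step 1: current 10 parameters of one body
pi_ref = np.zeros(10)     # fixed reference values for frozen entries
# e.g. learn mass + mass-weighted c.o.m. (first 4), freeze inertia (last 6):
mask = np.array([True] * 4 + [False] * 6)
out = inertial_params(theta, pi_ref, mask)
```

In an autograd setting, overwriting with constants (rather than with detached copies of the prediction) is what zeroes the gradient contribution of the frozen entries.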

@mshalm
Collaborator Author

mshalm commented Apr 8, 2023

That sounds like the right architecture. I think an implementation that combines the two goals here would use a few independent Parameter/Tensor member 3-tuples on LagrangianTerms / ContactTerms, e.g.

learnable_inertia_parameters / static_inertia_parameters of shape (n_learnable_I, 6) / (n_bodies - n_learnable_I, 6), with indexing managed by a third learnable_inertia_body_indices of shape (n_learnable_I,)

and then LagrangianTerms.inertial_params() and a similar modification to ContactTerms.get_friction_coefficients() do some index management analogous to what you describe above.
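A minimal numpy sketch of the proposed 3-tuple layout and its index management (names mirror the proposal; the scatter logic and the choice of 6 components per row are illustrative assumptions, not existing dair_pll code):

```python
import numpy as np

n_bodies = 4
# Proposed 3-tuple: learnable rows, static rows, and an index tensor
# mapping each learnable row to its body.
learnable_inertia_body_indices = np.array([1, 3])        # (n_learnable_I,)
learnable_inertia_parameters = np.ones((2, 6))           # (n_learnable_I, 6)
static_inertia_parameters = np.zeros((n_bodies - 2, 6))  # remaining bodies

def assemble_inertia_parameters():
    """Scatter learnable and static rows back into a full
    (n_bodies, 6) array, analogous to the index management
    inertial_params() would perform."""
    full = np.empty((n_bodies, 6))
    static_indices = np.setdiff1d(np.arange(n_bodies),
                                  learnable_inertia_body_indices)
    full[learnable_inertia_body_indices] = learnable_inertia_parameters
    full[static_indices] = static_inertia_parameters
    return full

params = assemble_inertia_parameters()
```

With this split, only learnable_inertia_parameters would be registered as a torch Parameter, so the optimizer never sees the static rows and no gradient masking is needed.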
