
Expose alpha_y, beta_y parameters #59

Merged 14 commits into dfki-ric:main on Sep 12, 2024

Conversation

@cmower (Contributor) commented Sep 10, 2024:

Exposes alpha_y and beta_y to all DMP constructors and enables vectorization while maintaining backward compatibility with scalar values.

Addresses #57.

Are there unit tests?

Unit tests are updated to handle new features. Pytest going green.

Does it have docstrings?

Docstrings have been updated where appropriate.

Is it included in the API documentation?

NA

Run flake8 and pylint

Done

Should it be part of the readme?

No

Should it be included in any example script?

Not necessarily.

@@ -75,7 +75,7 @@ def test_compare_python_cython():
         current_y=np.array([1.0, 0.0, 0.0, 0.0]), current_yd=np.zeros([3]),
         goal_y=np.array([1.0, 0.0, 0.0, 0.0]), goal_yd=np.zeros([3]), goal_ydd=np.zeros([3]),
         start_y=np.array([1.0, 0.0, 0.0, 0.0]), start_yd=np.zeros([3]), start_ydd=np.zeros([3]),
-        goal_t=2.0, start_t=0.0, alpha_y=25.0, beta_y=6.25,
+        goal_t=2.0, start_t=0.0, alpha_y=25.0*np.ones(3), beta_y=6.25*np.ones(3),
Member:
PEP8 suggests adding spaces around operators.

Contributor Author:
Addressed in 0584376

@@ -380,6 +380,12 @@ class DMP(WeightParametersMixin, DMPBase):
     is changed and the trajectory is scaled by interpolating between
     the old and new scaling of the trajectory.

+    alpha_y : float, list with length n_dims, or array with shape (n_dims,), optional (default: 25.0)
Member:
I would just write array-like, shape (n_dims,)

Contributor Author:
Addressed in 7e5d15e

@@ -416,8 +422,31 @@ def __init__(self, n_dims, execution_time=1.0, dt=0.01,

         self._init_forcing_term()

-        self.alpha_y = 25.0
-        self.beta_y = self.alpha_y / 4.0
+        if isinstance(alpha_y, float):
Member:
  • This whole thing looks a bit fragile. What if the input is a jax.numpy.ndarray or a tuple? What if the input is an integer (e.g., 25)? I would like to avoid manual isinstance checks and type calls.
    • isinstance(..., float) should be replaced by checking for a scalar numerical value and cast it to a 1d numpy array of dtype float
    • isinstance(..., (np.ndarray, list)) should be replaced by checking for an iterable or array-like interface and cast it to a 1d numpy array of dtype float
  • I would also suggest extracting this as a more general function that is applied to both alpha_y and beta_y.

sklearn has a similar function: sklearn.utils.check_array. Maybe we can use this for inspiration (although it is certainly way too complex for our use case).
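[Editor's note] A helper along these lines could look like the sketch below. This is a hypothetical reconstruction, not the exact code from commit 9103981; the name and signature follow the final form mentioned later in this thread (ensure_1d_array with n_dims and var_name). np.asarray avoids manual isinstance checks: it accepts scalars, ints, lists, tuples, and foreign array types alike.

```python
import numpy as np


def ensure_1d_array(value, n_dims, var_name):
    """Coerce a scalar or array-like value to a 1d float array of length n_dims.

    Sketch only; the implementation merged in movement_primitives/utils.py
    may differ in details (e.g., error message wording).
    """
    # np.asarray handles scalars, ints, lists, tuples, and array-like
    # objects (e.g., jax.numpy.ndarray) without explicit type checks.
    value = np.asarray(value, dtype=float)
    if value.ndim == 0:
        # Scalar input: broadcast to one value per dimension
        # (backward compatible with alpha_y=25.0).
        value = value * np.ones(n_dims)
    if value.shape != (n_dims,):
        raise ValueError(
            f"{var_name} has shape {value.shape}, expected ({n_dims},)")
    return value
```

For example, ensure_1d_array(25.0, 3, "alpha_y") and ensure_1d_array([25.0, 25.0, 25.0], 3, "alpha_y") both yield the same 1d float array, while a length mismatch raises a ValueError naming the offending parameter.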

Contributor Author:
Please see 9103981 and let me know if this works for you.

Member:
I'd suggest extracting it as a function and giving it a more generic name; it's not specific to DMPs.

@cmower (Contributor Author) commented Sep 11, 2024:
In 936a9d4, I changed the name to ensure_1d_array and moved it to movement_primitives/utils.py.

@@ -35,6 +35,12 @@ class DMPWithFinalVelocity(WeightParametersMixin, DMPBase):
     Gain for proportional controller of DMP tracking error.
     The domain is [0, execution_time**2/dt].

+    alpha_y : float, list with length n_dims, or array with shape (n_dims,), optional (default: 25.0)
Member:
same applies here

Contributor Author:
Addressed in 7e5d15e

@@ -63,8 +70,23 @@ def __init__(self, n_dims, execution_time=1.0, dt=0.01,

         self._init_forcing_term()

-        self.alpha_y = 25.0
-        self.beta_y = self.alpha_y / 4.0
+        if isinstance(alpha_y, float):
Member:
this is duplicate code

Contributor Author:
Please see 9103981 and let me know if this works for you.

@@ -239,8 +245,29 @@ def __init__(self, execution_time=1.0, dt=0.01, n_weights_per_dim=10,

         self._init_forcing_term()

-        self.alpha_y = 25.0
-        self.beta_y = self.alpha_y / 4.0
+        if isinstance(alpha_y, float):
Member:
duplicate code

Contributor Author:

Please see 9103981 and let me know if this works for you.

@@ -34,8 +34,26 @@ def __init__(self, n_dims, execution_time, dt=0.01, n_viapoints=10,

         alpha_z = canonical_system_alpha(0.01, self.execution_time, 0.0)

-        self.alpha_y = 25.0
-        self.beta_y = self.alpha_y / 4.0
+        if isinstance(alpha_y, float):
Member:
duplicate code

Contributor Author:
Please see 9103981 and let me know if this works for you.

@AlexanderFabisch (Member) commented:

Thanks a lot for the contribution. I commented on some parts.

I think an example script would actually be interesting! What is the use case that you are aiming for? Is there a simple way to demonstrate that different values for alpha and beta make sense?

Btw are you aiming for combinations of alpha and beta that are not critically damped? If not, then we can also set beta based on the value of alpha or vice versa.
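[Editor's note] For context, the critical-damping condition discussed here comes from the standard DMP transformation system of Ijspeert et al.; with time constant 1 and no forcing term it reads ydd = alpha_y * (beta_y * (g - y) - yd). Its characteristic polynomial is s**2 + alpha_y * s + alpha_y * beta_y, which has a repeated real root exactly when the discriminant alpha_y**2 - 4 * alpha_y * beta_y vanishes, i.e., beta_y = alpha_y / 4. That is why the defaults in this PR pair alpha_y = 25.0 with beta_y = 6.25. A minimal check:

```python
def damping_discriminant(alpha_y, beta_y):
    """Discriminant of s**2 + alpha_y * s + alpha_y * beta_y, the
    characteristic polynomial of the unforced transformation system
    ydd = alpha_y * (beta_y * (g - y) - yd).

    Zero: critically damped (fastest non-oscillatory convergence).
    Positive: overdamped. Negative: underdamped, i.e., the trajectory
    oscillates around the goal before settling.
    """
    return alpha_y ** 2 - 4.0 * alpha_y * beta_y


# The defaults used throughout this PR are critically damped:
assert damping_discriminant(25.0, 25.0 / 4.0) == 0.0
# Increasing beta_y beyond alpha_y / 4 makes the system underdamped:
assert damping_discriminant(25.0, 12.0) < 0.0
```

Exposing alpha_y and beta_y independently, as this PR does, lets users pick non-critically-damped combinations on purpose.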

@cmower (Contributor Author) commented Sep 11, 2024:

> Thanks a lot for the contribution. I commented on some parts.

Hi @AlexanderFabisch, no problem, thanks for the comments. I've addressed all of the above; please let me know if you would like any further changes.

> I think an example script would actually be interesting! What is the use case that you are aiming for? Is there a simple way to demonstrate that different values for alpha and beta make sense?

Sure, I will make a small example. Currently, I am just learning some simple demonstrations via kinaesthetic teaching on a fixed-base robot arm; later perhaps something more interesting.

> Btw, are you aiming for combinations of alpha and beta that are not critically damped? If not, then we can also set beta based on the value of alpha or vice versa.

If I understand correctly, I think this would add more complexity to the code/documentation than needed; in my opinion, it is best to leave this choice to the user. Please let me know your thoughts.

@cmower (Contributor Author) commented Sep 11, 2024:

Hi @AlexanderFabisch, I've added an example. Please let me know if you would like any other changes; otherwise, I think this is good to merge.


Demonstrates how modifying DMP gains (alpha_y, beta_y) affects
the resulting trajectory reproduction.

Member:
empty lines should be removed

Contributor Author:
Addressed in f490348

import matplotlib.pyplot as plt
import numpy as np
from movement_primitives.dmp import DMP

Member:
PEP8: add one more blank line

Contributor Author:
Addressed in 26ca0ba
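[Editor's note] The example script under review is not reproduced in full in this thread. Its effect can be sketched without the library by integrating the unforced transformation system directly; the sketch below is a standalone stand-in (the actual example uses movement_primitives.dmp.DMP, whose full API is not shown here).

```python
import numpy as np


def integrate_transformation_system(alpha_y, beta_y, goal=1.0, y0=0.0,
                                    execution_time=1.0, dt=0.01):
    """Semi-implicit Euler integration of the unforced transformation
    system ydd = alpha_y * (beta_y * (goal - y) - yd).

    Returns the position trajectory so that different gain settings can
    be compared, e.g., plotted with matplotlib as in the reviewed
    example script.
    """
    y, yd = float(y0), 0.0
    Y = [y]
    for _ in range(int(round(execution_time / dt))):
        ydd = alpha_y * (beta_y * (goal - y) - yd)
        yd += dt * ydd
        y += dt * yd
        Y.append(y)
    return np.array(Y)


# Critically damped gains (beta_y = alpha_y / 4) approach the goal
# without oscillation; a larger beta_y overshoots before settling.
critical = integrate_transformation_system(25.0, 6.25)
underdamped = integrate_transformation_system(25.0, 25.0)
```

Plotting the two trajectories against time makes the overshoot of the underdamped setting visible, which is the kind of comparison the example is meant to demonstrate.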

@AlexanderFabisch (Member) commented Sep 11, 2024:

> If I understand correctly, I think this would add more complexity to the code/documentation than needed; in my opinion, it is best to leave this choice to the user. Please let me know your thoughts.

I'm fine with this.

Apart from my last three comments, it looks good to me.

import numpy as np


def ensure_1d_array(value, dim, label):
Member:

  • For consistency, could you rename dim to n_dims?
  • ... and label reminds me of classification, maybe something like var_name?

Contributor Author:

done

@AlexanderFabisch AlexanderFabisch merged commit cd2435f into dfki-ric:main Sep 12, 2024
@AlexanderFabisch (Member) commented:

I modified ensure_1d_array (unit tests, documentation, bugfix) in d187c15 and merged this PR.

Thanks for the contribution!

@AlexanderFabisch AlexanderFabisch mentioned this pull request Sep 12, 2024
@cmower cmower deleted the feature/expose-gains branch September 19, 2024 12:58