Conversation
Regarding the two types of parametrization (constants & nets):
- I don't understand why we need to differentiate between the two types, given that UnconstrainedMassValue behaves similarly to a torch.nn.Parameter. What would happen if we changed all of the learnable params to modules, so that we could omit the "module" field in learnable_params[<param>]? This way users also gain the freedom to parametrize masses as a single param, or to inject structure/constraints into translation & rotation.
- If we want to keep the two types, the "module" key still seems extraneous, since its content can already be determined from the parameter key ("mass", "trans", ...).
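To illustrate the point about the two types behaving the same: a minimal sketch of a module-based mass parametrization (the class below is a hypothetical stand-in for UnconstrainedMassValue, not its actual implementation) exposes its learnable tensor through .parameters() exactly like a bare torch.nn.Parameter would, so one code path could handle both:

```python
import torch

# Hypothetical sketch of a mass parametrization written as a module.
# It wraps a single learnable scalar, so downstream code that iterates
# over .parameters() treats it identically to a bare torch.nn.Parameter.
class UnconstrainedMassValueSketch(torch.nn.Module):
    def __init__(self, init_mass=1.0):
        super().__init__()
        # a plain learnable scalar, like using torch.nn.Parameter directly
        self.param = torch.nn.Parameter(torch.tensor(init_mass))

    def forward(self):
        # structure/constraints (e.g. a positivity transform) could go here
        return self.param


mass = UnconstrainedMassValueSketch(2.0)
assert len(list(mass.parameters())) == 1
assert mass().item() == 2.0
```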
One big concern I have is whether we will want to support different parameterizations for each link. For example, users might want to learn only the mass for one link but the trans for another link, or users might want to parametrize masses of different links with different models. If we want to support this in the foreseeable future, then I think it's best to either support it now or build the API in a way that's extendable in that direction, else that's another major refactor down the road.
One idea I have is to have an interface for setting parameterizations of the robot model dynamically, like: robot_model.set_parameterization(link_name, property, parametrization, init_value). For modules, this would look like robot_model.set_parameterization("iiwa_link_1", "mass", UnconstrainedMassValue, None), and for params we could have robot_model.set_parameterization("iiwa_link_2", "trans", None, 0.1). We could even create macros that wrap around this API, such as robot_model.set_parameterization_for_all_links("mass", UnconstrainedMassValue) or robot_model.set_parameterization_from_dict(cfg).
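As a sketch of how that registry could be stored, assuming a simple per-(link, property) dict (the RobotModelSketch class and its internals are assumptions for illustration; only the method names come from the suggestion above):

```python
# Hedged sketch of the proposed per-link parametrization interface.
# RobotModelSketch and its storage layout are hypothetical; only the
# method names follow the API suggested in the review comment.
class RobotModelSketch:
    def __init__(self, link_names):
        self.link_names = list(link_names)
        # (link_name, property) -> (parametrization class or None, init value)
        self._parametrizations = {}

    def set_parameterization(self, link_name, prop, parametrization, init_value):
        self._parametrizations[(link_name, prop)] = (parametrization, init_value)

    def set_parameterization_for_all_links(self, prop, parametrization):
        # macro wrapping the per-link setter
        for name in self.link_names:
            self.set_parameterization(name, prop, parametrization, None)


robot = RobotModelSketch(["iiwa_link_1", "iiwa_link_2"])
robot.set_parameterization("iiwa_link_2", "trans", None, 0.1)  # constant param
robot.set_parameterization_for_all_links("mass", "UnconstrainedMassValue")
assert robot._parametrizations[("iiwa_link_2", "trans")] == (None, 0.1)
assert ("iiwa_link_1", "mass") in robot._parametrizations
```

This layout would also let different links use different parametrizations for the same property, which addresses the per-link concern raised above.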
@@ -10,6 +10,9 @@
from torch.utils.data.dataset import Dataset

torch.set_default_tensor_type(torch.FloatTensor)
Are we allowing users to set the default tensor type for GPU-based experiments? If so, we shouldn't be setting the default tensor type here.
Oh, I think this was a leftover line. The code wasn't working on Mac anymore after the GPU PR, but then your new test PR fixed things (which I merged into my branch), and I forgot to remove my "fixes".
@@ -38,38 +38,40 @@ def __init__(
is_using_positive_initial_guest=False,
init_param=None,
is_initializing_params=True,
device='cpu'
For torch.nn.Modules, it would be cleaner to call to() outside the module after initialization to set the device for all parameters within the module and its submodules, instead of having to specify the device at every layer.
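A minimal illustration of that suggestion (the model here is a generic placeholder, not the project's actual module): build the module device-agnostic, then move everything with a single .to() call rather than threading a device argument through each constructor.

```python
import torch

# Build the module without any device argument...
model = torch.nn.Sequential(torch.nn.Linear(3, 4), torch.nn.ReLU())

# ...then move all parameters and buffers, including those of submodules,
# with one call after initialization.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

assert all(p.device.type == device.type for p in model.parameters())
```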
I had tried that initially but somehow it kept failing; I'll take a look again.
I like the API change idea! Should I do it myself, or do you want to do it @exhaustin?
Glad you liked it!!
Oh, what I meant to say is that after merging with master (which had your latest PR), the device issue was fixed - I just have to remove that line again :) so nothing to do here. I'll make a first stab at the API change, then we can look at it together on Monday :)
I was mainly talking about the device issue in torch.nn.Modules, but okay :)
Ah ok, yes, somehow when I tried .to(device) now, it just worked.
I made small adjustments:
Considerations:
thanks!
hmmm thinking about this - I like the readability of this current API
Hmmm, interesting thought. How would you make sure that people can switch the type of parametrization? E.g. switch between different network implementations for each parameter, including their own? Also, one more thing: can we add tests for making sure the different parametrizations work?
…xample such that the functionality gets tested
I modified your unlearnable functionality into freeze_link_param, and also added unfreeze_link_param -> I think that is all we need. Also, on Mac, type(param_module) did not equal torch.nn.Module, so I modified that a bit -> can we add CircleCI testing for Mac? Let me know what you think.
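For context, a hedged sketch of what freeze_link_param / unfreeze_link_param could do for a module-based parametrization: toggle requires_grad on its parameters. The function names come from the comment above; the bodies below are an assumption, and the Linear module is just a placeholder parametrization.

```python
import torch

# Hypothetical implementations: freezing a link's parametrization means
# excluding its parameters from gradient computation, and unfreezing
# re-enables them.
def freeze_link_param(module):
    for p in module.parameters():
        p.requires_grad_(False)

def unfreeze_link_param(module):
    for p in module.parameters():
        p.requires_grad_(True)


mass_module = torch.nn.Linear(1, 1)  # placeholder parametrization module
freeze_link_param(mass_module)
assert not any(p.requires_grad for p in mass_module.parameters())
unfreeze_link_param(mass_module)
assert all(p.requires_grad for p in mass_module.parameters())
```

Note that checking isinstance(param_module, torch.nn.Module) is more robust than comparing type(param_module) directly, which may explain the Mac discrepancy mentioned above.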
I like the freeze/unfreeze API! Regarding tests for different parametrizations, I think we just need to check that they produce outputs in the correct format and are trainable. I think I can try to add tests for Mac this week - just need to figure out how to locate an image and also disable the GPU tests.
draft refactor - please comment :)