
Fm refactor #18

Merged (15 commits) on May 31, 2021

Conversation

@fmeier (Contributor) commented May 5, 2021

draft refactor - please comment :)

@fmeier requested a review from @exhaustin on May 5, 2021 04:16
@facebook-github-bot added the CLA Signed label (managed by the Facebook bot; authors need to sign the CLA before a PR can be reviewed) on May 5, 2021
@exhaustin (Contributor) left a comment


Regarding the two types of parametrization (constants & nets):

  • I don't understand why we need to differentiate between the two types, given that UnconstrainedMassValue behaves similarly to a torch.nn.Parameter. What would happen if we changed all of the learnable params to modules, so that we could omit the "module" field in learnable_params[<param>]? This way users also gain the freedom to parametrize masses as a single param, or to inject structure/constraints into translation & rotation.
  • If we want to keep the two types, the "module" key still seems extraneous since the content can already be determined from the parameter key ("mass", "trans"...).

One big concern I have is whether we will want to support different parametrizations for each link. For example, users might want to learn only the mass for one link but the trans for another, or they might want to parametrize the masses of different links with different models. If we want to support this in the foreseeable future, then I think it's best to either support it now or build the API in a way that's extendable in that direction; otherwise that's another major refactor down the road.

One idea I have is to have an interface for setting parametrizations of the robot model dynamically, like so: robot_model.set_parameterization(link_name, property, parametrization, init_value). For modules, this will look like robot_model.set_parameterization("iiwa_link_1", "mass", UnconstrainedMassValue, None), and for params we can have robot_model.set_parameterization("iiwa_link_2", "trans", None, 0.1). We can even create macros that wrap around this API, such as robot_model.set_parameterization_for_all_links("mass", UnconstrainedMassValue) or robot_model.set_parameterization_from_dict(cfg).
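To make the proposal concrete, here is a minimal, hypothetical sketch of what this interface could look like. RobotModelSketch and its internals are illustrative stand-ins, not the PR's actual code; only set_parameterization and UnconstrainedMassValue are names from the discussion above.

```python
import torch


class UnconstrainedMassValue(torch.nn.Module):
    """Module-based parametrization of a scalar mass (sketch)."""

    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(1))

    def forward(self):
        return self.param


class RobotModelSketch(torch.nn.Module):
    """Toy stand-in for the robot model, holding per-link parametrizations."""

    def __init__(self):
        super().__init__()
        self.link_params = torch.nn.ModuleDict()

    def set_parameterization(self, link_name, prop, parametrization, init_value):
        key = f"{link_name}__{prop}"  # ModuleDict keys must not contain "."
        if parametrization is not None:
            # Module-based parametrization, e.g. UnconstrainedMassValue.
            self.link_params[key] = parametrization()
        else:
            # Plain learnable parameter initialized from init_value.
            holder = torch.nn.Module()
            holder.param = torch.nn.Parameter(torch.tensor([float(init_value)]))
            self.link_params[key] = holder


robot_model = RobotModelSketch()
robot_model.set_parameterization("iiwa_link_1", "mass", UnconstrainedMassValue, None)
robot_model.set_parameterization("iiwa_link_2", "trans", None, 0.1)
```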

```
@@ -10,6 +10,9 @@
from torch.utils.data.dataset import Dataset


torch.set_default_tensor_type(torch.FloatTensor)
```
exhaustin (Contributor) commented:

Are we allowing users to set default tensor type for GPU-based experiments? If so we shouldn't be setting the default tensor type here.
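For context, a quick illustration (not from the PR) of the conflict: a user running a GPU experiment may set their own default tensor type, which a library-level call would silently override.

```python
import torch

# A user script for a GPU experiment might start with:
#   torch.set_default_tensor_type(torch.cuda.FloatTensor)
# If library code then unconditionally executes:
torch.set_default_tensor_type(torch.FloatTensor)
# ...every tensor created afterwards defaults back to CPU float32,
# silently overriding the user's choice:
x = torch.zeros(3)
print(x.device)  # cpu, even in the GPU experiment
```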

fmeier (Contributor Author) replied:

Oh, I think this was a leftover line - the code wasn't working on Mac anymore after the GPU PR, but then your new test PR (which I merged into my branch) fixed things, and I forgot to remove my "fixes".

```
@@ -38,38 +38,40 @@ def __init__(
    is_using_positive_initial_guest=False,
    init_param=None,
    is_initializing_params=True,
    device='cpu'
```
exhaustin (Contributor) commented:

For torch.nn.Modules, it would be cleaner to call to() outside the module after initialization to set the device for all parameters within the module and its submodules, instead of having to specify the device at every layer.
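As a sketch of the pattern being suggested (illustrative, not the PR's code): construct the module without any device arguments, then move everything at once with .to():

```python
import torch


class MassNet(torch.nn.Module):
    """Example parametrization net; no device argument in __init__."""

    def __init__(self):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(7, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 1),
        )

    def forward(self, q):
        return self.layers(q)


device = "cuda" if torch.cuda.is_available() else "cpu"
net = MassNet().to(device)  # moves all parameters and submodules at once
out = net(torch.rand(1, 7, device=device))
```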

fmeier (Contributor Author) replied:

I had tried that initially, but somehow it kept failing. I'll take a look again.

@fmeier (Contributor Author) commented May 7, 2021

I like the API change idea! Should I do it myself, or do you want to do it, @exhaustin?

@exhaustin (Contributor) commented:

> I like the API change idea! Should I do it myself, or do you want to do it, @exhaustin?

Glad you liked it!!
I can go through and fix the device issues for you since I just went through a related PR and thus the details are still in the L1 cache of my brain.
I am currently limited in terms of bandwidth, so I might not have time to do the API refactor before the end of next week. Maybe we can talk about it on Monday?

@fmeier (Contributor Author) commented May 7, 2021

Oh, what I meant to say is that after merging with master (which had your latest PR), the device issue was fixed - I just have to remove that line again :) So nothing to do here.

I'll make a first stab at the API change, then we can look at it together on Monday :)

@exhaustin (Contributor) commented:

> Oh, what I meant to say is that after merging with master (which had your latest PR), the device issue was fixed - I just have to remove that line again :) So nothing to do here.
>
> I'll make a first stab at the API change, then we can look at it together on Monday :)

I was mainly talking about the device issue in torch.nn.Modules, but okay :)

@fmeier (Contributor Author) commented May 8, 2021

Ah ok - yes, somehow when I tried .to(device) now, it just worked.
I pushed an initial version of the suggested API (only the simple version) - take a look, and let me know what you think. Have a good weekend :)

@exhaustin (Contributor) commented:

I made small adjustments:

  • Add make_link_param_unlearnable to freeze the learned results and set them as torch.nn.Parameters
  • Add assertions to check parameter names
  • Deduplicate the if statement in make_link_param_learnable

Considerations:

  • Do we want to combine make_link_param_learnable and make_link_param_unlearnable into something like set_link_param?
  • Do we want an easier way of setting the learnability of modules? Something as simple as torch.no_grad, but for individual links, so people can easily switch between training different parts of the model? (See the sketch after this list.)
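A hypothetical sketch of what such a per-link switch could look like, in the spirit of torch.no_grad. Here link_param_modules is an assumed attribute name, not the PR's actual API.

```python
import torch
from contextlib import contextmanager


@contextmanager
def link_params_frozen(robot_model, link_name):
    """Temporarily disable gradients for one link's parametrization."""
    module = robot_model.link_param_modules[link_name]
    flags = [p.requires_grad for p in module.parameters()]
    try:
        for p in module.parameters():
            p.requires_grad_(False)  # freeze only this link
        yield
    finally:
        for p, flag in zip(module.parameters(), flags):
            p.requires_grad_(flag)  # restore the previous state


# Usage: train everything except iiwa_link_1's parametrization.
# with link_params_frozen(robot_model, "iiwa_link_1"):
#     loss.backward()
```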

@fmeier (Contributor Author) commented May 11, 2021

Thanks!

> • Do we want to combine make_link_param_learnable and make_link_param_unlearnable into something like set_link_param?

Hmmm, thinking about this - I like the readability of the current API.

> • Do we want an easier way of setting the learnability of modules? Something as simple as torch.no_grad, but for individual links, so people can easily switch between training different parts of the model?

Hmmm, interesting thought. How would you make sure that people can switch the type of parametrization, e.g. between different network implementations for each parameter, including their own?

Also, one more thing: can we add tests to make sure the different parametrizations work?

…xample such that the functionality gets tested
@fmeier (Contributor Author) commented May 12, 2021

I modified your unlearnable functionality into freeze_link_param and also added unfreeze_link_param -> I think that is all we need.

Also, on Mac type(param_module) did not equal torch.nn.Module, so I modified that a bit -> can we add CircleCI testing for Mac?

Let me know what you think.
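For reference (not the PR's code), the usual portable check here is isinstance, which also matches subclasses, whereas an exact type comparison fails for any subclass of torch.nn.Module:

```python
import torch

net = torch.nn.Linear(2, 2)

print(type(net) == torch.nn.Module)      # False: exact-type check misses subclasses
print(isinstance(net, torch.nn.Module))  # True: matches any Module subclass
```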

@exhaustin (Contributor) commented:

I like the freeze/unfreeze API!

Regarding tests for different parametrizations, I think we just need to check that they produce outputs in the correct format and are trainable when forward is called, right? If that's ok, I can try adding some unit tests.

I think I can try to add tests for Mac this week - I just need to figure out how to locate an image and disable the GPU tests.
In the meantime, this PR looks good to merge!
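A possible shape for such a unit test (hypothetical sketch, reusing the UnconstrainedMassValue stand-in from earlier in this thread):

```python
import torch


class UnconstrainedMassValue(torch.nn.Module):
    """Stand-in parametrization whose forward() returns a 1-element tensor."""

    def __init__(self):
        super().__init__()
        self.param = torch.nn.Parameter(torch.rand(1))

    def forward(self):
        return self.param


def test_parametrization_output_and_trainability():
    module = UnconstrainedMassValue()
    out = module()

    # Output has the expected format.
    assert isinstance(out, torch.Tensor)
    assert out.shape == (1,)

    # Gradients flow, i.e. the parametrization is trainable.
    out.sum().backward()
    assert all(p.grad is not None for p in module.parameters())
```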
