
Human joint constraints #919

Closed · wants to merge 11 commits

Conversation

@jyf588 commented Oct 3, 2017

Hi all,

This PR uses Neural Nets to simulate realistic human range of motion:
https://youtu.be/wzkoE7wCbu0
http://arxiv.org/abs/1709.08685

Basically, there is a learned neural net function C(q); C(q) = 0.5 represents the boundary of the valid human range of motion.
At every timestep, a forward pass and back-propagation of the NN are executed to evaluate C(q) and dC/dq for the current q. If q is invalid, the LCP solver generates a constraint force in the dC/dq direction to push q back into the valid region.

In practice, there is one NN for the arms (C_arm(q)) and one for the legs (C_leg(q)), to make training and evaluation easier.
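
For concreteness, here is a minimal sketch of that per-timestep check, with hypothetical evaluate()/gradient() hooks standing in for the actual tiny-dnn calls (all names here are illustrative, not this PR's API):

```cpp
// Minimal sketch of the per-timestep constraint check described above.
// evaluate() and gradient() stand in for the tiny-dnn forward pass and
// back-propagation; names are illustrative only.
#include <functional>
#include <Eigen/Dense>

struct LearnedLimitCheck
{
  static constexpr double kBoundary = 0.5; // C(q) = 0.5 is the ROM boundary

  // Stand-ins for the neural-net forward and backward passes.
  std::function<double(const Eigen::VectorXd&)> evaluate;          // C(q)
  std::function<Eigen::VectorXd(const Eigen::VectorXd&)> gradient; // dC/dq

  // Returns true and fills the constraint direction/depth when q has left
  // the valid range of motion; the LCP solver then generates a force along
  // dCdq that pushes q back toward the C(q) = 0.5 boundary.
  bool isViolated(const Eigen::VectorXd& q,
                  Eigen::VectorXd* dCdq, double* depth) const
  {
    const double C = evaluate(q); // forward pass
    if (C >= kBoundary)
      return false;               // q is inside the valid region
    *dCdq = gradient(q);          // back-propagation
    *depth = kBoundary - C;       // violation depth handed to the LCP
    return true;
  }
};
```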

Tested on macOS 10.12; not sure about Linux.

We need to figure out how to include the external library tiny-dnn: #918

  1. I still have the tiny-dnn headers inside the DART headers, so currently I cannot build my example outside the DART source tree.
  2. I am not sure how to make C++14 optional.

Thanks in advance for helping with this.

Yifeng

@jslee02 (Member) commented Oct 3, 2017

Quick suggestion: I think the cereal package can be removed from this PR since it is available on macOS and Ubuntu Xenial. As this PR requires C++14, we should target it for Ubuntu Xenial. I believe that would make this PR a bit easier to review.

@jyf588 (Author) commented Oct 3, 2017

Hi JS, I have tried your suggestion and everything builds on my Mac, so I updated my PR. I am confused about why the change set won't build on CI. Do you have any ideas?

@jyf588 (Author) commented Oct 3, 2017

Specifically, I am looking at the macOS CI, where I got the following errors:

/Users/travis/build/dartsim/dart/dart/constraint/HumanArmJointLimitConstraint.cpp:76:24: error: no matching member function for call to 'getDof'

/Users/travis/build/dartsim/dart/tiny_dnn/util/util.h:397:32: error: no matching function for call to 'zeros'

@mxgrey (Member) commented Oct 3, 2017

I haven't had time to do a thorough review yet, but I have some high-level thoughts from a quick glance.

How necessary is it to make two different constraint classes: one for the arm and one for the leg? Is it possible to create a limb constraint class which is more generic, and agnostic to whether it is being applied to an arm or a leg? DART tries to be generic whenever possible, since there are many types of robots or characters that we would like to be able to simulate. Similarly, do the constraints have to specifically be human?

One way to shift towards generality is to have a std::vector<Joint*> instead of individual Joint* members like mHipJoint, mKneeJoint, and mAnkleJoint. Similarly, a std::vector<BodyNode*> could be used for the BodyNode* members.
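
A minimal sketch of that layout (hypothetical class name; the DART types are only forward-declared here):

```cpp
// Sketch of the suggested generic member layout: one limb constraint that
// holds any chain of joints and body nodes instead of hard-coded
// mHipJoint/mKneeJoint/mAnkleJoint members. Illustrative names only.
#include <vector>

namespace dart { namespace dynamics {
class Joint;    // forward declarations of the real DART types
class BodyNode;
}} // namespace dart::dynamics

class LimbJointLimitConstraint
{
protected:
  std::vector<dart::dynamics::Joint*> mJoints;       // arbitrary limb chain
  std::vector<dart::dynamics::BodyNode*> mBodyNodes; // matching body nodes
};
```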

I'll do a more thorough review as soon as I get the chance. There's a lot of content to look over, so I'm not sure that I could offer an ETA right now.

@jyf588 (Author) commented Oct 3, 2017

Hi Grey, thanks for the feedback. Yes, the constraints are specific to humans, since the neural net functions are learned from data on the human range of motion. (For instance, our shoulders have an extremely large ROM.)

I think I could merge the two classes into one limb class with some effort. I didn't do that because we would then see some large if-blocks in the core constraint functions. Since q_arm and q_leg have different numbers of DOFs (4 vs. 6; we don't have data for hands) and different neural net shapes, they don't share logic during back-propagation, for example. In terms of readability, is that what we prefer?

Actually, the fitted functions are pretty specific to a certain configuration: the first joint has to be a 3-DoF shoulder and the second a 1-DoF elbow, and we still need certain box limits set on top of our functions. I guess since humans are often what we are interested in, we might be a bit more "tolerant" about being specific.

@mxgrey (Member) commented Oct 3, 2017

Are we assuming that these constraint types must be used with a specific data set? Suppose I wanted to create an avatar of myself which accurately reflects my own flexibility (which may be more or less flexible than the data source that you used in your research project), is that something I could do using this pull request, as long as I have normal human-like joints? Now suppose I wanted an avatar of my dog, and I have collected motion capture data for him and fitted some functions using a neural network to describe his joint limits. What would be stopping me from using your constraint class to simulate those dog-like fitted functions, besides the fact that the choice of joints is hard-coded? (These are genuine questions, not rhetorical questions.)

It also strikes me that tiny_dnn is not nearly as tiny as the name would suggest. Perhaps it's tiny relative to most neural network libraries, but making it a part of mainline DART would be an enormous addition relative to the size of DART, especially when considering how very specific its application is. Should we really put the general population of DART users in a position where they need to store and compile that much code when it's only necessary for a very specific application which they might not even be interested in?

I would propose that these constraint types (along with tiny_dnn) should not be part of the DART repo, but instead should be in its own repo, like dartsim/human_joint_constraints (naming is of course open for discussion). It can be compiled as its own library, and we might even want to package it and make it available over apt/Homebrew, but we should not distribute it as a canonical default part of DART.

And I want to be clear: My feeling on this says nothing about the quality of the content. I've watched the video and skimmed the paper, and I fully believe it's very valuable, high-quality work. But its application is very specific, especially in its current implementation, and we should not underestimate the burden of saddling users with bytes that they are unlikely to use. Software bloat can become a seriously debilitating issue if gone unchecked.

I am happy to explore whatever changes need to be made to DART to allow these constraints to be used from an external library. I suspect those changes would be relatively minor and very desirable for the general population of DART users. I can also offer guidance on how to set your classes up in their own repo as an external library, although I may be a little slow on that since I have some projects right now that I'm trying to finish up soon.

If the implementations of these constraint types can be made more generic (and especially if the direct dependency on tiny_dnn can be removed) then I'd be happy to reconsider merging it into the main DART repo. Until then, I think it makes more sense as an external library.

If anyone disagrees with my evaluation, I'll gladly reconsider my stance.

@jslee02 (Member) commented Oct 3, 2017

I generally agree with @mxgrey. Another possible solution would be to keep the constraints in this repo but use tiny-dnn as an external library, like any other library DART depends on.

One problem with this is that tiny-dnn is not available through any package manager (e.g., apt-get or Homebrew). To resolve this, we could either (1) ask the tiny-dnn developers to consider supporting those package managers, or (2) do the work ourselves (by using a PPA and uploading Homebrew formulae to a tap, or even to mainstream Homebrew). I think we could start with (2) and hope that (1) happens in the future. I believe many other projects using tiny-dnn share the thought that including the "not tiny" tiny-dnn may not be a good idea.

@mxgrey (Member) commented Oct 3, 2017

Even without the tiny-dnn dependency, I would like to see a more generic way of expressing these joint constraints before making it a canonical part of DART. I fear a repeat of the situation we currently have with soft bodies (edit: and also planning) where the implementation that we're providing as a canonical part of DART is deceptively application-specific.

@jslee02 (Member) commented Oct 3, 2017

I would like to see a more generic way of expressing these joint constraints before making it a canonical part of DART.

👍 I also have the same question as the first paragraph of this comment, before adding more thoughts on whether this PR belongs in this repo or an outside repo.

I fear a repeat of the situation we currently have with soft bodies (edit: and also planning) where the implementation that we're providing as a canonical part of DART is deceptively application-specific.

I'm also considering splitting the planning component out of the canonical part of DART, as it's not maintained and our group is developing a motion planning component in a separate project, Aikido. However, I'm not clear on why you think soft bodies should also be out of the DART core.

@mxgrey (Member) commented Oct 3, 2017

I agree that planning should be removed. We decided to hang onto it a few years ago because at least one of Mike's students was still using it, but I don't believe it has any more users now, so I think we can slate it for removal in the next major release.

I don't think soft bodies should be removed from DART, but I do believe they should be replaced with a soft body framework that's more comprehensive and general. What we currently have is better than not having any soft body support at all, so we should certainly keep it until we can put together a replacement.

@jyf588 (Author) commented Oct 3, 2017

Thanks for the input here, Grey. I greatly appreciate it. I tend to agree that this work might be better kept external to DART; the burden of tiny-dnn is high considering it is only used in one application. Let me talk to Karen and see how she feels about it.

@jslee02 (Member) commented Nov 12, 2017

It seems we have two major issues here:

  • Issue 1: tiny-dnn is too large to include in the DART repo
  • Issue 2: HumanArmJointLimitConstraint and HumanLegJointLimitConstraint are application-specific

I think these issues are solvable.

Issue 1 can be resolved by making tiny-dnn a build dependency, as suggested by @mxgrey. I guess we could take a similar approach to Impl, but still make tiny-dnn available in the build process. For this, I think we should:

  1. package tiny-dnn as Debian and Homebrew packages, hosting them on the dartsim PPA and dartsim/homebrew-dart, respectively
  2. add the tiny-dnn package as a DART dependency

Issue 2 can also be resolved by revising the new constraints to be generic enough to work with any set of DegreeOfFreedoms (or Joints), which seems doable. The necessary changes would be to (see the sketch after this list):

  1. create a new joint constraint (replacing HumanArmJointLimitConstraint and HumanLegJointLimitConstraint) that takes a set of DegreeOfFreedoms and a file path to the learned neural network parameters
  2. add new tags to the Skel file format for loading the new joint constraint
  3. modify SkelParser to load the new tags

The above description is pretty simplified but we could continue to discuss the details.
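
For concreteness, a rough sketch of what step 1 could look like (the class name, constructor signature, and loadParameters() helper are assumptions for discussion, not an agreed-upon API):

```cpp
// Rough sketch of step 1 above: a single generic constraint that takes the
// DOFs it governs plus a path to the learned network parameters.
#include <string>
#include <vector>

namespace dart { namespace dynamics { class DegreeOfFreedom; } }

class GenericJointLimitConstraint
{
public:
  GenericJointLimitConstraint(
      const std::vector<dart::dynamics::DegreeOfFreedom*>& dofs,
      const std::string& networkParameterPath)
    : mDofs(dofs)
  {
    loadParameters(networkParameterPath);
  }

private:
  void loadParameters(const std::string& path)
  {
    // Deserialize the learned NN weights here, e.g. via tiny-dnn's
    // serialization support. Omitted in this sketch.
    (void)path;
  }

  std::vector<dart::dynamics::DegreeOfFreedom*> mDofs;
};
```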

My only unclear part in making the joint constraint class generic is the input and output of the neural network in the Human[Arm/Leg]JointLimitConstraint classes. The data type is simply tiny_dnn::vec_t, but I'm not sure (a) how to compute the input from the joint positions and velocities, or (b) how to use the output to compute the constraint Jacobian. Specifically, this and this code for (a), and this and this code for (b).

Could you point out which part I should look at in your paper, or simply explain it here for me? In particular, I wonder where those sin/cos terms come from.

@jyf588 (Author) commented Nov 13, 2017

Let me first try to explain this line:

qz is one DoF (a joint angle). Though it ranges over (-inf, inf), it is actually circular: qz and qz + 2kπ are the same configuration. Neural nets dislike inputs in (-inf, inf), so we inform the net of this by inputting cos(qz) and sin(qz) instead of qz itself.

All the other DoFs (e.g., qx) are limited to a range of π; they lie on a semicircle rather than a full circle as qz does. In this case, we replace qx with cos(qx), so that all the inputs to the NN are in the range [-1, 1], which makes training easier.

This is a consequence of the above. Since we input sin(q) instead of q, we need an additional chain-rule factor when calculating gradients.

This is simply because we train the NN on the left arm/leg but would like it to work for the right limbs as well, so we mirror some joint angles before feeding them to the nets.
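
Putting (a) and (b) together, a hedged sketch of the encoding and the chain rule (q[0] is assumed to be the circular qz, and sin() is used for the bounded DoFs purely for illustration; the exact per-DoF trig choice follows the trained model):

```cpp
// Sketch of the pre-processing (a) and gradient chain rule (b) described
// above. Function names are made up for illustration.
#include <cmath>
#include <Eigen/Dense>

// (a) Encode joint positions into NN inputs: qz becomes the pair
// (cos qz, sin qz); every other DoF becomes a single trig value, so all
// inputs lie in [-1, 1].
Eigen::VectorXd encodeInput(const Eigen::VectorXd& q)
{
  Eigen::VectorXd x(q.size() + 1);
  x[0] = std::cos(q[0]);
  x[1] = std::sin(q[0]);
  for (int i = 1; i < q.size(); ++i)
    x[i + 1] = std::sin(q[i]);
  return x;
}

// (b) Chain rule: back-prop yields dC/dx w.r.t. the encoded inputs; to
// recover dC/dq, multiply by dx/dq, using d(cos q)/dq = -sin q and
// d(sin q)/dq = cos q.
Eigen::VectorXd decodeGradient(const Eigen::VectorXd& dCdx,
                               const Eigen::VectorXd& q)
{
  Eigen::VectorXd dCdq(q.size());
  dCdq[0] = -dCdx[0] * std::sin(q[0]) + dCdx[1] * std::cos(q[0]);
  for (int i = 1; i < q.size(); ++i)
    dCdq[i] = dCdx[i + 1] * std::cos(q[i]);
  return dCdq;
}
```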

Thanks for composing a blueprint for rebuilding this pull request. I thought about your suggestions and agree that the main unclear part is the pre/post-processing discussed above. With the goal of making the class more generalizable, would it help to make the pre/post-processing steps overridable (virtual) functions? I imagine every joint-limit class would need slightly different steps (such as whether to use sin or cos). I am surely no expert here; I just think this should be taken care of.
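
For discussion, a purely illustrative sketch of such hooks (names made up; not a worked-out design):

```cpp
// Sketch of the overridable pre/post-processing hooks suggested above: the
// base class owns the generic NN evaluation, and each limb- or
// creature-specific subclass overrides only the encoding and
// gradient-decoding steps.
#include <Eigen/Dense>

class GenericJointLimitConstraint
{
protected:
  // Map joint positions to NN inputs (per-DoF sin/cos choice, mirroring
  // for right limbs, etc.).
  virtual Eigen::VectorXd encodeInput(const Eigen::VectorXd& q) const = 0;

  // Map the NN's input gradient dC/dx back to dC/dq via the chain rule.
  virtual Eigen::VectorXd decodeGradient(
      const Eigen::VectorXd& dCdx, const Eigen::VectorXd& q) const = 0;

public:
  virtual ~GenericJointLimitConstraint() = default;
};
```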

@jslee02 (Member) commented Feb 26, 2018

@jyf588 Thanks for the explanation. I think the constraints need to be more generalized in order to become a canonical part of DART. Let's revisit this issue later. In the meantime, I have moved the application-specific code to the example directory in #1016.

Closing in favor of #1016

@jslee02 closed this on Feb 26, 2018