Commit
Add MANNAutoregressive example
GiulioRomualdi committed Nov 27, 2023
1 parent 6ad91e1 commit 46711c4
Showing 3 changed files with 291 additions and 0 deletions.
237 changes: 237 additions & 0 deletions examples/MANN/MANNAutoregressive.ipynb
Original file line number Diff line number Diff line change
@@ -0,0 +1,237 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# MANNAutoregressive\n",
"This notebook shows how to use MANNAutoregressive to plan a walking trajectory for the ergoCub humanoid robot.\n",
"\n",
"If you are interested in some details of the algorithm, please refer to the paper: [ADHERENT: Learning Human-like Trajectory Generators for Whole-body Control of Humanoid Robots](https://github.com/ami-iit/paper_viceconte_2021_ral_adherent)\n",
"\n",
"## Import all the required packages\n",
"In this section we import all the required packages: `iDynTree` to visualize the robot, `numpy` for basic numerical operations, `manifpy` for operations on manifolds, and `resolve_robotics_uri_py` to locate the `ergoCub` model. Finally, `bipedal_locomotion_framework` provides the MANN network."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e357295-6730-47d6-9119-bd09ed598e45",
"metadata": {},
"outputs": [],
"source": [
"from idyntree.visualize import MeshcatVisualizer\n",
"import idyntree.bindings as idyn\n",
"from pathlib import Path\n",
"import numpy as np\n",
"import manifpy as manif\n",
"import bipedal_locomotion_framework as blf\n",
"import resolve_robotics_uri_py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Prepare the ingredients\n",
"To correctly run the notebook, we need to prepare the ingredients. In particular, we need to:\n",
"- Get the network model: In this case we use the model trained on the ergoCub robot. The model is available [here](https://huggingface.co/ami-iit/mann/resolve/main/ergocubSN000_26j_49e.onnx).\n",
"- Load the configuration files: They contain all the parameters needed to run the network and to generate a proper set of inputs for the network. In detail we have two configuration files:\n",
" - `config_mann.toml`: This file contains the parameters needed to correctly load and run the network\n",
" - `config_joypad.toml`: This file contains the parameters needed to correctly generate the input for the network. In particular, it contains the parameters needed to correctly map a joypad input to the network input.\n",
"- Load the robot model: In this case we use the `ergoCub` model. We load the model only to correctly visualize the robot.\n",
"\n",
"### Get the network model\n",
"We download the network model and save it in the `config` folder. The model is available [here](https://huggingface.co/ami-iit/mann/resolve/main/ergocubSN000_26j_49e.onnx).\n",
"We then load the parameters needed to run the network."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "08029010-6814-4101-af0f-ac2940560eac",
"metadata": {},
"outputs": [],
"source": [
"# We use urllib to download the onnx model from huggingface\n",
"import urllib.request\n",
"import os\n",
"url = \"https://huggingface.co/ami-iit/mann/resolve/main/ergocubSN000_26j_49e.onnx\"\n",
"urllib.request.urlretrieve(url, \"./config/ergocubSN000_26j_49e.onnx\")\n",
"\n",
"# Get the configuration files\n",
"config_path = Path(\"__file__\").parent / \"config\" / \"config_mann.toml\"\n",
"params_network = blf.parameters_handler.TomlParametersHandler()\n",
"params_network.set_from_file(str(config_path))\n",
"\n",
"joypad_config_path = Path(\"__file__\").parent / \"config\" / \"config_joypad.toml\"\n",
"params_joypad = blf.parameters_handler.TomlParametersHandler()\n",
"params_joypad.set_from_file(str(joypad_config_path))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Load ergoCub model\n",
"We load the `ergoCub` model to visualize the robot.\n",
"The model is loaded using `iDynTree`, with `resolve_robotics_uri_py` used to locate the model file. Please note that the model is loaded specifying the same joint order used to train the network."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e3817e56-9e8d-464e-9f0b-6577a2c0f2fe",
"metadata": {},
"outputs": [],
"source": [
"# Get the path of the robot model\n",
"robot_model_path = str(resolve_robotics_uri_py.resolve_robotics_uri(\"package://ergoCub/robots/ergoCubSN001/model.urdf\"))\n",
"ml = idyn.ModelLoader()\n",
"ml.loadReducedModelFromFile(robot_model_path, params_network.get_parameter_vector_string(\"joints_list\"))\n",
"viz = MeshcatVisualizer()\n",
"viz.load_model(ml.model())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Network initialization\n",
"In this section we initialize the network: we first instantiate it and then load its parameters from the `config_mann.toml` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Create the trajectory generator\n",
"mann_trajectory_generator = blf.ml.MANNAutoregressive()\n",
"assert mann_trajectory_generator.set_robot_model(ml.model())\n",
"assert mann_trajectory_generator.initialize(params_network)\n",
"\n",
"# Create the input builder\n",
"input_builder = blf.ml.MANNAutoregressiveInputBuilder()\n",
"assert input_builder.initialize(params_joypad)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We now need to reset the network to its initial state. In particular, we reset the joint state and the base pose to a configuration already seen by the network during training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Initial joint positions configuration. The serialization is specified in the config file\n",
"joint_positions = np.array([-0.10922017141063572, 0.05081325960010118, 0.06581966291990003, -0.0898053099824925, -0.09324922528169599, -0.05110058859172172,\n",
" -0.11021232812838086, 0.054291515925228385,0.0735575862560208, -0.09509332143185895, -0.09833823347493076, -0.05367281245082792,\n",
" +0.1531558711397399, -0.001030634273454133, 0.0006584764419034815,\n",
" -0.0016821925351926288, -0.004284529460797688, 0.030389771690123243,\n",
" -0.040592118429752494, -0.1695472679986807, -0.20799422095574033, 0.045397975984119654,\n",
" -0.03946672931050908, -0.16795588539580256, -0.20911090583076936, 0.0419854257806720])\n",
"\n",
"# Initial base pose. This pose makes the robot stand on the ground with the feet flat\n",
"initial_base_height = 0.7748\n",
"base_pose = manif.SE3.Identity()\n",
"quat = np.array([ 0, -0.0399893, 0, 0.9992001])\n",
"quat = quat / np.linalg.norm(quat)\n",
"base_pose = manif.SE3([0, 0, initial_base_height], quat)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We finally ask the input builder to generate an input that mimics a joypad command with the axes set to the following values:\n",
"- `left_stick_x`: 1.0\n",
"- `left_stick_y`: 0.0\n",
"- `right_stick_x`: 1.0\n",
"- `right_stick_y`: 0.0"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"input_builder_input = blf.ml.MANNDirectionalInput()\n",
"input_builder_input.motion_direction = np.array([1, 0])\n",
"input_builder_input.facing_direction = np.array([1, 0])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Network execution\n",
"We now visualize the robot in the viewer and run the network for 200 steps, updating the visualization at each step."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"viz.jupyter_cell()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reset the trajectory generator\n",
"mann_trajectory_generator.reset(joint_positions, base_pose)\n",
"for i in range(200):\n",
"\n",
" # Set the input to the builder\n",
" input_builder.set_input(input_builder_input)\n",
" assert input_builder.advance()\n",
" assert input_builder.is_output_valid()\n",
"\n",
" # Set the input to the trajectory generator\n",
" mann_trajectory_generator.set_input(input_builder.get_output())\n",
" assert mann_trajectory_generator.advance()\n",
" assert mann_trajectory_generator.is_output_valid()\n",
"\n",
" # Get the output of the trajectory generator and update the visualization\n",
" mann_output = mann_trajectory_generator.get_output()\n",
" viz.set_multibody_system_state(mann_output.base_pose.translation(),\n",
" mann_output.base_pose.rotation(),\n",
" mann_output.joint_positions)\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.12"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
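The notebook's initial base pose uses the quaternion `[0, -0.0399893, 0, 0.9992001]`, which (assuming manif's `(x, y, z, w)` coefficient ordering) has only a non-zero `y` component and is therefore a pure pitch rotation. A quick check using standard quaternion identities recovers the angle:

```python
import math

# Initial base orientation from the notebook, assumed (x, y, z, w) ordering
quat_xyzw = [0.0, -0.0399893, 0.0, 0.9992001]

# A pure pitch rotation has the form [0, sin(theta/2), 0, cos(theta/2)]
pitch_rad = 2.0 * math.atan2(quat_xyzw[1], quat_xyzw[3])
pitch_deg = math.degrees(pitch_rad)  # roughly -4.6 degrees
```

This small backward pitch, together with the 0.7748 m base height, places the robot standing with flat feet.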
10 changes: 10 additions & 0 deletions examples/MANN/config/config_joypad.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,10 @@
base_vel_norm = 0.4
ellipsoid_forward_axis = 1.0
ellipsoid_side_axis = 0.9
ellipsoid_backward_axis = 0.6
ellipsoid_scaling_factor = 0.4
max_facing_direction_angle_forward = 0.20
max_facing_direction_angle_backward = 0.1
max_facing_direction_angle_side_opposite_sign = 0.26
max_facing_direction_angle_side_same_sign = 0.17
number_of_knots = 7
44 changes: 44 additions & 0 deletions examples/MANN/config/config_mann.toml
Original file line number Diff line number Diff line change
@@ -0,0 +1,44 @@

joints_list = ['l_hip_pitch', 'l_hip_roll', 'l_hip_yaw', 'l_knee', 'l_ankle_pitch', 'l_ankle_roll', # left leg
'r_hip_pitch', 'r_hip_roll', 'r_hip_yaw', 'r_knee', 'r_ankle_pitch', 'r_ankle_roll', # right leg
'torso_pitch', 'torso_roll', 'torso_yaw', # torso
'neck_pitch', 'neck_roll', 'neck_yaw', # neck
'l_shoulder_pitch', 'l_shoulder_roll', 'l_shoulder_yaw', 'l_elbow', # left arm
'r_shoulder_pitch', 'r_shoulder_roll', 'r_shoulder_yaw', 'r_elbow'] # right arm

root_link_frame_name = "root_link"
chest_link_frame_name = "chest"
left_foot_frame_name = "l_sole"
right_foot_frame_name = "r_sole"
sampling_time = 0.02
time_horizon = 2.0
slow_down_factor = 5
forward_direction = "x"
scaling_factor = 1.0

[LEFT_FOOT]
number_of_corners = 4
corner_0 = [0.08, 0.03, 0.0]
corner_1 = [0.08, -0.03, 0.0]
corner_2 = [-0.08, -0.03, 0.0]
corner_3 = [-0.08, 0.03, 0.0]
on_threshold = 0.02
off_threshold = 0.02
switch_on_after = 0.02
switch_off_after = 0.02


[RIGHT_FOOT]
number_of_corners = 4
corner_0 = [0.08, 0.03, 0.0]
corner_1 = [0.08, -0.03, 0.0]
corner_2 = [-0.08, -0.03, 0.0]
corner_3 = [-0.08, 0.03, 0.0]
on_threshold = 0.02
off_threshold = 0.02
switch_on_after = 0.02
switch_off_after = 0.02

[MANN]
projected_base_horizon = 12
onnx_model_path = "./config/ergocubSN000_26j_49e.onnx"
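Note that `joints_list` in this file holds 26 joints (6 per leg, 3 torso, 3 neck, 4 per arm), matching the `26j` in the ONNX file name. The foot corner lists describe a rectangular contact patch; an illustrative shoelace-formula check (not part of the framework) recovers its area:

```python
def polygon_area(corners):
    """Shoelace formula for a planar polygon given as ordered (x, y, z) corners."""
    area = 0.0
    n = len(corners)
    for i in range(n):
        x1, y1 = corners[i][0], corners[i][1]
        x2, y2 = corners[(i + 1) % n][0], corners[(i + 1) % n][1]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# Corners from the [LEFT_FOOT] section: a 0.16 m x 0.06 m rectangle
foot_corners = [[0.08, 0.03, 0.0], [0.08, -0.03, 0.0],
                [-0.08, -0.03, 0.0], [-0.08, 0.03, 0.0]]
# -> 0.0096 m^2
```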

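As a closing note on the notebook's download cell: it fetches the ONNX model from Hugging Face on every run. A minimal cached variant (the helper name `download_if_missing` is hypothetical, not part of the notebook or the framework) could look like:

```python
import os
import urllib.request

def download_if_missing(url: str, dest: str) -> str:
    """Download url to dest, skipping the download if the file already exists."""
    if not os.path.exists(dest):
        os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
        urllib.request.urlretrieve(url, dest)
    return dest
```

With this helper, re-running the notebook does not hit the network once the model is already on disk.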