Add FeetechMotorsBus, SO-100, Moss-v1 #419

Merged Oct 25, 2024 · 61 commits (changes shown from 48 commits)

Commits
dfdfeab
Add feetech (WIP)
Cadene Sep 4, 2024
9d23d04
WIP
Cadene Sep 6, 2024
2826c22
2 fixes
Cadene Sep 17, 2024
50c4b51
TO REMOVE, before merging
Cadene Sep 17, 2024
0035bb9
Merge branch 'user/rcadene/2024_09_04_feetech' of github.com:huggingf…
Sep 17, 2024
89da9f7
Added a configuration script that can be used for feetech and dynamix…
Sep 17, 2024
1c1882e
Added find_motor_bus_ports.py
Sep 17, 2024
aa22946
Removed configuration and port finding code from classes no longer ne…
Sep 17, 2024
5aa2a88
Merge branch 'main' of github.com:huggingface/lerobot into HEAD
Sep 17, 2024
be5336f
Made script to save camera images instead of these funcitons existing…
Sep 20, 2024
34dcd05
Made script to save camera images instead of these funcitons existing…
Sep 20, 2024
8b51a20
WIP to remove
Cadene Sep 23, 2024
62651cb
WIP
Cadene Sep 25, 2024
6ca54a1
WIP
Cadene Oct 3, 2024
b07f91b
Refactor record
Cadene Oct 12, 2024
7115fef
fix
Cadene Oct 12, 2024
9f5c586
fix unit test
Cadene Oct 13, 2024
63bd501
overall improve, fix some issues with events, add some tests for events
Cadene Oct 13, 2024
517a261
reset_time_s=1 in tests
Cadene Oct 13, 2024
b1fd099
TOREMOVE: isolate test
Cadene Oct 13, 2024
71245e3
python --version
Cadene Oct 13, 2024
876fcc4
setup python 3.10 before
Cadene Oct 13, 2024
1f0bdec
test
Cadene Oct 13, 2024
d02e204
test
Cadene Oct 13, 2024
eed7b55
Fix unit tests
Cadene Oct 13, 2024
daf18dc
isolate tests
Cadene Oct 13, 2024
20f5ac3
remove test isolation
Cadene Oct 13, 2024
904aaa4
Refactor -> control_loop()
Cadene Oct 14, 2024
94c3a48
small fix
Cadene Oct 14, 2024
99414f3
Handle fps=None in control_loop
Cadene Oct 14, 2024
87fef10
small fix
Cadene Oct 14, 2024
e039496
small fix unit tests
Cadene Oct 14, 2024
cde4945
small fix unit tests
Cadene Oct 14, 2024
091177d
Update .github/workflows/test.yml
Cadene Oct 15, 2024
3960125
Update lerobot/common/robot_devices/robots/manipulator.py
Cadene Oct 15, 2024
cb30d7a
Do not override fps
Cadene Oct 15, 2024
19d410a
fix unit tests
Cadene Oct 15, 2024
29f6abc
Add safe_disconnect to replay
Cadene Oct 15, 2024
cfa5ce0
Merge remote-tracking branch 'origin/user/rcadene/2024_09_04_feetech'…
Cadene Oct 16, 2024
eeea3e5
merge but remove refactor of save_camera_images
Cadene Oct 16, 2024
79ac1ad
fix teleop
Cadene Oct 16, 2024
1990f9c
make it work
Cadene Oct 16, 2024
994209d
Refactor to have dynamixel_calibration and feetech_calibration
Cadene Oct 18, 2024
48da694
auto calibration works
Cadene Oct 19, 2024
bee2b3c
Update tutorial
Cadene Oct 19, 2024
1d92acf
fix
Cadene Oct 20, 2024
ea6b27d
Fix configure, autocalibration, Add media
Cadene Oct 23, 2024
68d7ab9
Merge remote-tracking branch 'origin/main' into user/rcadene/2024_09_…
Cadene Oct 23, 2024
67b28e1
first auto-review
Cadene Oct 23, 2024
2b558df
Add mock to feetech
Cadene Oct 23, 2024
5d64ba5
fix unit tests
Cadene Oct 24, 2024
4d03ece
fix unit tests
Cadene Oct 24, 2024
8d62464
isolate tests
Cadene Oct 24, 2024
3368e8c
move mock_calibration_dir in utils
Cadene Oct 24, 2024
35dd9f8
Revert "isolate tests"
Cadene Oct 24, 2024
55a499a
Add 11_use_moss.md, Clean, Add policy/env yaml
Cadene Oct 24, 2024
b71ca0a
Address review with Simon
Cadene Oct 24, 2024
90b2f08
Add youtube link
Cadene Oct 25, 2024
e60da8b
Update README
Cadene Oct 25, 2024
10c8c8a
revert to default koch
Cadene Oct 25, 2024
a8f48bc
Make koch_bimanual a robot_type
Cadene Oct 25, 2024
208 changes: 208 additions & 0 deletions examples/10_use_so100.md
@@ -0,0 +1,208 @@
This tutorial explains how to use [SO-100](https://github.com/TheRobotStudio/SO-ARM100) with LeRobot.

## Source the parts

Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials with links to source the parts, instructions to 3D print the parts, and advice if it's your first time printing or if you don't already own a 3D printer.

**Important**: Before assembling, you will first need to configure your motors. To do so, we provide a script, so let's install LeRobot first. An assembly tutorial will follow.

## Install LeRobot

On your computer:

1. [Install Miniconda](https://docs.anaconda.com/miniconda/#quick-command-line-install):
```bash
mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
~/miniconda3/bin/conda init bash
```

2. Restart shell or `source ~/.bashrc`

3. Create and activate a fresh conda environment for lerobot
```bash
conda create -y -n lerobot python=3.10 && conda activate lerobot
```

4. Clone LeRobot:
```bash
git clone https://github.com/huggingface/lerobot.git ~/lerobot
```

5. Install LeRobot with dependencies for the feetech motors:
```bash
cd ~/lerobot && pip install -e ".[feetech]"
```

For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
```

## Configure the motors

Run this script twice, once per arm, to find the port (e.g. "/dev/tty.usbmodem58760432961") of each motor bus:
```bash
python lerobot/scripts/find_motors_bus_port.py
```
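Under the hood, the port-finding script works by listing serial ports, asking you to unplug the bus cable, and listing again: the port that disappeared is the one you want. A minimal sketch of that logic (the helper name and the two port snapshots below are illustrative, not LeRobot's actual code or real device names):

```python
def find_unplugged_port(ports_before, ports_after):
    """Return the single port that was present before unplugging but is gone after."""
    diff = set(ports_before) - set(ports_after)
    if len(diff) != 1:
        raise OSError(f"Expected exactly one port to disappear, got {sorted(diff)}")
    return diff.pop()

# Hypothetical snapshots taken before/after unplugging the bus cable:
before = ["/dev/tty.usbmodem58760432961", "/dev/tty.Bluetooth-Incoming-Port"]
after = ["/dev/tty.Bluetooth-Incoming-Port"]
print(find_unplugged_port(before, after))  # → /dev/tty.usbmodem58760432961
```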

Then plug in your first motor, corresponding to "shoulder_pan", and run this script to set its ID to 1 and its present position and offset to ~2048 (useful for calibration):
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1
```

Then unplug that motor, plug in the second motor, corresponding to "shoulder_lift", and set its ID to 2:
```bash
python lerobot/scripts/configure_motor.py \
--port /dev/tty.usbmodem58760432961 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 2
```

Repeat the process for the remaining motors, up to the gripper with ID 6. Then do the same for the leader arm's motors, again starting from ID 1 up to 6.
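Since the command is identical for all six motors except for `--ID`, a small helper that generates each command in turn can reduce copy-paste mistakes. This is only a convenience sketch (the helper is hypothetical, and you should substitute your actual port); you can print the commands and run them one by one as you swap motors:

```python
# Sketch: build the configure_motor.py command for each motor ID in turn.
def configure_commands(port, n_motors=6, brand="feetech", model="sts3215", baudrate=1000000):
    cmds = []
    for motor_id in range(1, n_motors + 1):
        cmds.append(
            f"python lerobot/scripts/configure_motor.py "
            f"--port {port} --brand {brand} --model {model} "
            f"--baudrate {baudrate} --ID {motor_id}"
        )
    return cmds

for cmd in configure_commands("/dev/tty.usbmodem58760432961"):
    print(cmd)  # plug in the matching motor, run the command, unplug, repeat
```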

## Assemble the arms

TODO

## Calibrate

```bash
python lerobot/scripts/control_robot.py calibrate \
--robot-path lerobot/configs/robot/so100.yaml \
--robot-overrides '~cameras'
```
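Calibration matters because the STS3215's magnetic encoder reports raw ticks over a 4096-step turn; setting each motor's present position to ~2048 during configuration centers every joint mid-range. Here is a hedged sketch of the kind of tick-to-degree mapping calibration produces (this is an illustration of the idea, not LeRobot's actual calibration code):

```python
RESOLUTION = 4096   # encoder steps per full turn for an STS3215-class servo
CENTER = 2048       # the "present position" targeted during motor configuration

def ticks_to_degrees(raw, homing_offset=0, drive_mode=1):
    """Map a raw encoder reading to degrees around the calibrated zero.

    drive_mode is +1 or -1 depending on which way the motor is mounted.
    """
    return drive_mode * (raw + homing_offset - CENTER) * 360.0 / RESOLUTION

print(ticks_to_degrees(2048))  # → 0.0 (joint at center)
print(ticks_to_degrees(3072))  # → 90.0 (a quarter turn away)
```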

## Teleoperate

Without displaying the cameras:
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml \
--robot-overrides '~cameras' \
--display-cameras 0
```

With displaying the cameras:
```bash
python lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/so100.yaml
```
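Conceptually, teleoperation is a fixed-rate loop: read the leader arm's joint positions, write them to the follower, and wait out the remainder of the period. A hedged sketch of that loop, with placeholder callables standing in for the real motor buses (none of these names are LeRobot's API):

```python
import time

def busy_wait(seconds):
    # Spin until the deadline to hold a steady control rate.
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

def teleop_loop(read_leader, write_follower, fps=30, n_steps=3):
    period = 1.0 / fps
    for _ in range(n_steps):
        start = time.perf_counter()
        write_follower(read_leader())  # mirror leader -> follower
        busy_wait(max(0.0, period - (time.perf_counter() - start)))

# Placeholder devices: leader always reads centered joints, follower records writes.
positions = []
teleop_loop(read_leader=lambda: [2048] * 6,
            write_follower=positions.append,
            fps=30, n_steps=3)
print(len(positions))  # → 3
```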

## Record a dataset

Once you're familiar with teleoperation, you can record your first dataset with the SO-100.

If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Store your Hugging Face repository name in a variable to run these commands:
```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```

Record 2 episodes and upload your dataset to the hub:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/so100_test \
--tags so100 tutorial \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 2 \
--push-to-hub 1
```
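The timing flags above determine how long the session takes: one warmup, then for each episode a recording phase followed by an environment-reset phase. A small sketch of that arithmetic (the helper is just an illustration):

```python
def session_length_s(warmup_s, episode_s, reset_s, num_episodes):
    """Rough lower bound on wall-clock time for a recording session."""
    return warmup_s + num_episodes * (episode_s + reset_s)

# The command above: 5s warmup, then 2 episodes of 40s each with 10s resets.
print(session_length_s(5, 40, 10, 2))  # → 105
```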

## Visualize a dataset

If you uploaded your dataset to the hub with `--push-to-hub 1`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy pasting your repo id given by:
```bash
echo ${HF_USER}/so100_test
```

If you didn't push to the hub (`--push-to-hub 0`), you can also visualize it locally with:
```bash
python lerobot/scripts/visualize_dataset_html.py \
--root data \
--repo-id ${HF_USER}/so100_test
```

## Replay an episode

Now try to replay the first episode on your robot:
```bash
DATA_DIR=data python lerobot/scripts/control_robot.py replay \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/so100_test \
--episode 0
```

## Train a policy

To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
```bash
DATA_DIR=data python lerobot/scripts/train.py \
dataset_repo_id=${HF_USER}/so100_test \
policy=act_so100_real \
env=so100_real \
hydra.run.dir=outputs/train/act_so100_test \
hydra.job.name=act_so100_test \
device=cuda \
wandb.enable=true
```

Let's explain it:
1. We provided the dataset as argument with `dataset_repo_id=${HF_USER}/so100_test`.
2. We provided the policy with `policy=act_so100_real`. This loads configurations from [`lerobot/configs/policy/act_so100_real.yaml`](../lerobot/configs/policy/act_so100_real.yaml). Importantly, this policy expects two cameras as input, named `laptop` and `phone`.
3. We provided an environment as argument with `env=so100_real`. This loads configurations from [`lerobot/configs/env/so100_real.yaml`](../lerobot/configs/env/so100_real.yaml).
4. We provided `device=cuda` since we are training on an Nvidia GPU, but you could use `device=mps` on a Mac with Apple silicon, or `device=cpu` otherwise.
5. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.
6. We added `DATA_DIR=data` to access your dataset stored in your local `data` directory. If you don't provide `DATA_DIR`, your dataset will be downloaded from the Hugging Face hub to your cache folder `$HOME/.cache/huggingface`. In future versions of `lerobot`, both directories will be kept in sync.
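Note that `train.py` takes Hydra-style `key=value` overrides rather than `--flag` options, and dotted keys like `wandb.enable` address nested config fields. A toy sketch of how such overrides land in a nested config (this is for intuition only, not the real Hydra implementation):

```python
def apply_overrides(config, overrides):
    """Toy version of Hydra-style dotted key=value overrides (not real Hydra)."""
    for item in overrides:
        key, value = item.split("=", 1)
        node = config
        *parents, leaf = key.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # create nested dicts as needed
        node[leaf] = value
    return config

cfg = apply_overrides({}, ["device=cuda", "wandb.enable=true", "hydra.job.name=act_so100_test"])
print(cfg["wandb"]["enable"])  # → true
```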

Training should take several hours. You will find checkpoints in `outputs/train/act_so100_test/checkpoints`.

## Evaluate your policy

You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
```bash
python lerobot/scripts/control_robot.py record \
--robot-path lerobot/configs/robot/so100.yaml \
--fps 30 \
--root data \
--repo-id ${HF_USER}/eval_act_so100_test \
--tags so100 tutorial eval \
--warmup-time-s 5 \
--episode-time-s 40 \
--reset-time-s 10 \
--num-episodes 10 \
-p outputs/train/act_so100_test/checkpoints/last/pretrained_model
```

As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
1. There is an additional `-p` argument which indicates the path to your policy checkpoint (e.g. `-p outputs/train/act_so100_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `-p ${HF_USER}/act_so100_test`).
2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `--repo-id ${HF_USER}/eval_act_so100_test`).
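If you find yourself evaluating several policies, a tiny helper keeps the `eval_` naming convention consistent (this helper is hypothetical, not part of LeRobot; the username below is an example):

```python
def eval_repo_id(train_repo_id):
    """Derive the evaluation repo id from a training dataset repo id."""
    user, name = train_repo_id.split("/", 1)
    return f"{user}/eval_{name}"

print(eval_repo_id("your_user/act_so100_test"))  # → your_user/eval_act_so100_test
```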

## More

Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth explanation.

If you have any questions or need help, please reach out on Discord in the channel `#so100-arm`.
5 changes: 3 additions & 2 deletions examples/7_get_started_with_real_robot.md
@@ -78,7 +78,7 @@ To begin, create two instances of the [`DynamixelMotorsBus`](../lerobot/common/

To find the correct ports for each arm, run the utility script twice:
```bash
-python lerobot/common/robot_devices/motors/dynamixel.py
+python lerobot/scripts/find_motors_bus_port.py
```

Example output when identifying the leader arm's port (e.g., `/dev/tty.usbmodem575E0031751` on Mac, or possibly `/dev/ttyACM0` on Linux):
@@ -544,7 +544,8 @@ To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/openc

To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
```bash
-python lerobot/common/robot_devices/cameras/opencv.py \
+python lerobot/scripts/save_images_from_cameras.py \
+    --driver opencv \
     --images-dir outputs/images_from_opencv_cameras
```

2 changes: 1 addition & 1 deletion examples/8_use_stretch.md
@@ -50,7 +50,7 @@ cd ~/lerobot && pip install -e ".[stretch]"

> **Note:** If you get this message, you can ignore it: `ERROR: pip's dependency resolver does not currently take into account all the packages that are installed.`

-And install extra dependencies for recording datasets on Linux:
+For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
2 changes: 1 addition & 1 deletion examples/9_use_aloha.md
@@ -35,7 +35,7 @@ git clone https://github.com/huggingface/lerobot.git ~/lerobot
cd ~/lerobot && pip install -e ".[dynamixel, intelrealsense]"
```

-And install extra dependencies for recording datasets on Linux:
+For Linux only (not Mac), install extra dependencies for recording datasets:
```bash
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
6 changes: 4 additions & 2 deletions lerobot/common/robot_devices/cameras/intelrealsense.py
@@ -21,9 +21,9 @@
 from lerobot.common.robot_devices.utils import (
     RobotDeviceAlreadyConnectedError,
     RobotDeviceNotConnectedError,
+    busy_wait,
 )
 from lerobot.common.utils.utils import capture_timestamp_utc
-from lerobot.scripts.control_robot import busy_wait

SERIAL_NUMBER_INDEX = 1

@@ -200,7 +200,9 @@ class IntelRealSenseCamera:

To find the camera indices of your cameras, you can run our utility script that will save a few frames for each camera:
```bash
-python lerobot/common/robot_devices/cameras/intelrealsense.py --images-dir outputs/images_from_intelrealsense_cameras
+python lerobot/scripts/save_images_from_cameras.py \
+    --driver intelrealsense \
+    --images-dir outputs/images_from_intelrealsense_cameras
```

When an IntelRealSenseCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
6 changes: 4 additions & 2 deletions lerobot/common/robot_devices/cameras/opencv.py
@@ -216,7 +216,9 @@ class OpenCVCamera:

To find the camera indices of your cameras, you can run our utility script that will save a few frames for each camera:
```bash
-python lerobot/common/robot_devices/cameras/opencv.py --images-dir outputs/images_from_opencv_cameras
+python lerobot/scripts/save_images_from_cameras.py \
+    --driver opencv \
+    --images-dir outputs/images_from_opencv_cameras
```

When an OpenCVCamera is instantiated, if no specific config is provided, the default fps, width, height and color_mode
@@ -323,7 +325,7 @@ def connect(self):
if self.camera_index not in available_cam_ids:
raise ValueError(
f"`camera_index` is expected to be one of these available cameras {available_cam_ids}, but {self.camera_index} is provided instead. "
-            "To find the camera index you should use, run `python lerobot/common/robot_devices/cameras/opencv.py`."
+            "To find the camera index you should use, run `python lerobot/scripts/save_images_from_cameras.py --driver opencv`."
)

raise OSError(f"Can't access OpenCVCamera({camera_idx}).")