Add support for Elephant Robotics MyArm M&C #506

Status: Open. apockill wants to merge 29 commits into `main` from `feature/add-myarm-support`.

Commits (29):
1c13002
Add tutorial for usage
apockill Nov 13, 2024
697eef5
Add yet-to-be-implemented myarm wrappers
apockill Nov 13, 2024
1fa1887
Add myarm.yaml initial implementation
apockill Nov 13, 2024
8772926
Add myarm env
apockill Nov 13, 2024
d5b8d76
Add act config
apockill Nov 13, 2024
18aa9b1
Update `my_arm` extra to include `pynput`
apockill Nov 16, 2024
b39538a
Add better next-steps for first time users
apockill Nov 16, 2024
c37081e
Improve out-of-bounds motor behavior when enabling torque
apockill Nov 16, 2024
8d145d3
Ignore calibration for `MyArm`
apockill Nov 16, 2024
b9b3d7e
Teleop safety stop doesn't seem to do anything for the follower robot
apockill Nov 16, 2024
a8758ff
Fix missing method
apockill Nov 16, 2024
19ccf30
Re-lock poetry after updating acton_ai
apockill Nov 16, 2024
d04c7fd
Re-lock poetry after updating acton_ai
apockill Nov 18, 2024
3321d3a
Switch to np.float32, after discovering that's what the policy requires
apockill Nov 18, 2024
62a466d
Update type hints
apockill Nov 18, 2024
c916232
Allow lerobot to disable torque
apockill Nov 18, 2024
eae8737
Run pre-commit
apockill Nov 19, 2024
e433454
Upload actual tested configuration
apockill Nov 21, 2024
f78880a
Revert FPS passthrough change
apockill Nov 21, 2024
180042b
Resolve conflicts
apockill Nov 30, 2024
8640fb5
Set up myarm_real.yaml fps to match myarm.yaml
apockill Nov 30, 2024
76559a1
Merge branch 'main' into feature/add-myarm-support
apockill Dec 4, 2024
62cffb5
Add `top` camera, increase the resolution
apockill Dec 5, 2024
17084f1
Remember to also normalize the top camera
apockill Dec 6, 2024
06a7d96
Minor policy improvements
apockill Dec 8, 2024
58455db
Merge branch 'main' into feature/add-myarm-support
apockill Dec 10, 2024
5e29914
Merge branch 'main' into feature/add-myarm-support
apockill Jan 3, 2025
3c9669e
Initial attempt at diffusion policy
apockill Jan 6, 2025
41f35e9
Merge branch 'huggingface:main' into feature/add-myarm-support
apockill Jan 10, 2025
92 changes: 92 additions & 0 deletions examples/12_use_myarm.md
@@ -0,0 +1,92 @@
This tutorial explains how to use the [Elephant Robotics MyArm M&C](https://shop.elephantrobotics.com/collections/myarm-mc) with LeRobot.

## Sourcing a MyArm

A MyArm M and MyArm C can be purchased at the [Elephant Robotics M&C store section](https://shop.elephantrobotics.com/collections/myarm-mc).

**Important:** Make sure your MyArm M and MyArm C firmware are up to date. The
firmware can be updated using the MyStudio app.

## Install LeRobot

This tutorial was tested on a Linux machine.

In the LeRobot repository, install the dependencies, including the `my_arm` extra:
```bash
poetry install --extras my_arm
```
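Once installed, you can sanity-check that the extra's dependencies resolved. A minimal stdlib-only sketch (the module names `acton_ai` and `pynput` are taken from this PR's dependency changes; the helper function itself is hypothetical):

```python
# Check that the `my_arm` extra's dependencies can be found, without importing them.
import importlib.util


def extras_installed(modules: tuple[str, ...] = ("acton_ai", "pynput")) -> dict[str, bool]:
    """Map each module name to whether it is importable in the current environment."""
    return {name: importlib.util.find_spec(name) is not None for name in modules}


print(extras_installed())
```

If any entry is `False`, re-run `poetry install --extras my_arm` and check the Poetry output for errors.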

## Verify Teleop Without LeRobot First

1. Plug in both the MyArm M and MyArm C via USB to your computer.
Ensure the following:
- both power supplies are plugged in
- the e-stop is disengaged
- the MyArm M power button is pressed

**IMPORTANT:**
If a connection attempt fails and you then discover it was an e-stop or other power issue, you will NEED to unplug the USB cable, restart the robot, fix the power issue, and then plug the USB cable back in.

2. Then, in both the `MyArmM` and `MyArmC` on-screen interfaces, select "Transponder -> USB UART".

**Find the USB ports associated with your arms**

The `acton_ai` library will automatically scan for likely arms, connect to them, and identify which is the Mover and which is the Controller.

You can also hardcode the ports in the `lerobot/configs/robot/myarm.yaml` file.
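Before hardcoding a port, it helps to see which serial devices are present. A stdlib-only sketch for Linux (the glob patterns are an assumption based on the `/dev/ttyACM*` troubleshooting note in this tutorial):

```python
# List candidate serial ports on Linux, to help pick one for myarm.yaml.
import glob


def candidate_serial_ports() -> list[str]:
    # USB-serial devices typically enumerate as /dev/ttyACM* or /dev/ttyUSB*
    return sorted(glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*"))


print(candidate_serial_ports())
```

Run it once with the arms unplugged and once with them plugged in; the ports that appear belong to the arms.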

If you see exceptions related to finding or connecting to the arms, **read them**: the `acton_ai` library provides descriptive error messages when it cannot find or connect to the arms.

Troubleshooting: On Linux, you might need to grant read/write access to the USB ports by running:
```bash
sudo chmod a+rw /dev/ttyACM*
```

3. Finally, run the teleop script to verify that you can control the arms:
```bash
acton_teleop
```

Move the leader around and verify that the follower arm tracks it accurately.

If anything seems amiss, follow the Calibrate section below. The teleop script
logs debug messages about out-of-bounds joints and other problems that might occur.
Pay attention to the logs and fix any issues before proceeding.
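The kind of out-of-bounds condition those logs warn about can be sketched as a pure function. The seven joint limits below are hypothetical placeholders, not real MyArm specs:

```python
# Hedged sketch: flag joints whose angle falls outside configured limits.
HYPOTHETICAL_LIMITS_DEG = [(-165.0, 165.0)] * 6 + [(0.0, 100.0)]  # 7 joints


def out_of_bounds_joints(angles_deg: list[float]) -> list[int]:
    """Return the indices of joints outside their (lo, hi) limits."""
    return [
        i
        for i, (angle, (lo, hi)) in enumerate(zip(angles_deg, HYPOTHETICAL_LIMITS_DEG))
        if not lo <= angle <= hi
    ]


print(out_of_bounds_joints([0, 0, 200, 0, 0, 0, 50]))  # joint 2 exceeds its limit -> [2]
```

A real check would use the limits reported by the arm firmware rather than constants.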

## Calibrate

If anything seems amiss, run
```bash
acton_calibrate
```

and follow the directions in the console. This script is actively maintained; please open
issues on the `acton_ai` GitHub repository so it can be improved.

## Teleoperate

**Simple teleop**
You are now ready to teleoperate your robot via LeRobot! Run this simple command (it won't connect to or display the cameras):
```bash
python3 lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/myarm.yaml \
--robot-overrides '~cameras' \
--display-cameras 0
```


**Teleop with cameras displayed**
Follow [this guide to set up your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). You will then be able to display the cameras on your computer while teleoperating by running the command below. This is useful for preparing your setup before recording your first dataset.
```bash
python3 lerobot/scripts/control_robot.py teleoperate \
--robot-path lerobot/configs/robot/myarm.yaml
```

> **Comment from apockill (author):** I might need some help here. I found that `cv2.imshow` froze the system, so I had to comment it out. That's definitely not caused by anything in this PR, but I'd like to fix it so others don't run into the same issue 😁

## What Next?

Now your robot is compatible with LeRobot! Check out the other tutorials to learn how to visualize a dataset, train a policy, and execute a trained policy on the robot.

You can start by exploring the other commands `control_robot.py` supports via the `--help` menu, such as `record`!
115 changes: 115 additions & 0 deletions lerobot/common/robot_devices/motors/myarm.py
@@ -0,0 +1,115 @@
from abc import ABC, abstractmethod
from enum import Enum
from typing import Any, Literal

import acton_ai
import numpy as np
import numpy.typing as npt


class TorqueMode(Enum):
    ENABLED = 1
    DISABLED = 0


class MyArmBaseClass(ABC):
    _handle: acton_ai.HelpfulMyArmC | acton_ai.HelpfulMyArmM

    def __init__(
        self,
        port: str,
        motors: dict[str, tuple[int, str]],
        extra_model_control_table: dict[str, list[tuple]] | None = None,
        extra_model_resolution: dict[str, int] | None = None,
        mock: bool = False,
    ):
        self.port = port
        self.motors = motors
        self.mock = mock

    @property
    def motor_names(self) -> list[str]:
        return list(self.motors.keys())

    @abstractmethod
    def connect(self) -> None:
        pass

    @abstractmethod
    def disconnect(self) -> None:
        pass

    def set_up_presets(self) -> None:
        """Optional. Override this method to do further set up on a device."""
        return

    def set_calibration(self, _: Any) -> None:
        """Calibration is not implemented in LeRobot. Use the MyArm interface."""
        return


class MyArmLeader(MyArmBaseClass):
    _handle: acton_ai.HelpfulMyArmC
    """Instantiated in `connect`"""

    def connect(self) -> None:
        self._handle = acton_ai.find_myarm_controller()

    def disconnect(self) -> None:
        """There is no need to disconnect from the MyArm C"""

    def read(self, cmd: Literal["Present_Position"]) -> npt.NDArray[np.float32]:
        """This should mirror `write` in the order of the data"""
        match cmd:
            case "Present_Position":
                joint_angles = self._handle.get_joint_angles_in_mover_space()
                return np.array(joint_angles, dtype=np.float32)
            case _:
                raise ValueError(f"Unsupported {cmd=}")

    def write(self, cmd: Literal["Torque_Enable"], data: npt.NDArray[np.float32]) -> None:
        """No-op: the leader arm does not accept torque commands"""
        match cmd:
            case "Torque_Enable":
                # LeRobot requests torque enable on all arms; it is a no-op for the leader
                pass
            case _:
                raise ValueError(f"Unexpected write to leader arm {cmd=}")


class MyArmFollower(MyArmBaseClass):
    _handle: acton_ai.HelpfulMyArmM
    """Instantiated in `connect`"""

    def connect(self) -> None:
        self._handle = acton_ai.find_myarm_motor()

    def disconnect(self) -> None:
        self._handle.set_robot_power_off()

    def read(self, cmd: Literal["Present_Position"]) -> npt.NDArray[np.float32]:
        """This should mirror `write` in the order of the data"""
        match cmd:
            case "Present_Position":
                joint_angles = self._handle.get_joints_angle()
                return np.array(joint_angles, dtype=np.float32)
            case _:
                raise ValueError(f"Unsupported {cmd=}")

    def write(
        self, cmd: Literal["Goal_Position", "Torque_Enable"], data: npt.NDArray[np.float32] | int
    ) -> None:
        """This should mirror `read` in the order of the data"""
        match cmd:
            case "Goal_Position":
                self._handle.set_joints_from_controller_angles(data.tolist(), speed=20)
            case "Torque_Enable":
                if data == TorqueMode.ENABLED.value:
                    self._handle.bring_up_motors()
                    self._handle.prompt_user_to_bring_motors_into_bounds()
                elif data == TorqueMode.DISABLED.value:
                    self._handle.set_servos_enabled(False)
            case _:
                raise ValueError(f"Unsupported {cmd=}")
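The `read` dispatch above can be exercised without hardware by substituting a stub for the `acton_ai` handle. A minimal sketch (the `FakeMoverHandle` class and its joint values are hypothetical), which also illustrates the `np.float32` requirement mentioned in commit 3321d3a:

```python
import numpy as np


class FakeMoverHandle:
    """Hypothetical stand-in for acton_ai.HelpfulMyArmM, for offline testing."""

    def get_joints_angle(self) -> list[float]:
        return [0.0, 10.0, -5.0, 0.0, 0.0, 0.0, 45.0]


def read_present_position(handle) -> np.ndarray:
    # Mirrors MyArmFollower.read("Present_Position"): the policy expects float32
    return np.array(handle.get_joints_angle(), dtype=np.float32)


angles = read_present_position(FakeMoverHandle())
print(angles.dtype, angles.shape)  # float32 (7,)
```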
18 changes: 17 additions & 1 deletion lerobot/common/robot_devices/robots/manipulator.py
@@ -81,7 +81,7 @@ def __setattr__(self, prop: str, val):
         super().__setattr__(prop, val)
 
     def __post_init__(self):
-        if self.robot_type not in ["koch", "koch_bimanual", "aloha", "so100", "moss"]:
+        if self.robot_type not in ["koch", "koch_bimanual", "aloha", "so100", "moss", "myarm"]:
             raise ValueError(f"Provided robot type ({self.robot_type}) is not supported.")


@@ -304,6 +304,8 @@ def connect(self):
             from lerobot.common.robot_devices.motors.dynamixel import TorqueMode
         elif self.robot_type in ["so100", "moss"]:
             from lerobot.common.robot_devices.motors.feetech import TorqueMode
+        elif self.robot_type == "myarm":
+            from lerobot.common.robot_devices.motors.myarm import TorqueMode
 
         # We assume that at connection time, arms are in a rest position, and torque can
         # be safely disabled to run calibration and/or set robot preset configurations.
Expand All @@ -321,6 +323,8 @@ def connect(self):
self.set_aloha_robot_preset()
elif self.robot_type in ["so100", "moss"]:
self.set_so100_robot_preset()
elif self.robot_type == "myarm":
self.set_myarm_robot_preset()

# Enable torque on all motors of the follower arms
for name in self.follower_arms:
@@ -378,6 +382,9 @@ def load_or_run_calibration_(name, arm, arm_type):
                 )
 
                 calibration = run_arm_manual_calibration(arm, self.robot_type, name, arm_type)
+            elif self.robot_type == "myarm":
+                # MyArm calibration should be done using the MyArm interface
+                calibration = {}
 
             print(f"Calibration is done! Saving calibration file '{arm_calib_path}'")
             arm_calib_path.parent.mkdir(parents=True, exist_ok=True)
@@ -501,6 +508,15 @@ def set_so100_robot_preset(self):
             self.follower_arms[name].write("Maximum_Acceleration", 254)
             self.follower_arms[name].write("Acceleration", 254)
 
+    def set_myarm_robot_preset(self):
+        for name, robot in self.follower_arms.items():
+            logging.info(f"Setting up {name} follower arm.")
+            robot.set_up_presets()
+
+        for name, robot in self.leader_arms.items():
+            logging.info(f"Setting up {name} leader arm.")
+            robot.set_up_presets()
+
     def teleop_step(
         self, record_data=False
     ) -> None | tuple[dict[str, torch.Tensor], dict[str, torch.Tensor]]:
10 changes: 10 additions & 0 deletions lerobot/configs/env/myarm_real.yaml
@@ -0,0 +1,10 @@
# @package _global_

fps: 30

env:
  name: myarm
  task: null
  state_dim: 7
  action_dim: 7
  fps: ${fps}
103 changes: 103 additions & 0 deletions lerobot/configs/policy/act_myarm_real.yaml
@@ -0,0 +1,103 @@
# @package _global_

# Use 'act_myarm_real.yaml' to train real-world datasets collected on Elephant Robotics
# MyArm robots.

# Compared to `act.yaml`, it uses 2 cameras (top, wrist) instead of 1 camera (top).
# Also, `training.eval_freq` is set to -1. This option controls how often checkpoints are evaluated
# during training; setting it to -1 deactivates evaluation, because real-world evaluation is done
# through our `control_robot.py` script. Look at the documentation in the header of `control_robot.py`
# for more information on how to collect data, train, and evaluate a policy.
#
# Example of usage for training:
# ```bash
# python lerobot/scripts/train.py \
# policy=act_myarm_real \
# env=myarm_real
# ```

seed: 1000
dataset_repo_id: ???

override_dataset_stats:
  observation.images.wrist:
    # stats from imagenet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)
  observation.images.top:
    # stats from imagenet, since we use a pretrained vision model
    mean: [[[0.485]], [[0.456]], [[0.406]]]  # (c,1,1)
    std: [[[0.229]], [[0.224]], [[0.225]]]  # (c,1,1)

training:
  offline_steps: 80000
  online_steps: 0
  eval_freq: -1
  save_freq: 10000
  log_freq: 100
  save_checkpoint: true

  batch_size: 8
  lr: 1e-5
  lr_backbone: 1e-5
  weight_decay: 1e-4
  grad_clip_norm: 10
  online_steps_between_rollouts: 1

  delta_timestamps:
    action: "[i / ${fps} for i in range(${policy.chunk_size})]"

eval:
  n_episodes: 50
  batch_size: 50

# See `configuration_act.py` for more details.
policy:
  name: act

  # Input / output structure.
  n_obs_steps: 1
  chunk_size: 100
  n_action_steps: 100

  input_shapes:
    observation.images.wrist: [3, 480, 640]
    observation.images.top: [3, 480, 640]
    observation.state: ["${env.state_dim}"]
  output_shapes:
    action: ["${env.action_dim}"]

  # Normalization / Unnormalization
  input_normalization_modes:
    observation.images.wrist: mean_std
    observation.images.top: mean_std
    observation.state: mean_std
  output_normalization_modes:
    action: mean_std

  # Architecture.
  # Vision backbone.
  vision_backbone: resnet18
  pretrained_backbone_weights: ResNet18_Weights.IMAGENET1K_V1
  replace_final_stride_with_dilation: false
  # Transformer layers.
  pre_norm: false
  dim_model: 512
  n_heads: 8
  dim_feedforward: 3200
  feedforward_activation: relu
  n_encoder_layers: 4
  # Note: Although the original ACT implementation has 7 for `n_decoder_layers`, there is a bug in the code
  # that means only the first layer is used. Here we match the original implementation by setting this to 1.
  # See this issue https://github.com/tonyzhaozh/act/issues/25#issue-2258740521.
  n_decoder_layers: 1
  # VAE.
  use_vae: true
  latent_dim: 32
  n_vae_encoder_layers: 4

  # Inference.
  temporal_ensemble_coeff: null

  # Training and loss computation.
  dropout: 0.1
  kl_weight: 10.0