[Project page] • [Paper] • [Model] • [Data]
Prabin Kumar Rath1,
Nakul Gopalan1
1Arizona State University
- 🦾 Real-world rollouts
- 🛠️ Installation
- 🛝 Try it out
- 🩹 Add a new manipulator
- 🐢 ROS package
- 🏷️ License
- 🙏 Acknowledgement
- 📝 Citation
All rollouts shown below use XMoP with a fixed set of frozen policy parameters, trained entirely on synthetic (robots and environments) planning demonstration data.
The system has been tested on Ubuntu 22.04 with an Intel i9 12th Gen CPU, 64GB RAM, and an NVIDIA RTX 3090 GPU. Clone the source from GitHub using the command below.
git clone https://github.com/prabinrath/xmop.git -b main
Install Docker
Install Docker from this tutorial.
Install nvidia-container-toolkit
Install nvidia-container-toolkit from this tutorial. It is recommended to reboot your system after installing these packages.
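Before pulling the XMoP image, you can sanity-check that containers can see the GPU. A common check (the base image below is just an example) is:
sudo docker run --rm --gpus all ubuntu nvidia-smi
If this prints your GPU table, the toolkit is configured correctly.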
Download image
The docker image comes pre-configured with all required dependencies for data generation, training, inference, and benchmarking.
cd <xmop-root-directory>
bash xmop_dev_docker.sh
If you get a `Permission denied` error, use `sudo` before `bash`.
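For example:
sudo bash xmop_dev_docker.sh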
Exit container
ctrl+d
System dependencies
sudo apt install -y libgl1-mesa-glx libgl1-mesa-dri \
liborocos-kdl-dev libkdl-parser-dev liburdfdom-dev libnlopt-dev libnlopt-cxx-dev \
git wget
Setup conda env
conda create -n xmop_dev python=3.10.13
conda activate xmop_dev
cd <xmop-root-directory>
bash setup_xmop_conda_dev_env.sh
The conda env needs to be deactivated and activated again to set the `PYTHONPATH` for xmop.
conda deactivate
conda activate xmop_dev
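You can confirm that the path was set after re-activation, for example:
echo $PYTHONPATH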
OMPL (optional)
OMPL is required for data generation and the baseline experiments. Install OMPL with Python bindings from here. OMPL can be challenging and tricky to configure manually; to simplify the process, we recommend using our docker container, which comes pre-configured with the required dependencies.
Download datasets and benchmark assets
cd <xmop-root-directory>
bash download_resources.sh all
For each of the following demos, you can change the value of the `URDF_PATH` variable to run on different robots.
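For example, assuming `URDF_PATH` is a constant defined near the top of each example script, switching robots could look like the line below (the file name is only illustrative; use any of the provided `*_sample.urdf` descriptions under `urdf/`):
URDF_PATH = 'urdf/sawyer_sample.urdf'  # hypothetical path; pick any provided *_sample.urdf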
Run XMoP planning demo
python examples/multistep_collisionfree_rollout.py
Run XMoP-S reaching demo
python examples/singlestep_reaching_rollout.py
Run XCoD collision detection demo
python examples/real_robot_collision_detection.py
Run whole-body pose reconstruction with XCoD out-of-distribution collision generalization. This example shows frames attached to each link, which are the whole-body link poses for our control policy, while the colored pointcloud shows the output of XCoD collision detection.
python examples/xmop_reconstruction_xcod_ood.py
Run XCoD-OMPL hybrid planning demo
python examples/ompl_xcod_hybrid_planning.py
Data generation can be run on separate systems for consecutive fragments. The (`<start-idx>`, `<end-idx>`) pairs refer to MpiNets dataset indices. For our data generation, we ran 32 consecutive fragments independently, each with 100,000 problems; for example, (0, 100k), (100k, 200k), ..., (3m, 3.27m).
Generate XMoP planning demonstration dataset
python data_gen/data_gen_planning.py --start_idx <start-idx> --end_idx <end-idx> --num_proc <number-of-cpus-to-use>
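For instance, the first two fragments of the 100,000-problem scheme described above could be generated on separate machines (the `--num_proc` value is only an example):
python data_gen/data_gen_planning.py --start_idx 0 --end_idx 100000 --num_proc 16
python data_gen/data_gen_planning.py --start_idx 100000 --end_idx 200000 --num_proc 16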
To merge the dataset fragments into a cohesive dataset, uncomment lines 26-28 in the script below and run it
python data_gen/merge_and_visualize_traj_dataset.py
Some problems might still remain unsolved, so retry solving them:
python data_gen/data_gen_retry_unsolved_planning.py --start_idx <start-idx> --end_idx <end-idx> --num_proc <number-of-cpus-to-use>
Once the planning demonstrations are generated, generate the collision detection dataset
python data_gen/data_gen_collision.py --start_idx <start-idx> --end_idx <end-idx> --num_proc <number-of-cpus-to-use>
Use the `-h` flag to see the available parameters.
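For example:
python data_gen/data_gen_collision.py -h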
Train XMoP diffusion policy
python training/train_xmop.py
Train XMoP-S reaching policy
python training/train_xmop_s.py
Train XCoD collision model
python training/train_xcod.py
XMoP is a zero-shot policy that generalizes to unseen manipulators within a distribution. While setups for 7 commercial robots are provided, you can add new manipulators by following these steps:
- Modify the new robot's URDF
  - Add `base_link` and `gripper_base_target` dummy frames to the URDF; see the existing robot descriptions in the `urdf/` folder for reference (a hypothetical sketch is shown after this list).
  - Save the modified file with a name ending in `_sample.urdf`.
- Update the `RealRobotPointSampler` class
  - In `common/robot_point_sampler.py`, add a name keyword for the new manipulator to the constructor of the `RealRobotPointSampler` class.
  - Examples: `sawyer` for the Sawyer robot, `kinova6` for the 6-DoF Kinova robot, and so on.
- Configure link groups by modifying the `config/robot_point_sampler.yaml` file (a hypothetical entry is sketched after this list). Each robot requires four fields:
  - semantic_map: Specify a numerical ID for each URDF link. XCoD treats links with the same ID as a single link.
  - ee_links: List the IDs corresponding to end-effector links.
  - home_config: Define the home joint configuration of the robot.
  - pose_skip_links: List the link names that should not be considered for pose-tokens. The pre-trained XMoP policy currently allows at most 8 pose-tokens, supporting 6-DoF and 7-DoF manipulators; hence, this list needs to be specified so as to ignore kinematic chain branches and redundant links.
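As referenced in the first step, below is a minimal hypothetical sketch of the two dummy frames added to a URDF. The parent link names (`robot_root_link`, `flange_link`) and the identity transforms are placeholders; mirror how the provided `*_sample.urdf` files attach these frames on your robot.

<link name="base_link"/>
<joint name="base_link_fixed_joint" type="fixed">
  <parent link="base_link"/>
  <child link="robot_root_link"/>  <!-- placeholder: the robot's original root link -->
  <origin xyz="0 0 0" rpy="0 0 0"/>
</joint>

<link name="gripper_base_target"/>
<joint name="gripper_base_target_fixed_joint" type="fixed">
  <parent link="flange_link"/>     <!-- placeholder: the robot's gripper mounting link -->
  <child link="gripper_base_target"/>
  <origin xyz="0 0 0" rpy="0 0 0"/>
</joint>

Likewise, a hypothetical `config/robot_point_sampler.yaml` entry with the four required fields might look like the sketch below. The robot name keyword, link names, IDs, and joint values are illustrative only; follow the existing entries in that file for the exact layout.

my_robot6:                    # name keyword also registered in RealRobotPointSampler
  semantic_map:               # numerical ID per URDF link; links sharing an ID count as one link
    base_link: 0
    shoulder_link: 1
    upper_arm_link: 2
    forearm_link: 3
    wrist_link: 4
    gripper_base_target: 5
  ee_links: [5]               # IDs of end-effector links
  home_config: [0.0, -1.57, 1.57, 0.0, 1.57, 0.0]   # home joint configuration (one value per DoF)
  pose_skip_links:            # links excluded from pose-tokens (at most 8 pose-tokens supported)
    - finger_left_link
    - finger_right_link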
By following these steps, you can add new manipulators to be controlled with the pre-trained XMoP policy, expanding its capabilities to a wider range of robots. If you successfully add a new robot, please consider raising a PR!
Note: The pre-trained XMoP policy does not generalize to out-of-distribution robots whose scale or morphology differs greatly from the synthetic embodiment distribution it was trained on. For such novel classes of robots, please follow our data generation scripts to design in-distribution synthetic robots and train XMoP from scratch.
The ROS package for real-world deployment of XMoP, along with a usage tutorial, can be found here.
This repository is released under the MIT license. See LICENSE for additional details.
MpiNets • Diffusion Policy • Diffusion Transformer
If you find this codebase useful in your research, please cite the XMoP paper:
@article{rath2024xmop,
title={XMoP: Whole-Body Control Policy for Zero-shot Cross-Embodiment Neural Motion Planning},
author={Prabin Kumar Rath and Nakul Gopalan},
year={2024},
eprint={2409.15585},
archivePrefix={arXiv},
primaryClass={cs.RO},
url={https://arxiv.org/abs/2409.15585},
}