This package combines object recognition, object tracking, point cloud processing, and visual servoing to achieve visual servoing on objects without a prior model. The package can be used on a robotic arm with an RGBD sensor attached to the end effector.
Object Recognition --> Object Tracking --> Point Cloud Extraction --> Object Pose Estimation --> Visual Servoing
Object recognition and object tracking work together to enable reliable, uninterrupted tracking of the object of interest. Given the bounding box of the tracked object in the RGB image, the program extracts the object's point cloud from the structured (organized) point cloud produced by the RGBD sensor, since the cloud is pixel-aligned with the RGB image.
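The crop step above can be sketched as follows. This is an illustrative NumPy example, not code from the package: `extract_object_cloud` is a hypothetical helper, and the organized cloud is modeled as an H x W x 3 array whose pixels line up with the RGB image.

```python
import numpy as np

def extract_object_cloud(organized_cloud, bbox):
    """Crop an organized (H x W x 3) point cloud with an RGB bounding box.

    Because the RGBD sensor's organized cloud is pixel-aligned with the
    RGB image, the tracker's bounding box (x, y, w, h) indexes directly
    into the cloud. (Hypothetical helper for illustration only.)
    """
    x, y, w, h = bbox
    roi = organized_cloud[y:y + h, x:x + w].reshape(-1, 3)
    # Drop invalid returns (NaNs) that RGBD sensors emit where depth is missing.
    return roi[~np.isnan(roi).any(axis=1)]

# Example: a 480x640 organized cloud with a 10x10 patch of missing depth
cloud = np.random.rand(480, 640, 3)
cloud[0:10, 0:10] = np.nan
obj = extract_object_cloud(cloud, (0, 0, 20, 20))  # bbox as (x, y, w, h)
```

The result is an unorganized N x 3 cloud of valid object points, which is what a pose estimator such as ICP expects as input.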
At the beginning of a visual servoing session, a template of the object is captured and cached for object pose estimation; the pose is then estimated against this template using ICP or a particle filter from PCL. The position-based visual servoing (PBVS) node sets a target pose relative to the estimated object pose and servos toward it.
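Setting a target pose relative to the estimated object pose amounts to composing homogeneous transforms. A minimal sketch, assuming poses are expressed as 4x4 matrices and using illustrative frame names (`cam`, `obj`, `goal`) and offsets not taken from the package:

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Object pose in the camera frame, e.g. as estimated by ICP or the
# particle filter (identity rotation here for simplicity).
T_cam_obj = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.5]))

# Desired camera pose expressed in the object frame: hold station
# 0.3 m in front of the object (illustrative offset; tune per task).
T_obj_goal = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, -0.3]))

# Target camera pose in the camera frame; PBVS drives the error
# between the current pose and this target to zero.
T_cam_goal = T_cam_obj @ T_obj_goal
```

As the pose estimate updates each frame, `T_cam_goal` is recomputed, so the servo target follows the object.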
- ROS Packages
- PCL
- OpenCV 3.4.3 with tracking module
- Boost
- VISP
- a package for driving the robot
- Realsense D435 RGBD camera
- Kinova Mico arm
- roslaunch arm_vs calibrate_mico_eye_in_hand.launch
- keep the ArUco tag fixed and move the camera to capture the tag at different orientations
- keep camera translation to a minimum and maximize camera rotation
- check calibration file under ~/.ros/easy_hand_eye/
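Hand-eye calibration results of this kind are typically stored as a translation plus a unit quaternion. As a hedged sketch (the exact file layout may differ from what is assumed here), the quaternion can be converted to a rotation matrix to rebuild the camera-to-end-effector transform:

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    """Convert a unit quaternion (x, y, z, w) to a 3x3 rotation matrix."""
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

# Example: a 90-degree rotation about the z axis
R = quat_to_matrix(0.0, 0.0, 0.7071067811865476, 0.7071067811865476)
```

Combined with the stored translation, this gives the fixed transform between the camera and the end effector that the visual servoing nodes rely on.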
- roslaunch arm_vs pbvs_aruco_tag.launch
- roslaunch arm_vs track.launch
- rosservice call /mico_interaction/switch_controllers "controllers_to_be_turned_on: 'velocity'" (resets the particle filter if the pose estimate does not improve)