7.11.1. place.PlaceObjectServer
- class place.PlaceObjectServer
Bases: object
Server for placing an object on a plane.
The server assumes that the object can be placed on the same surface it was standing on when it was grasped (i.e. its bottom surface). The server uses the Toyota placement area detector service to detect valid placement areas for the object. The service takes the object dimensions and the object rotation after placement into account to find a suitable placement area. The server also executes the placement action by planning a path which leads to the object being placed ‘downwards’ onto the table.
- tf2_wrapper
Convenience wrapper for tf2_ros
- Type:
TF2Wrapper
- moveit
Convenience wrapper for MoveIt
- Type:
MoveitWrapper
- hsr_wrapper
Convenience wrapper for the Toyota HSR
- Type:
HSR_wrapper
- server
Action server for the place_object action
- Type:
actionlib.SimpleActionServer
- bb_vis
Visualizer for bounding boxes. Used to visualize an (enlarged) bounding box around the target placement area and an estimation of the object rotation after placement
- Type:
RvizVisualizer
- Parameters:
placement_area_bb (grasping_pipeline_msgs/BoundingBox3DStamped) – The bounding box of the target placement area/plane. If None or an empty message, the server will choose the closest plane instead
table_plane_equations (list of object_detector_msgs/Plane) – The plane equations of the detected tables. The equations are used to calculate the plane normals, which are then used to define ‘upwards/downwards’ for the object placement.
table_bbs (vision_msgs/BoundingBox3DArray) – The bounding boxes of the detected tables. Used to find the actual bounding box of the target placement area/plane
placement_surface_to_wrist (geometry_msgs/Transform) – Transformation from the object placement surface to the wrist of the robot. This is generally the center of the object plane which touched the table plane, but can also be the transformation to another surface plane of the object (which would lead to the object being placed on that plane). This is needed to calculate the correct object dimensions and orientation for the placement area detection and to calculate the correct placement point for the robot.
- Returns:
Empty. The server can be set to succeeded or aborted. It is set to succeeded if the object was successfully placed and to aborted if the object could not be placed on any of the detected placement areas.
- Return type:
grasping_pipeline_msgs/PlaceActionResult
- __init__()
Initialize the PlaceObjectServer class.
Creates a TF2Wrapper, MoveitWrapper, HSR_wrapper, and an action server for the place_object action.
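A minimal client-side sketch of triggering the server (assumptions: the action server is registered under the name place_object, and grasping_pipeline_msgs provides the autogenerated PlaceAction/PlaceGoal types whose goal fields match the parameters listed above):

```python
import rospy
import actionlib
from actionlib_msgs.msg import GoalStatus
from grasping_pipeline_msgs.msg import PlaceAction, PlaceGoal  # assumed type names

rospy.init_node('place_object_client')
client = actionlib.SimpleActionClient('place_object', PlaceAction)
client.wait_for_server()

goal = PlaceGoal()
# placement_area_bb, table_plane_equations, table_bbs and placement_surface_to_wrist
# are filled by the upstream pipeline; leaving placement_area_bb empty makes the
# server fall back to the closest detected plane.
client.send_goal(goal)
client.wait_for_result()
succeeded = client.get_state() == GoalStatus.SUCCEEDED  # SUCCEEDED if placed, ABORTED otherwise
```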
Methods
__init__() – Initialize the PlaceObjectServer class.
add_marker(pose_goal[, id, r, g, b]) – Add a marker to the RViz visualization.
call_placement_area_detector(obj_bb_aligned, ...) – Call the Toyota placement area detector service to detect valid placement areas for the object.
detect_placement_areas(wrist_to_table, ...) – Detect valid placement areas for the object.
execute(goal) – Execute the place_object action.
find_intersecting_table_plane(table_planes, bb) – Uses the Separating Axis Theorem to calculate if the bb is intersecting with a table plane.
get_object_bb_orientation_after_placement(base_pose, ...) – Calculates the orientation of the object bounding box that it will have after placement.
get_surface_to_wrist_transform(object_placement_surface_pose) – Get the transformation from the object placement surface to the wrist.
sort_closest_plane(table_plane_equations, ...) – Sort the table planes and table bbs by distance to the robot base.
sort_placement_areas_by_distance(...[, reverse]) – Sort the placement areas by distance to the passed pose.
transform_plane_normal(table_equation, ...) – Transform the plane normal to the target frame and normalize it.
- add_marker(pose_goal, id=0, r=0, g=1, b=0)
Add a marker to the RViz visualization. The topic is grasping_pipeline/placement_marker.
The marker is an arrow that points in the direction of the z-axis of the pose_goal.
- Parameters:
pose_goal (geometry_msgs/PoseStamped) – The pose to visualize
id (int) – The id of the marker. Creating a new marker with the same id will overwrite the old marker.
r (float) – The red color value of the marker
g (float) – The green color value of the marker
b (float) – The blue color value of the marker
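A sketch of publishing such an arrow marker on the same topic (assumptions: standard visualization_msgs usage and the scale values; note that RViz pose-based ARROW markers point along the marker's +x axis, so the server presumably rotates the pose internally so the arrow shows the z-axis of pose_goal):

```python
import rospy
from visualization_msgs.msg import Marker

pub = rospy.Publisher('grasping_pipeline/placement_marker', Marker, queue_size=1, latch=True)

def publish_placement_marker(pose_goal, id=0, r=0.0, g=1.0, b=0.0):
    marker = Marker()
    marker.header = pose_goal.header
    marker.ns = 'placement'
    marker.id = id                # reusing an id overwrites the previous marker
    marker.type = Marker.ARROW
    marker.action = Marker.ADD
    marker.pose = pose_goal.pose  # presumably rotated by the server so the +x arrow
                                  # visualizes pose_goal's z-axis
    marker.scale.x, marker.scale.y, marker.scale.z = 0.15, 0.02, 0.02  # assumed sizes
    marker.color.r, marker.color.g, marker.color.b, marker.color.a = r, g, b, 1.0
    pub.publish(marker)
```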
- call_placement_area_detector(obj_bb_aligned, placement_area_det_frame, target_point, box_filter_range)
Call the Toyota placement area detector service to detect valid placement areas for the object.
The placement area detector looks for suitable placement areas on the table where the object can be placed. The service takes the object dimensions and object rotation after placement into account for finding collision-free placement areas. The service looks for areas inside the box_filter_range around the target_point on the table.
- Parameters:
obj_bb_aligned (vision_msgs/BoundingBox3D) – The bounding box of the object aligned to the base frame
placement_area_det_frame (str) – The frame in which the placement area detection should be done
target_point (geometry_msgs/Point) – A rough estimate of a point on the table (ideally its center point)
box_filter_range (geometry_msgs/Vector3) – Filters the point cloud in x, y, z direction around the target point (e.g. if x_range = 0.2, everything outside target_point ± 0.1 along x is filtered out)
- Returns:
The detected placement areas. The service is called repeatedly until at least one placement area is found, so this call can loop indefinitely if no valid area exists.
- Return type:
list of tmc_placement_area_detector/PlacementArea
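A rough sketch of this retry behaviour (the service name, type, and request/response field names below are assumptions for illustration; consult the tmc_placement_area_detector package for the actual service definition):

```python
import rospy
from tmc_placement_area_detector.srv import DetectPlacementArea  # assumed service type

def detect_until_found(obj_bb_aligned, placement_area_det_frame, target_point, box_filter_range):
    rospy.wait_for_service('detect_placement_area')  # assumed service name
    detect = rospy.ServiceProxy('detect_placement_area', DetectPlacementArea)
    areas = []
    while not areas and not rospy.is_shutdown():
        # request field names are assumptions; the real request additionally encodes
        # the object dimensions and rotation taken from obj_bb_aligned
        res = detect(frame_id=placement_area_det_frame,
                     target_point=target_point,
                     box_filter_range=box_filter_range)
        areas = list(res.placement_area)  # response field name is an assumption
        if not areas:
            rospy.logwarn('No placement area found, retrying ...')
    return areas
```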
- detect_placement_areas(wrist_to_table, base_pose, aligned_table_bb_base_frame, placement_area_det_frame, placement_area_bb)
Detect valid placement areas for the object.
Uses the Toyota placement area detector service to detect the placement areas. It takes the object dimension and object rotation after placement into account to find a suitable placement area.
- Parameters:
wrist_to_table (geometry_msgs/Transform) – Transformation from the wrist to the table
base_pose (geometry_msgs/Pose) – The current pose of the robot base
aligned_table_bb_base_frame (vision_msgs/BoundingBox3D) – The bounding box of the table aligned to the base frame (This means that the BB x-axis will be the one that aligns the best with the x-axis of the base frame, and so on for the other axes)
placement_area_det_frame (str) – The frame in which the placement area detection should be done
placement_area_bb (vision_msgs/BoundingBox3D) – The bounding box of the target placement area/plane
- Returns:
The detected placement areas
- Return type:
list of tmc_placement_area_detector/PlacementArea
- execute(goal)
Execute the place_object action.
In the first step, the table plane that intersects with the target placement area is found. The target placement area is the bounding box of the table read from the config file; it can be a couple of centimeters off from the currently detected tables because the map is never identical between runs, so the server looks for the detected table that best matches the one expected from the config file.
Then the placement areas are detected. These are the areas on the table where the object can be placed. They are detected by the Toyota placement area service, which takes the object dimensions and the object rotation after placement into account to find a suitable placement area.
The placement areas are sorted by distance to the robot base, farthest first, so that the object is placed as far into the shelf as possible.
The bounding boxes are all aligned to the base frame (i.e. their coordinate frames are rotated so that each axis lines up with the corresponding axis of the base frame). This makes it easier to calculate which parts of the object and table are in front of, to the left of, or to the right of the robot.
Afterwards the robot tries the placement areas one by one until it succeeds or all of them have been tried. For each attempt it plans a path based on two waypoints: the first waypoint is the actual placement point and the second waypoint is directly above it. This leads to a motion where the object is placed “downwards” onto the table (a sketch of this two-waypoint approach follows the parameter list).
- Parameters:
goal (grasping_pipeline_msgs/PlaceActionGoal) – The goal of the place_object action. Contains the bounding box of the target placement area, bounding boxes and plane equations of detected tables, and the transformation from the object placement surface to the wrist (transformation from the center of the object plane which touched the table plane, i.e. the ‘bottom’ surface of the object).
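One way the downward placement motion could be realized, sketched with MoveIt Cartesian path planning (assumptions: a MoveGroupCommander named ‘whole_body’, a 5 cm approach offset, and the acceptance threshold; the server's actual waypoint ordering and planner calls may differ):

```python
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('placement_sketch')
group = moveit_commander.MoveGroupCommander('whole_body')  # assumed group name

def plan_downward_placement(place_pose, approach_offset=0.05):
    """place_pose: geometry_msgs/Pose of the placement point in the planning frame."""
    above_pose = copy.deepcopy(place_pose)
    above_pose.position.z += approach_offset      # waypoint directly above the placement point
    waypoints = [above_pose, place_pose]          # approach from above, then descend
    path, fraction = group.compute_cartesian_path(waypoints, eef_step=0.01, jump_threshold=0.0)
    return path if fraction > 0.95 else None      # only accept an (almost) complete path
```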
- find_intersecting_table_plane(table_planes, bb)
Uses the Separating Axis Theorem to calculate if the bb is intersecting with a table plane. Returns the index of one of the table planes that the bounding box is intersecting with.
- Parameters:
table_planes (list of object_detector_msgs/Plane) – The plane equations of the table planes which are checked for intersection with the bb
bb (o3d.geometry.OrientedBoundingBox) – The bounding box which is checked for intersection with the table planes
- Returns:
The index of the table plane that the bounding box is intersecting with. None if no intersection is found
- Return type:
int or None
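A minimal sketch of the per-plane test (assumptions: the Plane message exposes the coefficients a, b, c, d of ax + by + cz + d = 0, and the open3d OrientedBoundingBox provides center, R, and extent):

```python
import numpy as np

def bb_intersects_plane(plane, bb):
    n = np.array([plane.a, plane.b, plane.c], dtype=float)  # coefficient names are assumptions
    n_len = np.linalg.norm(n)
    # signed distance of the box center to the plane
    center_dist = (np.dot(n, np.asarray(bb.center)) + plane.d) / n_len
    # projection radius of the box onto the plane normal: half extents weighted by
    # how strongly each box axis (column of bb.R) aligns with the normal
    radius = np.sum(np.abs((n / n_len) @ np.asarray(bb.R)) * np.asarray(bb.extent) / 2.0)
    return abs(center_dist) <= radius
```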
- get_object_bb_orientation_after_placement(base_pose, attached_object_bb, wrist_to_table, aligned_table_bb_base_frame)
Calculates the orientation that the object bounding box will have after placement. This is needed to pass the correct object dimensions and orientation to the placement area detector service so it can find collision-free placement areas.
- Parameters:
base_pose (geometry_msgs/Pose) – The current pose of the robot base
attached_object_bb (vision_msgs/BoundingBox3D) – The bounding box of the object that is attached to the robot (aka the object to be placed)
wrist_to_table (geometry_msgs/Transform) – Transformation from the wrist to the table
aligned_table_bb_base_frame (vision_msgs/BoundingBox3D) – The bounding box of the table aligned to the base frame
- Returns:
The bounding box of the object aligned to the map frame (which is the frame that the placement area detector service uses for the detection). Only the orientation of the bounding box should be used; the translation cannot be estimated because it is not known on which part of the table the object will be placed. Only the object's rotation after placement is known.
- Return type:
vision_msgs/BoundingBox3D
- get_surface_to_wrist_transform(object_placement_surface_pose)
Get the transformation from the object placement surface to the wrist
- Parameters:
object_placement_surface_pose (geometry_msgs/Pose) – The pose of the object placement surface relative to the wrist coordinate frame (i.e. the ‘bottom’ surface of the object)
- Returns:
The transformation from the object placement surface to the wrist
- Return type:
np.array of shape (4, 4)
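A sketch of the underlying pose-to-matrix conversion (assumption: whether the resulting matrix or its inverse is returned depends on the server's frame convention):

```python
import numpy as np
from tf.transformations import quaternion_matrix

def pose_to_matrix(pose):
    """Convert a geometry_msgs/Pose into a homogeneous 4x4 transform."""
    q = pose.orientation
    T = quaternion_matrix([q.x, q.y, q.z, q.w])   # 4x4 with rotation block filled
    T[:3, 3] = [pose.position.x, pose.position.y, pose.position.z]
    return T

# wrist_T_surface = pose_to_matrix(object_placement_surface_pose) maps surface-frame
# coordinates into the wrist frame; np.linalg.inv(wrist_T_surface) is the opposite direction.
```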
- sort_closest_plane(table_plane_equations, table_bbs, base_pose_map)
Sort the table planes and table bbs by distance to the robot base. The closest table plane is at the first index.
- Parameters:
table_plane_equations (List[object_detector_msgs.Plane]) – The plane equations of the detected tables
table_bbs (vision_msgs/BoundingBox3DArray) – The bounding boxes of the detected tables
base_pose_map (geometry_msgs/PoseStamped) – The current pose of the robot base (base_link) in the map frame
- Returns:
List[object_detector_msgs.Plane] – The sorted table planes
vision_msgs/BoundingBox3DArray – The sorted table bounding boxes
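A sketch of the point-to-plane distance the sorting is based on (assumption: the Plane message exposes the coefficients a, b, c, d of ax + by + cz + d = 0):

```python
import numpy as np

def plane_distance(plane, base_position):
    n = np.array([plane.a, plane.b, plane.c], dtype=float)
    p = np.array([base_position.x, base_position.y, base_position.z])
    return abs(np.dot(n, p) + plane.d) / np.linalg.norm(n)

# planes and their bounding boxes can then be sorted together, e.g.:
# order = sorted(range(len(planes)), key=lambda i: plane_distance(planes[i], base_pose.pose.position))
```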
- sort_placement_areas_by_distance(placement_areas, goal_pose, reverse=True)
Sort the placement areas by distance to the passed pose.
- Parameters:
placement_areas (list of vision_msgs/BoundingBox3D) – The placement areas which should be sorted
goal_pose (geometry_msgs/PoseStamped) – The goal pose. The placement areas are sorted by distance to this pose.
reverse (bool) – If True, the placement areas are sorted in descending order (farthest first). If False, the placement areas are sorted in ascending order (closest first). Default is True.
- Returns:
The sorted placement areas
- Return type:
list of vision_msgs/BoundingBox3D
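A minimal sketch of the sorting, assuming each placement area is a vision_msgs/BoundingBox3D as documented above:

```python
import numpy as np

def sort_by_distance(placement_areas, goal_pose, reverse=True):
    gp = goal_pose.pose.position
    def dist(area):
        p = area.center.position  # vision_msgs/BoundingBox3D stores its pose in 'center'
        return np.linalg.norm([p.x - gp.x, p.y - gp.y, p.z - gp.z])
    return sorted(placement_areas, key=dist, reverse=reverse)  # reverse=True: farthest first
```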
- transform_plane_normal(table_equation, target_frame, stamp)
Transform the plane normal to the target frame and normalize it
- Parameters:
table_equation (object_detector_msgs/Plane) – The plane equation
target_frame (str) – The target frame to transform the plane normal to
stamp (rospy.Time) – The timestamp of the transformation
- Returns:
The normalized plane normal in the target frame
- Return type:
np.array of shape (3,)
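A sketch of the transformation (assumptions: the Plane coefficients a, b, c, an explicit source frame for the plane, and a raw tf2 buffer instead of the server's TF2Wrapper; only the rotation part is applied because a normal is a direction, not a point):

```python
import numpy as np
import rospy
import tf2_ros
from tf.transformations import quaternion_matrix

tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

def transform_plane_normal(table_equation, source_frame, target_frame, stamp):
    n = np.array([table_equation.a, table_equation.b, table_equation.c], dtype=float)
    tf = tf_buffer.lookup_transform(target_frame, source_frame, stamp, rospy.Duration(1.0))
    q = tf.transform.rotation
    R = quaternion_matrix([q.x, q.y, q.z, q.w])[:3, :3]  # rotation only
    n = R @ n
    return n / np.linalg.norm(n)
```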