Shape Matching for Object Manipulation Skill Transfer


Objects with the same function often share a common topology of functional parts such as handles and tool-tips (Tenorth et al., 2013). We propose to interpret shape correspondences as correspondences between the functional parts.

We utilize these correspondences for object manipulation skill transfer.

In many object manipulation scenarios, controllers can be specified through grasp poses and 6-DoF trajectories relative to the functional parts of an object.

With known correspondences of the functional parts, these grasps and motions are transferable to other object instances.

In Ch. 5, we propose an efficient deformable registration method that provides a dense displacement field between object shapes observed in RGB-D images.

From the displacements, local transformations can be estimated between points on the object surfaces. We apply these local transformations to transfer grasps and motion trajectories, which are defined relative to the objects and their functional parts, from one object to another (illustrated in Fig. 7.9).

7.2. Shape Matching for Object Manipulation Skill Transfer

7.2.1. Grasp Transfer

We define a grasp as a 6-DoF end-effector pose $x^{\text{example}}_{\text{grasp}}$ relative to a reference frame of the example object. When a new instance with a different shape is given, we estimate a displacement field between both shapes using deformable registration (Ch. 5). The grasp pose is then transformed onto the new object to a pose $x^{\text{new}}_{\text{grasp}}$ using the displacement field.

For the registration, we assume that the new object instance is segmented from its surroundings, e.g., using a plane segmentation approach (Holz et al., 2011). We represent the RGB-D image segment in an MRSMap. The orientation of the new instance needs to coarsely match that of the example object. As an initialization step for the registration, the MRSMaps are brought into coarse pose alignment by moving their spatial means onto each other. Deformable registration between the MRSMaps then yields a displacement field $v$.
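A minimal sketch of this initialization step, under the simplifying assumption that each MRSMap is reduced to an N × 3 point set; `align_means` is a hypothetical helper, not part of the thesis software.

```python
import numpy as np

def align_means(new_points: np.ndarray, example_points: np.ndarray) -> np.ndarray:
    """Translate the new object's points so that the spatial means
    (centroids) of both point sets coincide."""
    offset = example_points.mean(axis=0) - new_points.mean(axis=0)
    return new_points + offset
```

Deformable registration is then started from this coarse alignment.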

Since the new object is only partially visible, we register the smaller map of the new object onto the multi-view model of the example object, i.e., in the formalism of our deformable registration, the new object is the model and the example object is the scene. The method in Sec. 5.3.2 is the appropriate choice to estimate the local deformation $T_e(p^{\text{example}}_{\text{grasp}})$ from the example object to the new object, where $p^{\text{example}}_{\text{grasp}}$ is the position of the grasp on the example object. The grasp pose on the new object is

$T(x^{\text{new}}_{\text{grasp}}) = T_e(p^{\text{example}}_{\text{grasp}})\, T(x^{\text{example}}_{\text{grasp}})$ . (7.11)
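As a concrete illustration, the following sketch applies Eq. (7.11) with 4×4 homogeneous matrices. It assumes the local deformation $T_e(p^{\text{example}}_{\text{grasp}})$ has already been estimated from the displacement field; the helper name `transfer_pose` and the numpy representation are illustrative, not part of the thesis implementation.

```python
import numpy as np

def transfer_pose(T_local: np.ndarray, T_example: np.ndarray) -> np.ndarray:
    """Eq. (7.11): map a pose from the example object onto the new object.

    T_local   -- 4x4 local deformation T_e(p) estimated at the pose position
    T_example -- 4x4 homogeneous matrix of the pose on the example object
    """
    return T_local @ T_example

# Illustrative usage: a local deformation that is a pure translation shifts
# the grasp pose by the local displacement vector.
T_local = np.eye(4)
T_local[:3, 3] = [0.02, -0.01, 0.0]  # assumed local displacement in meters
T_new_grasp = transfer_pose(T_local, np.eye(4))
```

The same composition realizes Eq. (7.12) below for transferring the tool reference frame.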

7.2.2. Motion Transfer

We express the usage of an object through the motion of a reference frame of the object. If the object is a tool that affects another object, it is often useful to define this reference frame at the tool's end-effector. We transfer the reference frame to a new object through deformable registration and execute the same motion with this frame as for the example object. The reference frame is a pose $x^{\text{example}}_{\text{ref}}$ on the example object. Its counterpart $x^{\text{new}}_{\text{ref}}$ on the new object is found through the local deformation

$T(x^{\text{new}}_{\text{ref}}) = T_e(p^{\text{example}}_{\text{ref}})\, T(x^{\text{example}}_{\text{ref}})$ . (7.12)

The local transformation $T_e(p^{\text{example}}_{\text{ref}})$ is determined at the example position $p^{\text{example}}_{\text{ref}}$ of the reference frame.

The example motion of the reference frame is given as a trajectory $\Theta^{\text{example}}_{\text{ref}} = \left(\theta^{\text{example}}_{\text{ref},0}, \ldots, \theta^{\text{example}}_{\text{ref},T}\right)$, which typically starts at the current pose of the reference frame. If the object is used as a tool on an affected object, the end of the trajectory is constrained by the affected object. The motion can be parametrized in dependence on the pose of the affected object.

We make the trajectory relative to the start pose, i.e.,

$\widehat{\Theta}^{\text{example}}_{\text{ref}} = \left(\widehat{\theta}^{\text{example}}_{\text{ref},0}, \ldots, \widehat{\theta}^{\text{example}}_{\text{ref},T}\right)$ , (7.13)

with $T(\widehat{\theta}^{\text{example}}_{\text{ref},t}) = T(\theta^{\text{example}}_{\text{ref},0})^{-1}\, T(\theta^{\text{example}}_{\text{ref},t})$. The corresponding trajectory for the new object is then $\Theta^{\text{new}}_{\text{ref}} = \left(\theta^{\text{new}}_{\text{ref},t}\right)_{t=0}^{T}$, where

$T(\theta^{\text{new}}_{\text{ref},t}) = T(\theta^{\text{new}}_{\text{ref},0})\, T(\widehat{\theta}^{\text{example}}_{\text{ref},t})$ . (7.14)

The start pose of the reference frame for the new object can be found from the local deformation from the example to the new object,

$T(\theta^{\text{new}}_{\text{ref},0}) = T_e(p^{\text{example}}_{\text{ref}})\, T(\theta^{\text{example}}_{\text{ref},0})$ . (7.15)

We choose the start pose of the trajectory for this transformation because it is close to the object surface, such that the displacement field estimate at the reference pose is well supported by data evidence.
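The following sketch chains Eqs. (7.13)-(7.15) on lists of 4×4 pose matrices; all names are illustrative, and `T_local_ref` stands for the local deformation $T_e(p^{\text{example}}_{\text{ref}})$ obtained from the registration.

```python
import numpy as np

def transfer_trajectory(traj_example: list, T_local_ref: np.ndarray) -> list:
    """Transfer a reference-frame trajectory to the new object.

    traj_example -- list of 4x4 poses T(theta_ref,t) on the example object
    T_local_ref  -- 4x4 local deformation T_e(p_ref) at the reference frame
    """
    T0_inv = np.linalg.inv(traj_example[0])
    # Eq. (7.13): express every pose relative to the start pose.
    rel = [T0_inv @ T for T in traj_example]
    # Eq. (7.15): start pose of the reference frame on the new object.
    T0_new = T_local_ref @ traj_example[0]
    # Eq. (7.14): re-anchor the relative trajectory at the new start pose.
    return [T0_new @ R for R in rel]
```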

If multiple motions of a rigid object are concatenated in a sequence, it is not necessary to perceive the deformation at the start of each motion. For example, if we assume the grasps to remain fixed during all motions, we can initially store the reference frames relative to the grasp poses on the new object and recover the reference frames from the current grasp poses at the beginning of each motion, as sketched below.
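A sketch of this caching scheme, assuming fixed grasps and 4×4 pose matrices; the helper names are hypothetical.

```python
import numpy as np

def store_ref_relative_to_grasp(T_grasp: np.ndarray, T_ref: np.ndarray) -> np.ndarray:
    """Store the reference frame relative to the grasp pose (computed once)."""
    return np.linalg.inv(T_grasp) @ T_ref

def recover_ref(T_grasp_current: np.ndarray, T_ref_rel: np.ndarray) -> np.ndarray:
    """Recover the reference frame from the current grasp pose at motion start."""
    return T_grasp_current @ T_ref_rel
```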

The robot does not directly move the reference frame, but generates object motion with its end-effectors that act on the object through the grasp poses.

To generate the desired reference frame motion, the robot end-effectors that grasp the object thus move on a trajectory $\Theta^{\text{new}}_{\text{grasp}} = \left(\theta^{\text{new}}_{\text{grasp},t}\right)_{t=0}^{T}$ that is constrained relative to the reference frame. We assume rigidity of the object instances, such that the relative pose of the grasp towards the reference frame remains constant, i.e., for all $t$ and $t'$,

$T(\theta^{\text{new}}_{\text{ref},t})^{-1}\, T(\theta^{\text{new}}_{\text{grasp},t}) = T(\theta^{\text{new}}_{\text{ref},t'})^{-1}\, T(\theta^{\text{new}}_{\text{grasp},t'})$ . (7.16)

This allows for writing

$T(\theta^{\text{new}}_{\text{grasp},t}) = T(\theta^{\text{new}}_{\text{ref},0})\, T(\widehat{\theta}^{\text{example}}_{\text{ref},t})\, T(\theta^{\text{new}}_{\text{ref},0})^{-1}\, T(\theta^{\text{new}}_{\text{grasp},0})$ . (7.17)

The start pose of the grasp is also given through the local deformation from the example to the new object,

$T(\theta^{\text{new}}_{\text{grasp},0}) = T_e(p^{\text{example}}_{\text{grasp}})\, T(\theta^{\text{example}}_{\text{grasp},0})$ . (7.18)

Clearly, our approach assumes the object instances themselves to be rigid, and it cannot consider dynamics or complex causalities involved in the execution of a task. Relaxing these restrictions is a potential path for future research.
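As a closing illustration, Eqs. (7.16)-(7.18) translate into the following computation of the end-effector trajectory from the relative reference-frame motion; function and variable names are illustrative, not the thesis implementation.

```python
import numpy as np

def grasp_trajectory(rel_traj_example: list, T_ref0_new: np.ndarray,
                     T_grasp0_new: np.ndarray) -> list:
    """Compute the end-effector trajectory on the new object, Eq. (7.17).

    rel_traj_example -- relative reference-frame poses, Eq. (7.13)
    T_ref0_new       -- start pose of the reference frame, Eq. (7.15)
    T_grasp0_new     -- start pose of the grasp, Eq. (7.18)
    """
    # Rigidity (Eq. (7.16)) makes the grasp-to-reference offset constant.
    offset = np.linalg.inv(T_ref0_new) @ T_grasp0_new
    return [T_ref0_new @ R @ offset for R in rel_traj_example]
```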
