CRC Press - Robotics and Automation Handbook - Episode 2, Part 6



22.3.2.1 Coplanar Features

The treatment for coplanar point features is similar to the general case. Assume the equation of the plane in the first camera frame is $[\pi_1^T, \pi_2]X = 0$ with $\pi_1 \in \mathbb{R}^3$ and $\pi_2 \in \mathbb{R}$. By simply appending the $1 \times 2$ block $[\pi_1^T x_1, \pi_2]$ to the end of $M$ in Equation (22.12), the rank condition in Theorem 22.1 still holds. Since $\pi_1 = N$ is the unit normal vector of the plane and $\pi_2 = d$ is the distance from the first camera center to the plane, the rank condition implies $d(\widehat{x}_i R_i x_1) - (N^T x_1)(\widehat{x}_i T_i) = 0$, which is equivalent to the homography between the $i$th and the first views (see Equation (22.8)). As for reconstruction, we can use the four-point algorithm to initialize the estimation of the homography and then perform a similar iteration scheme to obtain motion and structure. The algorithm can be found in [36, 51].
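As a concrete illustration, here is a minimal numpy sketch (our own; the function names and the synthetic check are not from the chapter) that evaluates the residual of the planar constraint above for calibrated image points, using the plane convention $N^T X + d = 0$, which makes the signs in the equation work out:

```python
import numpy as np

def hat(v):
    """Skew-symmetric matrix: hat(v) @ w == np.cross(v, w)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def planar_residual(x1, xi, R_i, T_i, N, d):
    """Residual d * (xi^ R_i x1) - (N^T x1) * (xi^ T_i); it is zero
    (up to noise) when x1 and xi are calibrated images of one point
    on the plane N^T X + d = 0, expressed in the first camera frame."""
    return d * (hat(xi) @ R_i @ x1) - (N @ x1) * (hat(xi) @ T_i)

# Synthetic check (our own): a point on the plane z = 5, with N chosen
# so that N^T X + d = 0 and d = 5 is the camera-to-plane distance.
N, d = np.array([0.0, 0.0, -1.0]), 5.0
X1 = np.array([1.0, 2.0, 5.0])                  # N @ X1 + d == 0
R_i, T_i = np.eye(3), np.array([0.5, 0.0, 0.0])
Xi = R_i @ X1 + T_i
x1, xi = X1 / X1[2], Xi / Xi[2]                 # calibrated image points
print(planar_residual(x1, xi, R_i, T_i, N, d))  # ~ [0, 0, 0]
```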

22.3.3 Further Readings

22.3.3.1 Multilinear Constraints and Factorization Algorithm

There are two other approaches dealing with multiple-view reconstruction. The first approach is to use the so-called multilinear constraints on multiple images of a 3-D point or line. For a small number of views, these constraints can be described in terms of tensorial notations [23, 52, 53]. For example, the constraints for m = 3 can be described using trifocal tensors. For a large number of views (m ≥ 5), the tensor is difficult to describe. Reconstruction then proceeds by first calculating the trifocal tensors and then factorizing the tensors for camera motions [2]. An apparent disadvantage is that it is hard to choose the right "three-view sets" and also difficult to combine the results. Another approach is to apply some factorization scheme to iteratively estimate the structure and motion [23, 40, 57], which is in the same spirit as Algorithm 22.2.

22.3.3.2 Universal Multiple-View Matrix and Rank Conditions

The reconstruction algorithm in this section was only for point features. Algorithms have also been designed for line features [35, 56]. In fact, the multiple-view rank condition approach can be extended to many different types of features, such as lines, planes, mixed lines and points, and even curves. This leads to a set of rank conditions on a universal multiple-view matrix. For details please refer to [38, 40].

22.3.3.3 Dynamical Scenes

The constraint we developed in this section is for a static scene. If the scene is dynamic, i.e., there are moving objects in the scene, a similar type of rank condition can be obtained. This rank condition is obtained by incorporating the dynamics of the objects in 3-D space into their own descriptions and lifting the 3-D moving points into a higher-dimensional space in which they are static. For details please refer to [27, 40].
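As a simple illustration of the lifting idea (our own example, not taken from [27, 40]): a point moving with constant velocity $v \in \mathbb{R}^3$ traces $X(t) = X_0 + t\,v$. Stacking the initial position and velocity,

$$\bar{X} = \begin{bmatrix} X_0 \\ v \end{bmatrix} \in \mathbb{R}^6, \qquad X(t) = \begin{bmatrix} I & tI \end{bmatrix} \bar{X},$$

turns the moving point into a static point $\bar{X}$ in $\mathbb{R}^6$, and the time-varying projection becomes a linear map from this higher-dimensional space, to which rank-style arguments again apply.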

22.3.3.4 Orthographic Projection

Finally, note that the linear algorithm and the rank condition are for the perspective projection model. If the scene is far from the camera, then the image can be modeled using orthographic projection, and the Tomasi-Kanade factorization method can be applied [59]. Similar factorization algorithms for other types of projections and dynamics have also been developed [8, 21, 48, 49].
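The core of the Tomasi-Kanade method [59] is a rank-3 factorization of the measurement matrix; the sketch below (our own minimal version, which omits the metric-upgrade step that resolves the affine ambiguity) shows that step with numpy:

```python
import numpy as np

def tomasi_kanade_factor(W):
    """Rank-3 factorization step of orthographic structure from motion.
    W: 2m x n measurement matrix stacking the m views' image coordinates,
    each row registered to zero mean (origin at the point centroid).
    Returns motion M (2m x 3) and shape S (3 x n), defined only up to an
    invertible 3x3 matrix; the full method [59] removes this ambiguity
    using the orthonormality (metric) constraints on the camera rows."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    sqrt_s = np.sqrt(s[:3])
    M = U[:, :3] * sqrt_s            # scale columns by sqrt singular values
    S = sqrt_s[:, None] * Vt[:3, :]  # scale rows likewise
    return M, S
```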

22.4 Utilizing Prior Knowledge of the Scene - Symmetry

In this section we study how to incorporate scene knowledge into the reconstruction process. In our daily life, especially in man-made environments, all kinds of "regularity" exist. Among objects, regular shapes such as rectangles, squares, diamonds, and circles always attract our attention. Among spatial relationships between objects, orthogonality, parallelism, and similarity are the conspicuous ones. Interestingly, all the above regularities can be described using the notion of symmetry. For instance, a rectangular window has one rotational symmetry and two reflective symmetries; identical windows on the same wall have translational symmetry; the corner of a cube displays rotational symmetry.

Trang 2

22-14 Robotics and Automation Handbook

22.4.1 Symmetric Multiple-View Rank Condition

There are many studies using instances of symmetry in the scene for reconstruction purposes [1, 3, 6, 19, 25, 28, 70, 73, 74]. Recently, a set of algorithms using symmetry for reconstruction from a single image has been developed [25]. The main idea is to use the so-called equivalent images encoded in a single image of a symmetric object. Figure 22.7 illustrates this notion for the case of a reflective symmetry. We attach an object coordinate frame to the symmetric object and set it as the reference frame. 3-D points $X$ and $X'$ are related by some symmetry transformation $g$ (in the object frame) with $g(X) = X'$. If the image is obtained from viewpoint $O$, then the image of $X'$ can be interpreted as the image of $X$ viewed from the virtual viewpoint $O'$, which is the correspondent of $O$ under the same symmetry transformation $g$. The image of $X'$ is called an equivalent image of $X$ viewed from $O'$. Therefore, given a single image of a symmetric object, we have multiple equivalent images of this object. The number of equivalent images equals the number of symmetries of the object.

In modern mathematics, the symmetries of an object are characterized by a symmetry group, with each element in the group representing a transformation under which the object is invariant [25, 68]. For example, a rectangle possesses two reflective symmetries and one rotational symmetry. We can use group-theoretic notation to define a 3-D symmetric object as in [25].

Definition 22.1 Let $S$ be a set of 3-D points. It is called a symmetric structure if there exists a non-trivial subgroup $G$ of the Euclidean group $E(3)$ acting on $S$ such that for any $g \in G$, $g$ defines an isomorphism from $S$ to itself. $G$ is called the symmetry group of $S$.

FIGURE 22.7 $X$ and $X'$ are corresponding points under the reflective symmetry transformation $g$ (expressed in the object frame) such that $X' = g(X)$. $N$ is the normal vector of the mirror plane. The motion between the camera frame and the object frame is $g_0$. Hence, the image of $X'$ in the real camera can be considered as the image of $X$ viewed by a virtual camera with pose $g_0 g$ with respect to the object frame, or $g' = g_0 g g_0^{-1}$ with respect to the real camera frame.

Trang 3

A Survey of Geometric Vision 22-15

Under this definition, the possible symmetries of a 3-D object are reflective symmetry, translational symmetry, rotational symmetry, and any combination of them. For any point $p \in S$ on the object, its symmetric correspondent under $g \in G$ is $g(p) \in S$. The images of $p$ and $g(p)$ are denoted as $x$ and $g(x)$.

Let the symmetry transformation in the object frame be $g = [R, T] \in G$ ($R \in O(3)$ and $T \in \mathbb{R}^3$). As illustrated in Figure 22.7, if the transformation from the object (reference) frame to the real camera frame is $g_0 = [R_0, T_0]$, the transformation from the reference frame to the virtual camera frame is $g_0 g$. Furthermore, the transformation from the real camera frame to the virtual camera frame is

$$g' = g_0\,g\,g_0^{-1} = [R', T'] = \left[\,R_0 R R_0^T,\;\; (I - R_0 R R_0^T)\,T_0 + R_0 T\,\right]. \tag{22.19}$$

For the symmetric object $S$, assume its symmetry group $G$ has $m$ elements $g_i = [R_i, T_i]$, $i = 1, 2, \ldots, m$. Then the transformation between the $i$th virtual camera and the real camera is $g_i' = [R_i', T_i']$, $i = 1, 2, \ldots, m$,

as can be calculated from Equation (22.19). Given any point $X \in S$ with image $x$ and its equivalent images $g_i(x)$, we can define the symmetric multiple-view matrix

$$M(x) = \begin{bmatrix} \widehat{g_1(x)}\,R_1'\,x & \widehat{g_1(x)}\,T_1' \\ \widehat{g_2(x)}\,R_2'\,x & \widehat{g_2(x)}\,T_2' \\ \vdots & \vdots \\ \widehat{g_m(x)}\,R_m'\,x & \widehat{g_m(x)}\,T_m' \end{bmatrix}.$$

According to Theorem 22.1, it satisfies the symmetric multiple-view rank condition $\operatorname{rank}(M(x)) \leq 1$.
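A minimal numpy sketch of this construction (our own; it assumes calibrated homogeneous image points and the relative motions $(R_i', T_i')$ as inputs) stacks the matrix and checks its rank:

```python
import numpy as np

hat = lambda v: np.array([[0.0, -v[2], v[1]],
                          [v[2], 0.0, -v[0]],
                          [-v[1], v[0], 0.0]])  # cross-product matrix

def symmetric_multiview_matrix(x, gx_list, motions):
    """Stack M(x) from an image point x, its equivalent images g_i(x),
    and the virtual-to-real relative motions (R_i', T_i')."""
    rows = [np.column_stack([hat(gx) @ R @ x, hat(gx) @ T])
            for gx, (R, T) in zip(gx_list, motions)]
    return np.vstack(rows)          # shape (3m, 2)

# For a genuine symmetric correspondence, the rank condition holds:
# np.linalg.matrix_rank(symmetric_multiview_matrix(x, gxs, gs)) <= 1
```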

22.4.2 Reconstruction from Symmetry

Using the symmetric multiple-view rank condition and a set of $n$ symmetric correspondences (points), we can solve for $g_i' = [R_i', T_i']$ and the structure of the points in $S$ in the object frame using an algorithm similar to Algorithm 22.2. However, a further step is still necessary to recover $g_0 = [R_0, T_0]$, the camera pose with respect to the object frame. This can be done by solving the following Lyapunov type of equations:

$$g_i'\,g_0 - g_0\,g_i = 0, \quad\text{or}\quad R_i' R_0 - R_0 R_i = 0 \;\text{ and }\; T_0 = (I - R_i')^{\dagger}\,(T_i' - R_0 T_i). \tag{22.22}$$

Since $g_i$ and $g_i'$ are known, $g_0$ can be solved for. The detailed treatment can be found in [25, 40].
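A sketch of one way to solve Equation (22.22) numerically (our own, assuming noiseless inputs and enough symmetry elements that the stacked rotation equations have a one-dimensional null space):

```python
import numpy as np

def pose_from_symmetry(sym_motions, rel_motions):
    """Solve Equation (22.22) for g0 = (R0, T0).
    sym_motions: list of (R_i, T_i), the symmetries in the object frame.
    rel_motions: list of (R_i', T_i'), virtual-to-real camera motions."""
    # Rotation part: R_i' R0 - R0 R_i = 0.  With row-major flattening,
    # vec(A @ X @ B) = (A kron B^T) vec(X), so stack the linear maps
    # over all i and take the (assumed one-dimensional) null space.
    A = np.vstack([np.kron(Rp, np.eye(3)) - np.kron(np.eye(3), R.T)
                   for (R, _), (Rp, _) in zip(sym_motions, rel_motions)])
    _, _, Vt = np.linalg.svd(A)
    R0 = Vt[-1].reshape(3, 3)              # null vector, up to scale
    U, _, Vt2 = np.linalg.svd(R0)          # project onto a rotation
    R0 = U @ Vt2
    if np.linalg.det(R0) < 0:
        R0 = -R0
    # Translation part: (I - R_i') T0 = T_i' - R0 T_i, in least squares.
    B = np.vstack([np.eye(3) - Rp for (Rp, _) in rel_motions])
    b = np.concatenate([Tp - R0 @ T
                        for (_, T), (_, Tp) in zip(sym_motions, rel_motions)])
    T0, *_ = np.linalg.lstsq(B, b, rcond=None)
    return R0, T0
```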

As can be seen in the example at the end of this section, symmetry-based reconstruction is very accurate and requires a minimal amount of data. A major reason is that the baseline between the real camera and the virtual one is often large due to the symmetry transformation. Accordingly, degenerate cases occur when the camera center is invariant under the symmetry transformation. For example, if the camera lies on the mirror plane of a reflective symmetry, the structure can only be recovered up to some ambiguities [25]. The reconstruction for some special cases can be simplified without explicitly using the symmetric multiple-view rank condition (e.g., see [3]). For a reflective symmetry, the structure with respect to the camera frame can be calculated using only two pairs of symmetric points. Assume the images of two pairs of reflective points are $x, x'$ and $y, y'$. Then the image line connecting $x$ and $x'$ is obtained as $\ell_x \sim \widehat{x}\,x'$; similarly, the line $\ell_y$ connecting $y$ and $y'$ satisfies $\ell_y \sim \widehat{y}\,y'$. It can be shown that the unit normal vector $N$ (see Figure 22.7) of the reflection plane satisfies

$$\begin{bmatrix} \ell_x^T \\ \ell_y^T \end{bmatrix} N = 0, \tag{22.23}$$

from which $N$ can be solved. Assume the depths of $x$ and $x'$ are $\lambda$ and $\lambda'$; then they can be calculated using


the following relationship:

$$\begin{bmatrix} \widehat{N}x & -\widehat{N}x' \\ N^T x & N^T x' \end{bmatrix} \begin{bmatrix} \lambda \\ \lambda' \end{bmatrix} = \begin{bmatrix} 0 \\ 2d \end{bmatrix}, \tag{22.24}$$

where d is the distance from the camera center to the reflection plane.
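For concreteness, here is a numpy sketch of the two-pair procedure (our own; it assumes calibrated homogeneous image points, noiseless data, and a known camera-to-plane distance $d$; the sign of $N$ from the null space is determined only up to the convention that keeps the depths positive):

```python
import numpy as np

hat = lambda v: np.array([[0.0, -v[2], v[1]],
                          [v[2], 0.0, -v[0]],
                          [-v[1], v[0], 0.0]])  # cross-product matrix

def reflective_two_pair(x, xp, y, yp, d):
    """Recover the mirror normal N and the depths of x, x' from two
    reflective point pairs, following Equations (22.23) and (22.24)."""
    lx = np.cross(x, xp)                 # image line through x and x'
    ly = np.cross(y, yp)                 # image line through y and y'
    _, _, Vt = np.linalg.svd(np.vstack([lx, ly]))
    N = Vt[-1]                           # null vector of [lx^T; ly^T]
    N /= np.linalg.norm(N)               # unit normal, sign ambiguous
    A = np.vstack([np.column_stack([hat(N) @ x, -hat(N) @ xp]),
                   [N @ x, N @ xp]])     # 4x2 system of Equation (22.24)
    b = np.array([0.0, 0.0, 0.0, 2.0 * d])
    (lam, lamp), *_ = np.linalg.lstsq(A, b, rcond=None)
    return N, lam, lamp
```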

22.4.2.1 Planar Symmetry

Planar symmetric objects are widely present in man-made environments. Regular shapes such as squares and rectangles are often good landmarks for mapping and recognition tasks. For a planar symmetric object, its symmetry forms a subgroup $G$ of $E(2)$ instead of $E(3)$. Let $g \in E(2)$ be one symmetry of a planar symmetric object. Recall that there exists a homography $H_0$ between the object frame and the camera frame. Also, between the original image and the equivalent image generated by $g$, there exists another homography $H$. It can be shown that $H = H_0\,g\,H_0^{-1}$. Therefore all the homographies generated from the equivalent images form a homography group $H_0 G H_0^{-1}$, which is conjugate to $G$ [3]. This fact can be used to test whether an image is the image of an object with the desired symmetry. Given the image, by calculating the homographies from all equivalent images based on the hypothesis, we can check the group relationship of the homographies and decide whether the desired symmetry exists [3]. In case the object is a rectangle, the calculation can be further simplified using the notion of vanishing points, as illustrated in the following example.

Example 22.3 (Symmetry-based reconstruction for a rectangular object)

For a rectangle in 3-D space, the two pairs of parallel edges generate two vanishing points $v_1$ and $v_2$ in the image. As a 3-D vector, $v_i$ ($i = 1, 2$) can also be interpreted as the vector from the camera center to the vanishing point and hence must be parallel to the corresponding pair of parallel edges. Therefore, we must have $v_1 \perp v_2$. So by checking the angle between the two vanishing points, we can decide whether a region can be the image of a rectangle. Figure 22.8 demonstrates the reconstruction based on the assumption of a rectangle. The two sides of the cube are two rectangles (in fact squares). The angles between the vanishing points are 89.1° and 92.5°, respectively. The reconstruction is performed using only one image and six points. The angle between the two recovered planes is 89.2°.
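To make the test concrete, here is a minimal numpy sketch (our own, not from the chapter) that computes the two vanishing points of a four-sided region from its corner points and checks their orthogonality; the corner ordering and the calibrated-camera assumption are ours:

```python
import numpy as np

def rectangle_vanishing_test(corners, tol_deg=5.0):
    """Decide whether an ordered four-sided region could be the image
    of a 3-D rectangle.  corners: 4 homogeneous image points in
    calibrated coordinates, in order around the quadrilateral."""
    p = [np.asarray(c, dtype=float) for c in corners]
    # Edge lines as cross products of consecutive corner points.
    lines = [np.cross(p[i], p[(i + 1) % 4]) for i in range(4)]
    # Vanishing points: intersections of the two opposite-edge pairs.
    v1 = np.cross(lines[0], lines[2])
    v2 = np.cross(lines[1], lines[3])
    v1, v2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)
    # abs() removes the sign ambiguity of homogeneous vectors, so the
    # reported angle lies in [0, 90] degrees.
    angle = np.degrees(np.arccos(np.clip(abs(v1 @ v2), 0.0, 1.0)))
    return abs(angle - 90.0) <= tol_deg, angle
```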

22.4.3 Further Readings

22.4.3.1 Symmetry and Vision

Symmetry is a strong cue in human visual perception and has been extensively discussed in psychology and cognition research [43, 45, 47]. It has also been noticed that symmetry is useful for face recognition [61, 62, 63].

22.4.3.2 Symmetry in Statistical Context

Besides the geometric symmetry discussed in this section, symmetry in the statistical sense has also been studied and utilized. In fact, the computational advantages of symmetry were first explored in this statistical context, such as in the study of isotropic textures [17, 42, 69]. The work of [14, 15, 42] provided a wide range of efficient algorithms for recovering the orientation of a textured plane based on the assumption of isotropy or weak isotropy.

22.4.3.3 Symmetry of Surfaces and Curves

While we have only utilized symmetric points in this chapter, the symmetry of surfaces and curves has also been exploited. References [55, 72] used surface symmetry for human face reconstruction; references [19, 25] studied the reconstruction of symmetric curves.


FIGURE 22.9 Top: a picture of the UAV during landing. Bottom left: an image of the landing pad viewed from an on-board camera. Bottom right: extracted corner features from the image of the landing pad. (Photo courtesy of O. Shakernia.)

to the results obtained from differential GPS and INS sensors (Figure 22.10, bottom). The overall error for this algorithm is less than 5 cm in distance and 4° in rotation [51].

22.5.2 Automatic Symmetry Cell Detection, Matching and Reconstruction

In Section 22.4, we discussed symmetry-based reconstruction techniques from a single image. Here we present a comprehensive example that performs symmetry-based reconstruction from multiple views [3, 28]. In this example, the image primitives are no longer points or lines; instead we use symmetry cells as features. The example includes three steps.

22.5.2.1 Feature Extraction

By symmetry cell we mean a region in the image that is the image of a desired symmetric object in 3-D. In this example, the symmetric object we choose is the rectangle, so the symmetry cells are images of rectangles. Detecting symmetry cells (rectangles) involves two steps. First, we perform color-based segmentation on the image. Then, for each detected four-sided region, we test whether its two vanishing points are perpendicular to each other and decide whether it can be the image of a rectangle in 3-D space. Figure 22.11 demonstrates this for the picture of an indoor scene. In this picture, after color segmentation, all four-sided regions of reasonable size are detected and marked with darkened boundaries. Then each four-sided polygon is passed through the vanishing-point test. The polygons passing the test are denoted symmetry cells, and their object frames are recovered with the symmetry-based algorithm. Each individual symmetry cell is recovered


FIGURE 22.13 Camera poses and cell structure recovered. From left to right: top, side, and frontal views of the cells and camera poses.

between the second and third images. For applications such as robotic mapping, the symmetry cells can serve as "landmarks." The pose and motion of the robot can be easily derived using a similar scheme.

22.5.3 Semiautomatic Building Mapping and Reconstruction

If a large number of similar objects is present, it is usually hard for the detection and matching scheme in the above example to work properly. For instance, for the symmetric window complexes on the side of the building shown in Figure 22.14, many ambiguous matches may occur. In such cases, manual intervention is needed to obtain a realistic 3-D reconstruction (e.g., see [28]). The techniques discussed so far, however, help to minimize the amount of manual intervention. For the images in Figure 22.14, the user only needs to point out cells and provide the cell correspondence information. The system will then automatically generate a consistent set of camera poses from the matched cells, as displayed in Figure 22.15.

FIGURE 22.14 Five images used for the reconstruction of a building. For the first four images, we mark a few cells manually. The last image is used only for extracting roof information.


[4] Canny, J.F., A computational approach to edge detection, IEEE Trans. Pattern Anal. Mach. Intelligence, 8(6):679–698, 1986.
[5] Caprile, B. and Torre, V., Using vanishing points for camera calibration, Int. J. Computer Vision, 4(2):127–140, 1990.
[6] Carlsson, S., Symmetry in perspective, in Proc. Eur. Conf. Computer Vision, pp. 249–263, 1998.
[7] Corke, P.I., Visual Control of Robots: High-Performance Visual Servoing, Robotics and Mechatronics Series, Research Studies Press, Somerset, England, 1996.
[8] Costeira, J. and Kanade, T., A multi-body factorization method for motion analysis, in Proc. IEEE Int. Conf. Computer Vision, pp. 1071–1076, 1995.
[9] Das, A.K., Fierro, R., Kumar, V., Southball, B., Spletzer, J., and Taylor, C.J., Real-time vision-based control of a nonholonomic mobile robot, in Proc. IEEE Int. Conf. Robotics and Automation, pp. 1714–1719, 2002.
[10] Faugeras, O., Three-Dimensional Computer Vision, MIT Press, Cambridge, MA, 1993.
[11] Faugeras, O., Stratification of three-dimensional vision: projective, affine, and metric representations, J. Opt. Soc. Am., 12(3):465–484, 1995.
[12] Faugeras, O. and Luong, Q.-T., Geometry of Multiple Images, MIT Press, Cambridge, MA, 2001.
[13] Fischler, M.A. and Bolles, R.C., Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography, Commun. ACM, 24(6):381–395, 1981.
[14] Gärding, J., Shape from texture for smooth curved surfaces in perspective projection, J. Math. Imaging and Vision, 2(4):327–350, 1992.
[15] Gärding, J., Shape from texture and contour by weak isotropy, J. Artificial Intelligence, 64(2):243–297, 1993.
[16] Geyer, C. and Daniilidis, K., Properties of the catadioptric fundamental matrix, in Proc. Eur. Conf. Computer Vision, Copenhagen, Denmark, Vol. 2, pp. 140–154, 2002.
[17] Gibson, J., The Perception of the Visual World, Houghton Mifflin, Boston, 1950.
[18] Gonzalez, R. and Woods, R., Digital Image Processing, Addison-Wesley, Reading, MA, 1992.
[19] Gool, L.V., Moons, T., and Proesmans, M., Mirror and point symmetry under perspective skewing, in Proc. IEEE Int. Conf. Computer Vision & Pattern Recognition, San Francisco, pp. 285–292, 1996.
[20] Gruen, A. and Huang, T., Calibration and Orientation of Cameras in Computer Vision, Information Sciences, Springer-Verlag, Heidelberg, 2001.
[21] Han, M. and Kanade, T., Reconstruction of a scene with multiple linearly moving objects, in Proc. Int. Conf. Computer Vision & Pattern Recognition, Vol. 2, pp. 542–549, 2000.
[22] Harris, C. and Stephens, M., A combined corner and edge detector, in Proc. Alvey Conf., pp. 189–192, 1988.
[23] Hartley, R. and Zisserman, A., Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.
[24] Heyden, A. and Sparr, G., Reconstruction from calibrated cameras: a new proof of the Kruppa-Demazure theorem, J. Math. Imaging and Vision, pp. 1–20, 1999.
[25] Hong, W., Yang, Y., Huang, K., and Ma, Y., On symmetry and multiple view geometry: structure, pose and calibration from a single image, Int. J. Computer Vision, in press.
[26] Horn, B., Robot Vision, MIT Press, Cambridge, MA, 1986.
[27] Huang, K., Fossum, R., and Ma, Y., Generalized rank conditions in multiple view geometry with applications to dynamical scenes, in Proc. 6th Eur. Conf. Computer Vision, Copenhagen, Denmark, Vol. 2, pp. 201–216, 2002.
[28] Huang, K., Yang, A.Y., Hong, W., and Ma, Y., Large-baseline matching and reconstruction using symmetry cells, in Proc. IEEE Int. Conf. Robotics and Automation, pp. 1418–1423, 2004.
[29] Huang, T. and Faugeras, O., Some properties of the E matrix in two-view motion estimation, IEEE Trans. Pattern Anal. Mach. Intelligence, 11(12):1310–1312, 1989.
[30] Hutchinson, S., Hager, G.D., and Corke, P.I., A tutorial on visual servo control, IEEE Trans. Robotics and Automation, pp. 651–670, 1996.
[31] Kanatani, K., Geometric Computation for Machine Vision, Oxford Science Publications, 1993.


[32] Kruppa, E., Zur Ermittlung eines Objektes aus zwei Perspektiven mit innerer Orientierung, Sitz.-Ber. Akad. Wiss., Math. Naturw., Kl. Abt. IIa, 122:1939–1948, 1913.
[33] Longuet-Higgins, H.C., A computer algorithm for reconstructing a scene from two projections, Nature, 293:133–135, 1981.
[34] Lucas, B.D. and Kanade, T., An iterative image registration technique with an application to stereo vision, in Proc. Seventh Int. J. Conf. Artificial Intelligence, pp. 674–679, 1981.
[35] Ma, Y., Huang, K., and Kosecka, J., New rank deficiency condition for multiple view geometry of line features, UIUC Technical Report, UILU-ENG 01-2209 (DC-201), May 8, 2001.
[36] Ma, Y., Huang, K., and Vidal, R., Rank deficiency of the multiple view matrix for planar features, UIUC Technical Report, UILU-ENG 01-2209 (DC-201), May 18, 2001.
[37] Ma, Y., Huang, K., Vidal, R., Kosecka, J., and Sastry, S., Rank conditions of multiple view matrix in multiple view geometry, Int. J. Computer Vision, 59:115–137, 2004.
[38] Ma, Y., Kosecka, J., and Huang, K., Rank deficiency condition of the multiple view matrix for mixed point and line features, in Proc. Asian Conf. Computer Vision, Sydney, Australia, 2002.
[39] Ma, Y., Kosecka, J., and Sastry, S., Motion recovery from image sequences: discrete viewpoint vs. differential viewpoint, in Proc. Eur. Conf. Computer Vision, Vol. 2, pp. 337–353, 1998.
[40] Ma, Y., Soatto, S., Kosecka, J., and Sastry, S., An Invitation to 3-D Vision: From Images to Geometric Models, Springer-Verlag, Heidelberg, 2003.
[41] Ma, Y., Vidal, R., Kosecka, J., and Sastry, S., Kruppa's equations revisited: its degeneracy, renormalization and relations to chirality, in Proc. Eur. Conf. Computer Vision, Dublin, Ireland, 2000.
[42] Malik, J. and Rosenholtz, R., Computing local surface orientation and shape from texture for curved surfaces, Int. J. Computer Vision, 23:149–168, 1997.
[43] Marr, D., Vision: A Computational Investigation into the Human Representation and Processing of Visual Information, W.H. Freeman, San Francisco, 1982.
[44] Maybank, S., Theory of Reconstruction from Image Motion, Springer Series in Information Sciences, Springer-Verlag, Heidelberg, 1993.
[45] Morales, D. and Pashler, H., No role for colour in symmetry perception, Nature, 399:115–116, May 1999.
[46] Nister, D., An efficient solution to the five-point relative pose problem, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, Madison, U.S.A., 2003.
[47] Palmer, S.E., Vision Science: Photons to Phenomenology, MIT Press, Cambridge, MA, 1999.
[48] Poelman, C.J. and Kanade, T., A paraperspective factorization method for shape and motion recovery, IEEE Trans. Pattern Anal. Mach. Intelligence, 19(3):206–218, 1997.
[49] Quan, L. and Kanade, T., A factorization method for affine structure from line correspondences, in Proc. Int. Conf. Computer Vision & Pattern Recognition, pp. 803–808, 1996.
[50] Robert, L., Zeller, C., Faugeras, O., and Hebert, M., Applications of nonmetric vision to some visually guided tasks, in Aloimonos, I., Ed., Visual Navigation, pp. 89–135, 1996.
[51] Shakernia, O., Vidal, R., Sharp, C., Ma, Y., and Sastry, S., Multiple view motion estimation and control for landing an unmanned aerial vehicle, in Proc. Int. Conf. Robotics and Automation, 2002.
[52] Shashua, A., Trilinearity in visual recognition by alignment, in Proc. Eur. Conf. Computer Vision, Springer-Verlag, Heidelberg, pp. 479–484, 1994.
[53] Shashua, A. and Wolf, L., On the structure and properties of the quadrifocal tensor, in Proc. Eur. Conf. Computer Vision, Vol. I, Springer-Verlag, Heidelberg, pp. 711–724, 2000.
[54] Shi, J. and Tomasi, C., Good features to track, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 593–600, 1994.
[55] Shimshoni, I., Moses, Y., and Lindenbaum, M., Shape reconstruction of 3D bilaterally symmetric surfaces, Int. J. Computer Vision, 39:97–112, 2000.
[56] Spetsakis, M. and Aloimonos, Y., Structure from motion using line correspondences, Int. J. Computer Vision, 4(3):171–184, 1990.
[57] Sturm, P. and Triggs, B., A factorization based algorithm for multi-image projective structure and motion, in Proc. Eur. Conf. Computer Vision, pp. 709–720, 1996.


[58] Thrun, S., Robotic mapping: a survey, CMU Technical Report, CMU-CS-02-111, February 2002.
[59] Tomasi, C. and Kanade, T., Shape and motion from image streams under orthography, Int. J. Computer Vision, 9(2):137–154, 1992.
[60] Triggs, B., Autocalibration from planar scenes, in Proc. IEEE Conf. Computer Vision and Pattern Recognition, 1998.
[61] Troje, N.F. and Bulthoff, H.H., How is bilateral symmetry of human faces used for recognition of novel views, Vision Research, 38(1):79–89, 1998.
[62] Vetter, T. and Poggio, T., Symmetric 3D objects are an easy case for 2D object recognition, Spatial Vision, 8:443–453, 1994.
[63] Vetter, T., Poggio, T., and Bulthoff, H.H., The importance of symmetry and virtual views in three-dimensional object recognition, Current Biology, 4:18–23, 1994.
[64] Vidal, R., Ma, Y., Hsu, S., and Sastry, S., Optimal motion estimation from multiview normalized epipolar constraint, in Proc. IEEE Int. Conf. Computer Vision, Vancouver, Canada, 2001.
[65] Vidal, R. and Oliensis, J., Structure from planar motion with small baselines, in Proc. Eur. Conf. Computer Vision, Copenhagen, Denmark, pp. 383–398, 2002.
[66] Vidal, R., Soatto, S., Ma, Y., and Sastry, S., Segmentation of dynamic scenes from the multibody fundamental matrix, in Proc. ECCV Workshop on Vision and Modeling of Dynamic Scenes, 2002.
[67] Weng, J., Huang, T.S., and Ahuja, N., Motion and Structure from Image Sequences, Springer-Verlag, Heidelberg, 1993.
[68] Weyl, H., Symmetry, Princeton University Press, 1952.
[69] Witkin, A.P., Recovering surface shape and orientation from texture, J. Artificial Intelligence, 17:17–45, 1988.
[70] Yang, A.Y., Hong, W., and Ma, Y., Structure and pose from single images of symmetric objects with applications in robot navigation, in Proc. Int. Conf. Robotics and Automation, Taipei, 2003.
[71] Zhang, Z., A flexible new technique for camera calibration, Microsoft Technical Report MSR-TR-98-71, 1998.
[72] Zhao, W.Y. and Chellappa, R., Symmetric shape-from-shading using self-ratio image, Int. J. Computer Vision, 45(1):55–75, 2001.
[73] Zisserman, A., Mukherjee, D.P., and Brady, J.M., Shape from symmetry: detecting and exploiting symmetry in affine images, Phil. Trans. Royal Soc. London A, 351:77–106, 1995.
[74] Zisserman, A., Rothwell, C.A., Forsyth, D.A., and Mundy, J.L., Extracting projective structure from single perspective views of 3D point sets, in Proc. IEEE Int. Conf. Computer Vision, 1993.


Haptic Interface to Virtual Environments

R. Brent Gillespie

University of Michigan

23.1 Introduction
  Related Technologies • Some Classifications • Applications
23.2 Specifications
  Characterizing the Human User • Specification and Design of the Haptic Interface • Design of the Virtual Environment
23.3 Human Haptics
  Some Observations • Some History in Haptics • Anticipatory Control
23.4 Haptic Interface
  Compensation Based on System Models • Passivity Applied to Haptic Interface
23.5 Virtual Environment
  Collision Detector • Interaction Calculator • Forward Dynamics Solver
23.6 Concluding Remarks

23.1 Introduction

A haptic interface is a motorized and instrumented device that allows a human user to touch and manipulate objects within a virtual environment. As shown in Figure 23.1, the haptic interface intervenes between the user and the virtual environment, making a mechanical contact with the user and an electrical connection with the virtual environment. At the mechanical contact, force and motion are either measured by sensors or driven by motors. At the electrical connection, signals are transmitted that represent the force and motion occurring at a simulated mechanical contact with a virtual object. A controller within the haptic interface processes the various signals and attempts to ensure that the force and motion signals describing the mechanical contact track, or in some sense follow, the force and motion signals at the simulated contact. The motors on the haptic interface device provide the authority by which the controller ensures tracking.

A well-designed haptic device and controller will cause the behaviors at the mechanical and simulated contacts to be "close" to one another. In so doing, it will extend the mechanical sensing and manipulation capabilities of a human user into the virtual environment.
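To make the signal flow concrete, here is a minimal impedance-style rendering loop (our own sketch, not the chapter's controller; `device`, its two methods, and the virtual-wall environment are hypothetical):

```python
import time

STIFFNESS = 2000.0   # N/m, stiffness of a virtual wall at x = 0
DT = 0.001           # 1 kHz servo period, a typical haptics rate

def haptic_loop(device):
    """Read motion at the mechanical contact, evaluate the simulated
    contact force in the virtual environment, and command the motors
    so the rendered force tracks the simulated one."""
    while True:
        x = device.read_position()              # sensed motion (m)
        f = -STIFFNESS * x if x > 0.0 else 0.0  # penetration -> wall force
        device.command_force(f)                 # motors render the force
        time.sleep(DT)                          # stand-in for a RT timer
```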

23.1.1 Related Technologies

Naturally, the field of haptic interface owes much to the significantly older field of telerobotics. A telerobot intervenes between a human user and a remote physical environment rather than between a user and a computationally mediated virtual environment. In addition to a master manipulator (to which the haptic
