
Handbook of Augmented Reality


DOCUMENT INFORMATION

Title: Handbook of Augmented Reality
Author: Borko Furht
Institution: Florida Atlantic University
Field: Computer Science
Type: Reference book
Year of publication: 2011
City: Boca Raton
Pages: 769
File size: 19.83 MB




Handbook of Augmented Reality

Borko Furht

Department of Computer and Electrical Engineering and Computer Science

Florida Atlantic University

Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2011933565

© Springer Science+Business Media, LLC 2011

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)

Preface

Augmented Reality (AR) refers to a live view of a physical real-world environment whose elements are merged with computer-generated imagery, creating a mixed reality. The augmentation is typically done in real time and in semantic context with environmental elements. By using the latest AR techniques and technologies, the information about the surrounding real world becomes interactive and digitally usable.

The objective of this Handbook is to provide comprehensive guidelines on the current and future trends in augmented reality technologies and applications. This Handbook is a carefully edited book: the contributors are worldwide experts in the field of augmented reality and its applications. The Handbook Advisory Board, comprised of 11 researchers and practitioners from academia and industry, helped in reshaping the Handbook and selecting the right topics and creative and knowledgeable contributors.

The Handbook comprises two parts, which consist of 33 chapters. The first part, on Technologies, includes articles dealing with the fundamentals of augmented reality, augmented reality technologies, visualization techniques, head-mounted projection displays, evaluation of AR systems, mobile AR systems, and other innovative AR concepts.

The second part, on Applications, includes various articles on AR applications, including applications in psychology, medical education, edutainment, reality games, rehabilitation engineering, automotive safety, product development and manufacturing, military applications, exhibition and entertainment, geographic information systems, and others.

With the dramatic growth of augmented reality and its applications, this book can be the definitive resource for persons working in this field as researchers, scientists, programmers, engineers, and users. The book is intended for a wide variety of people including academicians, designers, developers, educators, engineers, practitioners, researchers, and graduate students. This book can also be beneficial for business managers, entrepreneurs, and investors. The book has great potential to be adopted as a textbook in current and new courses on Augmented Reality.

The main features of this Handbook can be summarized as follows:

1. The Handbook describes and evaluates the current state of the art in the field of augmented reality.
2. The book presents current trends and concepts of augmented reality, technologies and techniques, AR devices, interfaces, tools, and systems applied in AR, as well as current and future applications.
3. Contributors to the Handbook are leading researchers from academia and practitioners from industry.

We would like to thank the authors for their contributions. Without their expertise and effort this Handbook would never have come to fruition. Springer editors and staff also deserve our sincere recognition for their support throughout the project.

Borko Furht is a professor and chairman of the Department of Computer and Electrical Engineering and Computer Science at Florida Atlantic University (FAU) in Boca Raton, Florida. He is also Director of the NSF-sponsored Industry/University Cooperative Research Center on Advanced Knowledge Enablement. Before joining FAU, he was a vice president of research and a senior director of development at Modcomp (Ft. Lauderdale), a computer company of Daimler Benz, Germany, a professor at the University of Miami in Coral Gables, Florida, and a senior researcher in the Institute Boris Kidric-Vinca, Yugoslavia. Professor Furht received a Ph.D. degree in electrical and computer engineering from the University of Belgrade. His current research is in multimedia systems, video coding and compression, 3D video and image systems, wireless multimedia, Internet, cloud computing, and social networks. He is presently Principal Investigator and Co-PI of several multiyear, multimillion dollar projects, including an NSF PIRE project and the NSF High-Performance Computing Center. He is the author of numerous books and articles in the areas of multimedia, computer architecture, real-time computing, and operating systems. He is a founder and editor-in-chief of the Journal of Multimedia Tools and Applications (Springer). He has received several technical and publishing awards, and has consulted for many high-tech companies including IBM, Hewlett-Packard, Xerox, General Electric, JPL, NASA, Honeywell, and RCA. He has also served as a consultant to various colleges and universities. He has given many invited talks, keynote lectures, seminars, and tutorials. He serves as Chairman and Director on the Board of Directors of several high-tech companies.

Contents

Part I Technologies

1. Augmented Reality: An Overview, p. 3. Julie Carmigniani and Borko Furht
2. New Augmented Reality Taxonomy: Technologies and Features of Augmented Environment, p. 47. Olivier Hugues, Philippe Fuchs, and Olivier Nannipieri
3. Visualization Techniques for Augmented Reality, p. 65. Denis Kalkofen, Christian Sandor, Sean White, and Dieter Schmalstieg
4. Mobile Augmented Reality Game Engine, p. 99. Jian Gu and Henry B.L. Duh
5. Head-Mounted Projection Display Technology and Applications, p. 123. Hong Hua, Leonard D. Brown, and Rui Zhang
6. Wireless Displays in Educational Augmented Reality Applications, p. 157. Hannes Kaufmann and Mathis Csisinko
7. Mobile Projection Interfaces for Augmented Reality Applications, p. 177. Markus Löchtefeld, Antonio Krüger, and Michael Rohs
8. Interactive Volume Segmentation and Visualization in Augmented Reality, p. 199. Takehiro Tawara
9. Virtual Roommates: Sampling and Reconstructing Presence in Multiple Shared Spaces, p. 211. Andrei Sherstyuk and Marina Gavrilova

10. Large Scale Spatial Augmented Reality for Design and Prototyping, p. 231. Michael R. Marner, Ross T. Smith, Shane R. Porter, Markus M. Broecker, Benjamin Close, and Bruce H. Thomas
11. Markerless Tracking for Augmented Reality, p. 255. Jan Herling and Wolfgang Broll
12. Enhancing Interactivity in Handheld AR Environments, p. 273. Masahito Hirakawa, Shu'nsuke Asai, Kengo Sakata, Shuhei Kanagu, Yasuhiro Sota, and Kazuhiro Koyama
13. Evaluating Augmented Reality Systems, p. 289. Andreas Dünser and Mark Billinghurst
14. Situated Simulations Between Virtual Reality and Mobile Augmented Reality: Designing a Narrative Space, p. 309. Gunnar Liestøl
15. Referencing Patterns in Collaborative Augmented Reality, p. 321. Jeff Chastine
16. QR Code Based Augmented Reality Applications, p. 339. Tai-Wei Kan, Chin-Hung Teng, and Mike Y. Chen
17. Evolution of a Tracking System, p. 355. Sebastian Lieberknecht, Quintus Stierstorfer, Georg Kuschk, Daniel Ulbricht, Marion Langer, and Selim Benhimane
18. Navigation Techniques in Augmented and Mixed Reality: Crossing the Virtuality Continuum, p. 379. Raphael Grasset, Alessandro Mulloni, Mark Billinghurst, and Dieter Schmalstieg
19. Survey of Use Cases for Mobile Augmented Reality Browsers, p. 409. Tia Jackson, Frank Angermann, and Peter Meier

Part II Applications

20. Augmented Reality for Nano Manipulation, p. 435. Ning Xi, Bo Song, Ruiguo Yang, and King Lai
21. Augmented Reality in Psychology, p. 449. M. Carmen Juan and David Pérez
22. Environmental Planning Using Augmented Reality, p. 463. Jie Shen
23. Mixed Reality Manikins for Medical Education, p. 479. Andrei Sherstyuk, Dale Vincent, Benjamin Berg, and Anton Treskunov

… and Pervasive Augmented Reality Games, p. 541. Pedro Ferreira and Fernando Boavida
27. 3D Medical Imaging and Augmented Reality for Image-Guided Surgery, p. 589. Hongen Liao
28. Augmented Reality in Assistive Technology and Rehabilitation Engineering, p. 603. S.K. Ong, Y. Shen, J. Zhang, and A.Y.C. Nee
29. Using Augmentation Techniques for Performance Evaluation in Automotive Safety, p. 631. Jonas Nilsson, Anders C.E. Ödblom, Jonas Fredriksson, and Adeel Zafar
30. Augmented Reality in Product Development and Manufacturing, p. 651. S.K. Ong, J. Zhang, Y. Shen, and A.Y.C. Nee
31. Military Applications of Augmented Reality, p. 671. Mark A. Livingston, Lawrence J. Rosenblum, Dennis G. Brown, Gregory S. Schmidt, Simon J. Julier, Yohan Baillot, J. Edward Swan II, Zhuming Ai, and Paul Maassel
32. Augmented Reality in Exhibition and Entertainment for the Public, p. 707. Yetao Huang, Zhiguo Jiang, Yue Liu, and Yongtian Wang
33. GIS and Augmented Reality: State of the Art and Issues, p. 721. Olivier Hugues, Jean-Marc Cieutat, and Pascal Guitton

Index, p. 741

Handbook Advisory Board

Borko Furht, Florida Atlantic University, Boca Raton, Florida, USA

Members:

Mark Billinghurst, University of Canterbury, New Zealand
Henry D.H. Duh, National University of Singapore, Singapore
Hong Hua, The University of Arizona, Tucson, Arizona, USA
Denis Kalkofen, Graz University of Technology, Graz, Austria
Hannes Kaufmann, Vienna University of Technology, Vienna, Austria
Hongen Liao, The University of Tokyo, Tokyo, Japan
Sebastian Lieberknecht, Research, metaio GmbH, Munich, Germany
Mark A. Livingston, Naval Research Laboratory, Washington DC, USA
Andrei Sherstyuk, Avatar Reality, Honolulu, Hawaii, USA
Bruce Thomas, The University of South Australia, Mawson Lakes, Australia
Ning Xi, Michigan State University, East Lansing, MI, USA

Contributors

Zhuming Ai Naval Research Laboratory, Washington, DC, USA
Frank Angermann Metaio, Munich, Germany, Frank.Angermann@metaio.com
Shu'nsuke Asai Shimane University, Shimane, Japan
Yohan Baillot Naval Research Laboratory, Washington, DC, USA
Francesca Beatrice Instituto Universitario de Automática e Informática Industrial, Universidad Politécnica de Valencia, Valencia, Spain
Selim Benhimane Research, metaio GmbH, Munich, Germany
Benjamin Berg SimTiki Simulation Center, University of Hawaii, Honolulu HI, USA, bwberg@hawaii.edu
Mark Billinghurst The Human Interface Technology Laboratory, New Zealand (HIT Lab NZ), The University of Canterbury, Christchurch, New Zealand, mark.billinghurst@canterbury.ac.nz
Lisa Blum Collaborative Virtual and Augmented Environments, Fraunhofer FIT, Schloss Birlinghoven, 53754 Sankt Augustin, Germany, lisa.blum@fit.fraunhofer.de
Fernando Boavida Centre for Informatics and Systems, University of Coimbra, Portugal
Markus M. Broecker University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, markus.broecker@unisa.edu.au
Wolfgang Broll Collaborative Virtual and Augmented Environments, Fraunhofer FIT, Schloss Birlinghoven, 53754 Sankt Augustin, Germany, wolfgang.broll@fit.fraunhofer.de; Department of Virtual Worlds and Digital Games, Ilmenau University of Technology, Ilmenau, Germany, wolfgang.broll@tu-ilmenau.de

Leonard D. Brown Department of Computer Science, The University of Arizona, Tucson, Arizona, USA
Dennis G. Brown Naval Research Laboratory, Washington, DC, USA
Julie Carmigniani Department of Computer and Electrical Engineering and Computer Sciences, Florida Atlantic University, Boca Raton, Florida, USA, jcarmign@fau.edu
Jeff Chastine Department of Computing and Software Engineering, Southern Polytechnic State University, Marietta, Georgia, USA, jchastin@spsu.edu
Mike Y. Chen Yuan Ze University, Taiwan, 7533967@gmail.com
Jean-Marc Cieutat ESTIA Recherche, Bidart, France, j.cieutat@estia.fr
Benjamin Close University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, benjamin.close@clearchain.com
Mathis Csisinko Institute of Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria
Henry B.L. Duh Department of Electrical and Computer Engineering/Interactive and Digital Media Institute, National University of Singapore, Singapore, duhbl@acm.org
Andreas Dünser The Human Interface Technology Laboratory, New Zealand (HIT Lab NZ), The University of Canterbury, Christchurch, New Zealand, andreas.duenser@canterbury.ac.nz
J. Edward Swan II Naval Research Laboratory, Washington, DC, USA
Pedro Ferreira Centre for Informatics and Systems, University of Coimbra, Portugal, pmferr@dei.uc.pt
Jonas Fredriksson Chalmers University of Technology, Department of Signals and Systems, Gothenburg, Sweden, jonas.fredriksson@chalmers.se
Philippe Fuchs Virtual Reality and Augmented Reality Team, École des Mines ParisTech, Paris, France, philippe.fuchs@ensmp.fr
Borko Furht Department of Computer and Electrical Engineering and Computer Science, Florida Atlantic University, Boca Raton, Florida, USA, bfurht@fau.edu
Marina Gavrilova University of Calgary, Canada, mgavrilo@ucalgary.ca
Raphael Grasset HIT Lab NZ, University of Canterbury, New Zealand,

Hong Hua College of Optical Sciences, The University of Arizona, Tucson, Arizona, USA, hhua@optics.arizona.edu
Yetao Huang Beihang University, Beijing, China, 6666@bit.edu.cn
Olivier Hugues ESTIA Recherche, MaxSea, LaBRI, Bidart, France, o.hugues@net.estia.fr
Tia Jackson Metaio, Munich, Germany, tia.jackson@metaio.com
Zhiguo Jiang Beihang University, Beijing, China
M. Carmen Juan Instituto Universitario de Automática e Informática Industrial, Universitat Politècnica de València, C/Camino de Vera, s/n, 46022-Valencia, Spain, mcarmen@dsic.upv.es
Simon J. Julier Naval Research Laboratory, Washington, DC, USA
Denis Kalkofen Institute for Computer Graphics and Vision, Graz University of Technology, Graz, Austria, kalkofen@icg.tugraz.at
Tai-Wei Kan Graduate Institute of Networking and Multimedia, National Taiwan University, Taiwan, 7533967@gmail.com; d99944001@ntu.edu.tw
Shuhei Kanagu Roots Co. Ltd, Shimane, Japan, kanagu@roots.selfip.com
Hannes Kaufmann Institute of Software Technology and Interactive Systems, Vienna University of Technology, Vienna, Austria, kaufmann@ims.tuwien.ac.at
Kazuhiro Koyama Roots Co. Ltd, Shimane, Japan, koyama@roots.selfip.com
Antonio Krüger German Research Center for Artificial Intelligence DFKI, University of Saarland, Saarbrücken, Germany
Georg Kuschk Research, metaio GmbH, Munich, Germany
King Lai Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA
Marion Langer Research, metaio GmbH, Munich, Germany
Hongen Liao The University of Tokyo, Tokyo, Japan, liao@bmpe.t.u-tokyo.ac.jp
Sebastian Lieberknecht Research, metaio GmbH, Munich, Germany, Sebastian.Lieberknecht@metaio.com

Gunnar Liestøl Department of Media & Communication, University of Oslo, Norway, gunnar.liestol@media.uio.no
Yue Liu Beijing Institute of Technology, Beijing, China
Mark A. Livingston Naval Research Laboratory, Washington, DC, USA, mark.livingston@nrl.navy.mil
Markus Löchtefeld German Research Center for Artificial Intelligence DFKI, University of Saarland, Saarbrücken, Germany, markus.loechtefeld@dfki.de
Paul Maassel Naval Research Laboratory, Washington, DC, USA
Michael R. Marner University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, michael.marner@unisa.edu.au
Peter Meier Metaio, Munich, Germany
Alessandro Mulloni Institute for Computer Graphics and Vision, Graz University of Technology, Austria, mulloni@icg.tugraz.at
Olivier Nannipieri Université du Sud, Toulon, and Université de la Méditerranée, Marseille, France, fk.olivier@mac.com
A.Y.C. Nee Department of Mechanical Engineering, National University of Singapore, Singapore, mpeneeyc@nus.edu.sg
Jonas Nilsson Vehicle Dynamics and Active Safety Centre, Volvo Car Corporation, Gothenburg, Sweden; Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden, jnilss94@volvocars.com
Anders C.E. Ödblom Volvo Car Corporation, Gothenburg, Sweden, aodblom1@volvocars.com
S.K. Ong Department of Mechanical Engineering, National University of Singapore, Singapore, mpeongsk@nus.edu.sg
Leif Oppermann Collaborative Virtual and Augmented Environments, Fraunhofer FIT, Schloss Birlinghoven, 53754 Sankt Augustin, Germany, leif.oppermann@fit.fraunhofer.de
David Pérez Instituto de Investigación en Bioingeniería y Tecnología Orientada al Ser Humano, Universitat Politècnica de València, Valencia, Spain
Shane R. Porter University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, shane.porter@unisa.edu.au
Michael Rohs Department of Applied Informatics and Media Informatics, Ludwig-Maximilians-University (LMU) Munich, Munich, Germany
Lawrence J. Rosenblum Naval Research Laboratory, Washington, DC, USA

Gregory S. Schmidt Naval Research Laboratory, Washington, DC, USA
Jie Shen School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China, zeropoint17@hotmail.com
Y. Shen Mechanical Engineering Department, National University of Singapore, Singapore
Andrei Sherstyuk Avatar Reality Inc., Honolulu HI, USA, andrei@avatar-reality.com
Ross T. Smith University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, ross@r-smith.net
Bo Song Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA
Yasuhiro Sota Roots Co. Ltd, Shimane, Japan, sota@roots.selfip.com
Quintus Stierstorfer Research, metaio GmbH, Munich, Germany
Takehiro Tawara Riken, 2-1 Hirosawa Wako-Shi, 351-0198 Saitama, Japan, takehirotwr@riken.jp
Chin-Hung Teng Department of Information and Communication, Yuan Ze University, Taiwan, 7533967@gmail.com
Bruce H. Thomas University of South Australia, Wearable Computer Laboratory, Mawson Lakes, Australia, bruce.thomas@unisa.edu.au
Anton Treskunov Samsung Information Systems America Inc. (SISA), San Jose, CA, USA, anton.t@sisa.samsung.com
Daniel Ulbricht Research, metaio GmbH, Munich, Germany
Dale Vincent Internal Medicine Program, Tripler Army Medical Center (TAMC), Honolulu HI, USA, dale.vincent@amedd.army.mil
Yongtian Wang Beijing Institute of Technology, Beijing, China, 6666@bit.edu.cn
Richard Wetzel Collaborative Virtual and Augmented Environments, Fraunhofer FIT, Schloss Birlinghoven, 53754 Sankt Augustin, Germany, richard.wetzel@fit.fraunhofer.de
Sean White Nokia Research Center, Santa Monica, CA, USA, sean.white@nokia.com

Ning Xi Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA, xin@egr.msu.edu
Ruiguo Yang Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI, USA
Adeel Zafar Volvo Car Corporation, Gothenburg, Sweden, azafar3@volvocars.com
J. Zhang Department of Mechanical Engineering, National University of Singapore, Singapore
Rui Zhang College of Optical Sciences, The University of Arizona, Tucson, Arizona, USA, hhua@optics.arizona.edu

Augmented Reality: An Overview
Julie Carmigniani and Borko Furht

1 Introduction

We define Augmented Reality (AR) as a real-time direct or indirect view of a physical real-world environment that has been enhanced/augmented by adding virtual computer-generated information to it [1]. AR is both interactive and registered in 3D, and it combines real and virtual objects. Milgram's Reality-Virtuality Continuum, defined by Paul Milgram and Fumio Kishino, spans between the real environment and the virtual environment, with Augmented Reality and Augmented Virtuality (AV) in between; AR is closer to the real world and AV is closer to a pure virtual environment, as seen in Fig. 1.1 [2].

Augmented Reality aims at simplifying the user's life by bringing virtual information not only to his immediate surroundings, but also to any indirect view of the real-world environment, such as a live video stream. AR enhances the user's perception of and interaction with the real world. While Virtual Reality (VR) technology, or Virtual Environment as Milgram calls it, completely immerses users in a synthetic world without seeing the real world, AR technology augments the sense of reality by superimposing virtual objects and cues upon the real world in real time. Note that, as Azuma et al. [3], we do not consider AR to be restricted to a particular type of display technology such as the head-mounted display (HMD), nor do we consider it to be limited to the sense of sight. AR can potentially apply to all senses, augmenting smell, touch and hearing as well. AR can also be used to augment or substitute users' missing senses by sensory substitution, such as augmenting the sight of blind users or users with poor vision by the use of audio cues, or augmenting hearing for deaf users by the use of visual cues.

J. Carmigniani (✉)
Department of Computer and Electrical Engineering and Computer Sciences, Florida Atlantic University, Boca Raton, Florida, USA
e-mail: jcarmign@fau.edu

B. Furht (ed.), Handbook of Augmented Reality, DOI 10.1007/978-1-4614-0064-6_1,
© Springer Science+Business Media, LLC 2011

Fig 1.1 Milgram’s

reality-virtuality

continuum [ 1 ]

Azuma et al [3] also considered AR applications that require removing real objects

from the environment, which are more commonly called mediated or diminished

reality, in addition to adding virtual objects Indeed, removing objects from the real

world corresponds to covering the object with virtual information that matches thebackground in order to give the user the impression that the object is not there.Virtual objects added to the real environment show information to the user that theuser cannot directly detect with his senses The information passed on by the virtualobject can help the user in performing daily-tasks work, such as guiding workersthrough electrical wires in an aircraft by displaying digital information through aheadset The information can also simply have an entertainment purpose, such asWikitude or other mobile augmented reality There are many other classes of ARapplications, such as medical visualization, entertainment, advertising, maintenanceand repair, annotation, robot path planning, etc

The first appearance of Augmented Reality (AR) dates back to the 1950s, when Morton Heilig, a cinematographer, thought of cinema as an activity that would have the ability to draw the viewer into the onscreen activity by taking in all the senses in an effective manner. In 1962, Heilig built a prototype of his vision, which he had described in 1955 in "The Cinema of the Future," named Sensorama, which predated digital computing [4]. Next, Ivan Sutherland invented the head-mounted display in 1966 (Fig. 1.2). In 1968, Sutherland was the first to create an augmented reality system using an optical see-through head-mounted display [5]. In 1975, Myron Krueger created the Videoplace, a room that allowed users to interact with virtual objects for the first time. Later, Tom Caudell and David Mizell from Boeing coined the phrase Augmented Reality while helping workers assemble wires and cable for an aircraft [1]. They also started discussing the advantages of Augmented Reality versus Virtual Reality (VR), such as requiring less power since fewer pixels are needed [5]. In the same year, L.B. Rosenberg developed one of the first functioning AR systems, called Virtual Fixtures, and demonstrated its benefit on human performance, while Steven Feiner, Blair MacIntyre and Doree Seligmann presented the first major paper on an AR system prototype named KARMA [1]. The reality-virtuality continuum seen in Fig. 1.1 was not defined until 1994, by Paul Milgram and Fumio Kishino, as a continuum that spans from the real environment to the virtual environment; AR and AV are located somewhere in between, with AR being closer to the real-world environment and AV being closer to the virtual environment.

In 1997, Ronald Azuma wrote the first survey in AR, providing a widely acknowledged definition of AR by identifying it as combining real and virtual environments while being both registered in 3D and interactive in real time [5]. The first outdoor mobile AR game, ARQuake, was developed by Bruce Thomas in 2000 and demonstrated during the International Symposium on Wearable Computers. In 2005, the Horizon Report [6] predicted that AR technologies would emerge more fully within the next 4–5 years; and, as if to confirm that prediction, camera systems that can analyze physical environments in real time and relate positions between objects and the environment were developed the same year. This type of camera system has become the basis for integrating virtual objects with reality in AR systems. In the following years, more and more AR applications were developed, especially mobile applications, such as the Wikitude AR Travel Guide launched in 2008, but also medical applications in 2007. Nowadays, with new advances in technology, an increasing number of AR systems and applications are produced, notably MIT's Sixth Sense prototype and the release of the iPad 2 and its successors and competitors, notably the Eee Pad, and the iPhone 4, which promise to revolutionize mobile AR.

3 Augmented Reality Technologies

Computer vision renders 3D virtual objects from the same viewpoint from which the images of the real scene are being taken by tracking cameras. Augmented reality image registration uses different methods of computer vision, mostly related to video tracking. These methods usually consist of two stages: tracking and reconstructing/recognizing. First, fiducial markers, optical images, or interest points are detected in the camera images. Tracking can make use of feature detection, edge detection, or other image processing methods to interpret the camera images. In computer vision, most of the available tracking techniques can be separated into two classes: feature-based and model-based [7]. Feature-based methods consist of discovering the connection between 2D image features and their 3D world frame coordinates [8]. Model-based methods make use of a model of the tracked objects' features, such as CAD models or 2D templates of the item based on distinguishable features [7]. Once a connection is made between the 2D image and the 3D world frame, it is possible to find the camera pose by projecting the 3D coordinates of the features into the observed 2D image coordinates and minimizing the distance to their corresponding 2D features. The constraints for camera pose estimation are most often determined using point features. The reconstructing/recognizing stage uses the data obtained from the first stage to reconstruct a real-world coordinate system. Assuming a calibrated camera and a perspective projection model, if a point has coordinates (x, y, z)^T in the coordinate frame of the camera, its projection onto the image plane is (x/z, y/z, 1)^T.

In point constraints, we have two principal coordinate systems, as illustrated in Fig. 1.3: the world coordinate system W and the 2D image coordinate system. Let p_i = (x_i, y_i, z_i)^T, where i = 1, ..., n, with n >= 3, be a set of 3D non-collinear reference points in the world coordinate frame, and let q_i = (x'_i, y'_i, z'_i)^T be the corresponding camera-space coordinates. The points p_i and q_i are related by the following transformation:

    q_i = R p_i + t,

where R and t are a rotation matrix and a translation vector, respectively.

Let the image point h_i = (u_i, v_i, 1)^T be the projection of p_i on the normalized image plane. The collinearity equations establishing the relationship between h_i and p_i using the camera pinhole are given by

    u_i = \frac{r_1^T p_i + t_x}{r_3^T p_i + t_z}, \qquad v_i = \frac{r_2^T p_i + t_y}{r_3^T p_i + t_z},

where r_1^T, r_2^T, r_3^T are the rows of R and t = (t_x, t_y, t_z)^T.

Fig. 1.3 Point constraints for the camera pose problem (adapted from [9])

The image space error gives a relationship between the 3D reference points, their corresponding 2D extracted image points, and the camera pose parameters, and corresponds to the point constraints [9]. The image space error is given as follows:

    E = \sum_{i=1}^{n} \left[ \left( \hat{u}_i - \frac{r_1^T p_i + t_x}{r_3^T p_i + t_z} \right)^2 + \left( \hat{v}_i - \frac{r_2^T p_i + t_y}{r_3^T p_i + t_z} \right)^2 \right],

where (\hat{u}_i, \hat{v}_i) are the observed image points.
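A compact numerical sketch of these point constraints is shown below: it evaluates the image-space error E for a candidate pose and, as a convenience, estimates the pose itself with OpenCV's solvePnP. The 3D points, observed pixels, and intrinsic matrix are made-up illustration values, not data from the chapter.

```python
# Image-space (reprojection) error for the camera pose problem, plus a
# pose estimate via OpenCV's solvePnP. Points and intrinsics are made up.
import cv2
import numpy as np

# n >= 3 non-collinear 3D reference points p_i in the world frame (meters).
p_world = np.array([[0.0, 0.0, 0.0],
                    [0.1, 0.0, 0.0],
                    [0.0, 0.1, 0.0],
                    [0.1, 0.1, 0.05]], dtype=np.float64)

# Observed image points (u_i, v_i) in pixels (placeholder measurements).
uv_obs = np.array([[320.0, 240.0],
                   [400.0, 238.0],
                   [322.0, 160.0],
                   [405.0, 150.0]], dtype=np.float64)

# Assumed intrinsic matrix of a calibrated pinhole camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def image_space_error(R, t, K, p_world, uv_obs):
    """E = sum_i ||(u_i, v_i) - proj(R p_i + t)||^2 (the point constraint)."""
    q = (R @ p_world.T).T + t            # camera-space coordinates q_i
    uv = (K @ q.T).T                     # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]          # divide by r_3^T p_i + t_z
    return float(np.sum((uv_obs - uv) ** 2))

# Estimate R, t from the 2D-3D correspondences (the PnP problem).
ok, rvec, tvec = cv2.solvePnP(p_world, uv_obs, K, None)
R, _ = cv2.Rodrigues(rvec)
print("reprojection error E =", image_space_error(R, tvec.ravel(), K, p_world, uv_obs))
```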

Some methods assume the presence of fiducial markers in the environment or an object with known 3D geometry, and make use of those data. Others have the scene's 3D structure pre-calculated beforehand, such as Huang et al.'s device AR-View [10]; however, the device will have to be stationary and its position known. If the entire scene is not known beforehand, the Simultaneous Localization And Mapping (SLAM) technique is used for mapping fiducial markers or 3D models' relative positions. In the case when no assumptions about the 3D geometry of the scene can be made, the Structure from Motion (SfM) method is used. SfM can be divided into two parts: feature point tracking and camera parameter estimation; a minimal two-view sketch follows.
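The sketch below illustrates the two SfM parts named above for the simplest case of two views: feature points matched between frames (for example with the ORB matcher shown earlier) are used to estimate the relative camera motion via the essential matrix. It assumes a calibrated camera with intrinsic matrix K and uses OpenCV; the function and variable names are illustrative, not from the chapter.

```python
# Two-view Structure-from-Motion sketch: recover relative camera motion
# from matched feature points, assuming a known intrinsic matrix K.
import cv2
import numpy as np

def two_view_sfm(pts1, pts2, K):
    """pts1, pts2: Nx2 arrays of matched pixel coordinates in frames 1 and 2."""
    # Estimate the essential matrix with RANSAC to reject bad matches.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    # Decompose E into the rotation R and unit-scale translation t that
    # take the camera from the first view to the second.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
    return R, t

# Example call (pts1, pts2 would come from the feature matcher shown earlier):
# R, t = two_view_sfm(pts_ref, pts_frm, K)
```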

Tracking methods in AR depend mostly on the type of environment the AR device will be introduced to, as well as on the type of AR system. The environment might be indoor, outdoor, or a combination of both. In the same way, the system might be mobile or static (have a fixed position). For example, if the AR device is a fixed-position device for an outdoor real environment, such as Huang et al.'s device AR-View [10], the developers can use mechanical tracking, since the movements to be tracked will all be mechanical, as the position of the device is known. This type of environment and system makes tracking of the environment for augmenting the surroundings easier. On the other hand, if the AR device is mobile and designed for an outdoor environment, tracking becomes much harder, and different techniques offer different advantages and disadvantages. For example, Nilsson et al. [11] built a pedestrian detection system for automotive collision avoidance using AR. Their system is mobile and outdoor. For a camera moving in an unknown environment, the problem for computer vision is to reconstruct both the motion of the camera and the structure of the scene using the image and additional sensor data sequences. In this case, since no assumption about the 3D geometry of the scene can be made, the SfM method is used for reconstructing the scene.

Developers also have the choice to make use of existing AR libraries, such as ARToolKit. ARToolKit, which was developed in 1999 by Hirokazu Kato from the Nara Institute of Science and Technology and was released by the University of Washington HIT Lab, is a computer vision tracking library that allows the user to create augmented reality applications [12]. It uses video tracking capabilities to calculate in real time the real camera position and orientation relative to physical markers. Once the real camera position is known, a virtual camera can be placed at the same exact position and a 3D computer graphics model can be drawn to overlay the markers. The extended version of ARToolKit is ARToolKitPlus, which added many features over ARToolKit, notably class-based APIs; however, it is no longer being developed and already has a successor: Studierstube Tracker.
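The marker-based pipeline that ARToolKit popularized (detect a square marker, estimate the camera pose relative to it, then render the virtual content with a matching virtual camera) can be sketched as below. This is not the ARToolKit API: the sketch stands in OpenCV's older functional ArUco module (present up to roughly OpenCV 4.6) for the marker detector, and the intrinsics, marker size, camera index, and render step are all assumptions.

```python
# Marker-based AR tracking loop sketch (OpenCV ArUco as a stand-in detector,
# not the ARToolKit API). Intrinsics, marker size, and render() are assumed.
import cv2
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])  # assumed intrinsics
dist = np.zeros(5)                                             # assume no distortion
marker_len = 0.05                                              # marker side in meters
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def render(frame, rvec, tvec):
    # Placeholder for the graphics step: a real system would set the virtual
    # camera from (rvec, tvec) and draw the 3D model over the marker.
    cv2.drawFrameAxes(frame, K, dist, rvec, tvec, marker_len * 0.5)

cap = cv2.VideoCapture(0)   # live camera; index 0 is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        # Pose of each detected marker relative to the camera.
        rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
            corners, marker_len, K, dist)
        for rvec, tvec in zip(rvecs, tvecs):
            render(frame, rvec, tvec)
    cv2.imshow("AR", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```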

Studierstube Tracker’s concepts are very similar to ARToolKitPlus; however, itscode base is completely different and it is not an open source, thus not availablefor download It supports mobile phone, with Studierstube ES, as well as PCs,making its memory requirements very low (100KB or 5–10% of ARToolKitPlus)and processing very fast (about twice as fast as ARToolKitPlus on mobile phonesand about 1 ms per frame on a PC) [13] Studierstube Tracker is highly modular;developers can extend it in anyway by creating new features for it When firstpresenting Studierstube in [13], the designers had in mind a user interface that

“uses collaborative augmented reality to bridge multiple user interface dimensions:Multiple users, contexts, and locales as well as applications, 3D-windows, hosts,display platforms, and operating systems.” More information about Studierstubecan be found at [13–15]

Although visual tracking now has the ability to recognize and track a lot of things, it mostly relies on other techniques such as GPS and accelerometers. For example, it is very hard for a computer to detect and recognize a car. The surface of most cars is both shiny and smooth, and most of the feature points come from reflections and thus are not relevant for pose estimation, and sometimes not even for recognition [16]. The few stable features that one can hope to recognize, such as the window corners or wheels, are extremely difficult to match due to reflection and transparent parts. While this example is a bit extreme, it shows the difficulties and challenges faced by computer vision with most objects that have irregular shapes, such as food, flowers, and most objects of art.

A recent approach for advancing visual tracking has been to study how the human brain recognizes objects, also called the Human Vision System (HVS), as it is possible for humans to recognize an infinite number of objects and persons in fractions of a second. If the way the human brain recognizes things can be modeled, computer vision will be able to handle the challenges it is currently facing and keep moving forward.

Fig. 1.5 Handheld displays (from [18])

[…] both the "real part" of the augmented scene and the virtual objects with unmatched resolution, while the optical see-through employs a half-silver mirror technology to allow views of the physical world to pass through the lens and graphically overlay information to be reflected in the user's eyes. The scene, as well as the real world, is perceived more naturally than at the resolution of the display. On the other hand, in video see-through systems, the augmented view is already composed by the computer, which allows much more control over the result. Thus, control over the timing of the real scene can be achieved by synchronizing the virtual image with the scene before displaying it, while in an optical see-through application the view of the real world cannot be delayed, so the time lag introduced in the system by the graphics and image processing is perceived by the user. This results in images that may not appear "attached" to the real objects they are supposed to correspond to; they appear to be unstable, jittering, or swimming around.

Handheld displays employ small computing devices with a display that the user can hold in their hands (Fig. 1.5). They use video see-through techniques to overlay graphics onto the real environment, and employ sensors such as digital compasses and GPS units for their six-degree-of-freedom tracking, fiducial marker systems such as ARToolKit, and/or computer vision methods such as SLAM. There are currently three distinct classes of commercially available handheld displays being used for augmented reality systems: smartphones, PDAs, and Tablet PCs [18]. Smartphones are extremely portable and widespread, and with recent advances present a combination of a powerful CPU, camera, accelerometer, GPS, and solid state compass, making them a very promising platform for AR. However, their small display size is less than ideal for 3D user interfaces. PDAs present much of the same advantages and disadvantages as smartphones, but they are becoming a lot less widespread than smartphones since the most recent advances with Android-based phones and iPhones. Tablet PCs are a lot more powerful than smartphones, but they are considerably more expensive and too heavy for single-handed, and even prolonged two-handed, use. However, with the recent release of the iPad, we believe that Tablet PCs could become a promising platform for handheld AR displays.

Spatial Augmented Reality (SAR) makes use of video projectors, optical elements, holograms, radio frequency tags, and other tracking technologies to display graphical information directly onto physical objects without requiring the user to wear or carry the display (Fig. 1.6) [19]. Spatial displays separate most of the technology from the user and integrate it into the environment. This permits SAR to naturally scale up to groups of users, thus allowing collaboration between users and increasing the interest in such augmented reality systems in universities, labs, museums, and the art community. There exist three different approaches to SAR, which mainly differ in the way they augment the environment: video see-through, optical see-through, and direct augmentation. In SAR, video see-through displays are screen based; they are a common technique used if the system does not have to be mobile, as they are cost efficient since only off-the-shelf hardware components and standard PC equipment are required. Spatial optical see-through displays generate images that are aligned within the physical environment. Spatial optical combiners, such as planar or curved mirror beam splitters, transparent screens, or optical holograms, are essential components of such displays [19]. However, much like screen-based video see-through, spatial optical see-through does not support mobile applications due to spatially aligned optics and display technology. Finally, projector-based spatial displays apply front-projection to seamlessly project images directly onto physical objects' surfaces, such as in [20]. More details about SAR can be read in [19]. Table 1.1 shows a comparison of the different display techniques for augmented reality.

3.2.2 Input Devices

There are many types of input devices for AR systems. Some systems, such as Reitmayr et al.'s mobile augmented system [17], utilize gloves. Others, such as ReachMedia [22], use a wireless wristband. In the case of smartphones, the phone itself can be used as a pointing device; for example, Google Sky Map on an Android phone requires the user to point his/her phone in the direction of the stars or planets s/he wishes to know the name of. The input devices chosen depend greatly upon the type of application the system is being developed for and/or the display chosen. For instance, if an application requires the user to be hands free, the input device chosen will be one that enables the user to use his/her hands for the application without requiring extra unnatural gestures or having to be held; examples of such input devices include gaze interaction in [23] or the wireless wristband used in [22]. Similarly, if a system makes use of a handheld display, the developers can utilize a touch screen input device.

3.2.3 Tracking

Tracking devices consist of digital cameras and/or other optical sensors, GPS, accelerometers, solid state compasses, wireless sensors, etc. Each of these technologies has a different level of accuracy and depends greatly on the type of system being developed. In [24], the authors identified the general tracking technologies for augmented reality to be: mechanical, magnetic sensing, GPS, ultrasonic, inertial, and optics. In [25], the authors use a comparison from DiVerdi [26] based on range, setup, resolution, time, and environment. We further adopted their comparison method in this survey in Table 1.2.
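Because each of these sensors drifts or is noisy in a different way, practical trackers usually fuse several of them. The minimal sketch below, a generic complementary filter rather than a method from this chapter, combines a gyroscope rate with an accelerometer tilt estimate to track one orientation angle; the sample data and mixing constant are assumptions.

```python
# Generic complementary-filter sketch: fuse a gyroscope rate (fast but
# drifting) with an accelerometer tilt angle (noisy but drift-free) to
# track a single orientation angle. Sample data and alpha are assumptions.
import math

def accel_pitch(ax, ay, az):
    """Pitch angle (rad) implied by the gravity vector measured by the accelerometer."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

def fuse(angle, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """One filter step: integrate the gyro, then pull toward the accelerometer."""
    gyro_estimate = angle + gyro_rate * dt
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch(ax, ay, az)

# Toy usage: a stream of (gyro_rate [rad/s], ax, ay, az [g]) samples at 100 Hz.
samples = [(0.01, 0.0, 0.02, 0.98)] * 100
angle = 0.0
for gyro_rate, ax, ay, az in samples:
    angle = fuse(angle, gyro_rate, ax, ay, az, dt=0.01)
print("estimated pitch (rad):", round(angle, 4))
```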

3.2.4 Computers

AR systems require a powerful CPU and a considerable amount of RAM to process camera images. So far, mobile computing systems have employed a laptop in a backpack configuration, but with the rise of smartphone technology and the iPad, we can hope to see this backpack configuration replaced by a lighter and more sophisticated looking system. Stationary systems can use a traditional workstation with a powerful graphics card.

Table 1.2 Comparison of common tracking technologies (adapted from Papagiannakis et al. [25] and DiVerdi et al. [26]). Range: size of the region that can be tracked within. Setup: amount of time for instrumentation and calibration. Precision: granularity of a single output position. Time: duration for which useful tracking data is returned (before it drifts too much). Environment: where the tracker can be used, indoors or outdoors.

Technology | Range (m) | Setup time (hr) | Precision (mm) | Time (s) | Environment

One of the most important aspects of augmented reality is to create appropriate techniques for intuitive interaction between the user and the virtual content of AR applications. There are four main ways of interaction in AR applications: tangible AR interfaces, collaborative AR interfaces, hybrid AR interfaces, and the emerging multimodal interfaces.

3.3.1 Tangible AR Interfaces

Tangible interfaces support direct interaction with the real world by exploiting the use of real, physical objects and tools. A classical example of the power of tangible user interfaces is the VOMAR application developed by Kato et al. [27], which enables a person to select and rearrange the furniture in an AR living room design application by using a real, physical paddle. Paddle motions are mapped to intuitive gesture-based commands, such as "scooping up" an object to select it for movement or hitting an item to make it disappear, in order to provide the user with an intuitive experience.

[…] there can be more than one possible mapping to actions or information, and different people from different places, age groups, and cultures attach different meanings to different objects. So although such a system might seem rather simple to use, it opens the door to a main problem in user interfaces: showing the user how to utilize the real objects for interacting with the system. White et al.'s [29] solution was to provide virtual visual hints on the real object showing how it should be moved.

Other examples of tangible AR interaction include the use of gloves or wristbands, such as in [22] and [30].

3.3.2 Collaborative AR Interfaces

Collaborative AR interfaces include the use of multiple displays to support remote and co-located activities. Co-located sharing uses 3D interfaces to improve the physical collaborative workspace. In remote sharing, AR is able to effortlessly integrate multiple devices at multiple locations to enhance teleconferences.

An example of co-located collaboration can be seen with Studierstube [13–15]. When first presenting Studierstube in [13], the designers had in mind a user interface that "uses collaborative augmented reality to bridge multiple user interface dimensions: Multiple users, contexts, and locales as well as applications, 3D-windows, hosts, display platforms, and operating systems."

Remote sharing can be used for enhancing teleconferences, such as in [31]. Such interfaces can be integrated with medical applications for performing diagnostics, surgery, or even maintenance routines.

3.3.3 Hybrid AR Interfaces

Hybrid interfaces combine an assortment of different, but complementary, interfaces as well as the possibility to interact through a wide range of interaction devices [7]. They provide a flexible platform for unplanned, everyday interaction where it is not known in advance which type of interaction display or devices will be used. In [32], Sandor et al. developed a hybrid user interface using a head-tracked, see-through, head-worn display to overlay augmented reality and provide both visual and auditory feedback. Their AR system is then implemented to support end users in assigning physical interaction devices to operations as well as to virtual objects on which to perform those operations, and in reconfiguring the mappings between devices, objects and operations as the user interacts with the system.


3.3.4 Multimodal AR Interfaces

Multimodal interfaces combine real-object input with naturally occurring forms of language and behavior such as speech, touch, natural hand gestures, or gaze. These types of interfaces have emerged more recently. Examples include MIT's Sixth Sense [20] wearable gestural interface, called WUW. WUW brings the user information projected onto surfaces, walls, and physical objects through natural hand gestures, arm movements, and/or interaction with the object itself. Another example of multimodal interaction is the work from Lee et al. [23], which makes use of gaze and blink to interact with objects. This type of interaction is now being intensively developed and is sure to be one of the preferred types of interaction for future augmented reality applications, as such interfaces offer a relatively robust, efficient, expressive, and highly mobile form of human-computer interaction that represents the users' preferred interaction style. They support users' ability to flexibly combine modalities or to switch from one input mode to another depending on the task or setting. In addition, multimodal interfaces offer the freedom to choose which mode of interaction the user prefers depending on the context, i.e., public place, museum, library, etc. This freedom to choose the mode of interaction is crucial to the wider acceptance of pervasive systems in public places [75].

Augmented reality systems can be divided into five categories: fixed indoor systems, fixed outdoor systems, mobile indoor systems, mobile outdoor systems, and mobile indoor and outdoor systems. We define a mobile system as a system that allows the user movements that are not constrained to one room, and thus allows the user to move through the use of a wireless system. Fixed systems cannot be moved around, and the user must use them wherever they are set up, without the flexibility to move unless the whole system setup is relocated. The choice of the type of system to be built is the first choice the developers must make, as it will help them in deciding which type of tracking system, display, and possibly interface they should use. For instance, fixed systems will not make use of GPS tracking, while outdoor mobile systems will. In [25], the authors conducted a study of different AR systems. We conducted a similar study using 25 papers that were classified according to their type of system, and determined what the tracking techniques, display type, and interfaces were for each. Tables 1.3 and 1.4 show the results of the study, and Table 1.5 gives the meaning of the abbreviations used in Tables 1.3 and 1.4.

The papers used for the study were all published between 2002 and 2010, with a majority (17 out of 25) published between 2005 and 2010. Note that among the mobile indoor and outdoor systems, one of the systems studied (Costanza's eye-q [49]) does not use any tracking technique, while others use multiple types of tracking techniques. This is due to the fact that this system was […]

