Handbook of Multimedia for Digital Entertainment and Arts - P17

Projector-Camera Systems in Entertainment and Art

Physically Viewing Interaction

By projecting images directly onto everyday surfaces, a projector-camera system may be used for creating augmentation effects, such as virtually painting the object surface with a new color, a new texture, or even an animation. Users can interact directly with such projector-based augmentations. For example, they may observe the object from different sides, while simultaneously experiencing consistent occlusion effects and depth, or they can move nearer to or further from the object to see local details and global views. Thus, the intuitiveness of physical interaction and the advantages of digital presentation are combined.

This kind of physically interactive visualization is suitable for situations in which virtual content is mapped as a texture onto real object surfaces. View-dependent visual effects, such as highlighting to simulate virtually shiny surfaces, require tracking of the user's view. Multi-user views can also be supported by time-multiplexing the projection for multiple users, with each user wearing synchronized shutter glasses that allow the selection of individual views. But this is only necessary for view-dependent augmentations. Furthermore, view tracking and stereoscopic presentation enable virtual objects to be displayed not only on the real surface, but also in front of or behind it. A general geometric framework that handles all these variants is described in [26].

The techniques described above only simulate the desired appearance of an augmented object, which is supposed to remain fixed in space. To make the projected content truly user-interactive, more information than viewpoint changes is required. After turning an ordinary surface into a display, it is desirable to further extend it into a user interface with an additional input channel. For this, cameras can be used for sensing. In contrast to other input technologies, such as the embedded electronics of touch screens, or the tracked wands, styluses, and data gloves often used in virtual environments, vision-based sensing technology has the flexibility to support different types of input techniques without modifying the display surface or equipping the users with different devices for different tasks. Differing from interaction with special projection screens, such as electronically enabled multi-touch or rear-projected screens, the primary issues associated with vision-based interaction with front-projected interfaces are the illumination of the detected hands and objects, as well as cast shadows.

In the following subsections, two typical types of interaction with spatial projector-camera systems will be introduced: near-distance interaction and far-distance interaction. Vision-based interaction techniques will be the main focus, and basic interaction operations, such as pointing, selecting, and manipulating, will be considered.

Near Distance Interaction

In near-distance situations, where the projection surface is within arm's length of the user, finger touches or hand gestures are intuitive ways to select and manipulate the interface. Apart from this, the manipulation of physical objects can also be detected and used for triggering interaction events.

Vision-based techniques may apply a visible-light or infrared camera to capture the projected surface area. To detect finger touches on a projected surface, a calibration process, similar to the geometric techniques presented in section "Geometric Image Correction", is needed to map corresponding pixels between projector and camera.
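For a planar surface, this pixel mapping reduces to a 3x3 homography. The following is a minimal sketch using OpenCV; the point correspondences are illustrative placeholders that would, in practice, be measured during the calibration step just described.

```python
import numpy as np
import cv2

# Corresponding pixels between the projected image and the camera view.
# Placeholder values: in practice they are measured by detecting projected
# calibration targets (e.g., checkerboard corners) in the camera image.
projector_pts = np.array([[0, 0], [1279, 0], [1279, 799], [0, 799]],
                         dtype=np.float32)
camera_pts = np.array([[102, 87], [1140, 95], [1105, 730], [130, 715]],
                      dtype=np.float32)

# For a planar projection surface, a 3x3 homography describes the mapping.
H_cam_to_proj, _ = cv2.findHomography(camera_pts, projector_pts)

def camera_to_projector(point):
    """Map a pixel detected in the camera image (e.g., a fingertip)
    to the corresponding pixel of the projected image."""
    src = np.array([[point]], dtype=np.float32)        # shape (1, 1, 2)
    return tuple(cv2.perspectiveTransform(src, H_cam_to_proj)[0, 0])

print(camera_to_projector((640, 400)))
```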

Next, fingers, hands, and objects need to be categorized as foreground in order to separate them from the projected surface background. When interactions take place on a front-projected surface, the hand is illuminated by the displayed images, and thus the appearance of a moving hand changes quickly. This renders segmentation methods based on skin color or region growing useless. Frequently, conventional background subtraction methods are also unreliable, since the skin color of a hand may become buried in the projected light.

One possible solution to this problem is to expand the capacity of background subtraction. Besides its application to an ideal projection screen that is assumed to differ sufficiently in color from skin, as in [27], background subtraction can also be extended to take different background and foreground reflectance factors into account. When the background changes significantly, the segmentation may fail. An image update can be applied to keep the segmentation robust, where an artificial background is generated from the known projector input image, with the geometric and color distortions between projector and camera corrected.
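A minimal sketch of this predicted-background idea follows. It assumes a simple per-channel linear color transfer (gain and bias from a prior radiometric calibration) and a known projector-to-camera homography; real systems use richer color models.

```python
import numpy as np
import cv2

def segment_foreground(camera_frame, projector_input, H_proj_to_cam,
                       color_gain, color_bias, threshold=35):
    """Label hands/objects in front of a front-projected display by
    subtracting a *predicted* background rather than a static one."""
    h, w = camera_frame.shape[:2]
    # Predict what the camera would see if nothing occluded the surface:
    # warp the known projector input into the camera view, then apply a
    # (simplifying) linear color transfer per channel.
    predicted = cv2.warpPerspective(projector_input, H_proj_to_cam, (w, h))
    predicted = np.clip(predicted.astype(np.float32) * color_gain + color_bias,
                        0, 255).astype(np.uint8)
    # Pixels deviating strongly from the prediction are foreground.
    diff = cv2.absdiff(camera_frame, predicted)
    mask = (diff.max(axis=2) > threshold).astype(np.uint8) * 255
    # Morphological opening removes small noise blobs.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```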

Another feasible solution is to detect the changing pixel area between the frames of the captured video to obtain a basic shape of the moving hand or object. Noise can then be removed using image morphology. Following this, a fingertip can be detected by convolving a fingertip-shaped template with the extracted image, as in [28].
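The following sketch shows the template idea with OpenCV's normalized cross-correlation; the circular template, its radius, and the acceptance threshold are illustrative assumptions rather than the parameters of [28].

```python
import numpy as np
import cv2

def detect_fingertip(foreground_mask, radius=8):
    """Locate a fingertip candidate in a binary foreground mask by sliding
    a fingertip-shaped (here: circular) template over it. Real detectors
    typically match against the contour/edge image as well, so that a
    large solid blob does not score as highly as an actual fingertip."""
    size = 2 * radius + 3
    template = np.zeros((size, size), dtype=np.uint8)
    cv2.circle(template, (size // 2, size // 2), radius, 255, -1)

    response = cv2.matchTemplate(foreground_mask, template,
                                 cv2.TM_CCORR_NORMED)
    _, score, _, loc = cv2.minMaxLoc(response)
    if score < 0.6:                      # empirical acceptance threshold
        return None
    return (loc[0] + size // 2, loc[1] + size // 2)  # fingertip center
```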

To avoid the complex and varying illumination problems of visible light, an infrared camera can be used instead, together with an infrared light source that produces an invisible shadow of the finger on the projected flat surface, as shown in [29]. The shadow of the finger can then be detected by the infrared camera and used to determine the finger region and fingertip. To enable screen interaction by finger touching, the state of the finger, either touching the surface or hovering above it, can further be determined by measuring the occlusion ratio of the finger shadow: when the finger touches the surface, its shadow is fully occluded by the finger itself; while the finger hovers over the surface, its shadow is larger.
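In code, the touch test then reduces to comparing the visible shadow area against the finger area. A sketch follows, with the segmentation of both regions and the threshold value left as assumptions.

```python
import cv2

def is_touching(finger_mask, shadow_mask, occlusion_ratio=0.2):
    """Classify touch vs. hover from an infrared view in which the finger
    region and its cast shadow have already been segmented (binary masks).
    Touching: the shadow hides almost entirely behind the finger, so the
    visible shadow area collapses. The 0.2 ratio is illustrative."""
    finger_area = cv2.countNonZero(finger_mask)
    if finger_area == 0:
        return False                     # no finger present
    shadow_area = cv2.countNonZero(shadow_mask)
    return shadow_area < occlusion_ratio * finger_area
```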

It is also possible to exclude the projected content from the captured video by interlacing the projected images and the captured camera frames using synchronized high-speed projectors and cameras, so that more general gesture recognition algorithms, such as those reviewed in [30], can be adopted. To obtain more robust detection results, specific vision hardware can also be utilized, such as real-time depth cameras based on the time-of-flight principle [31].


Far Distance Interaction

In situations where the projection surface is beyond the user's arm length, laser pointer interaction is an intuitive way to select and manipulate projected interface components. Recently, laser pointer interaction has been used for interacting with large-scale projection displays or tiled displays at a far distance [32].

To detect and track a laser dot on a projection surface in projector-camera systems, a calibrated camera covering the projection area is often used. The location and movement of a laser dot can be detected simply by applying an intensity threshold to the captured image, assuming that the laser dot is much brighter than the projection. Since the camera and the projector are both geometrically calibrated, the location of the laser dot in the camera image can be mapped to the corresponding pixels of the projection image. The "on" and "off" states of the laser pointer can be mapped to mouse click events for selecting particular operations. One or more virtual objects intersected by the laser dot, or by a corresponding laser ray, can further be calculated from the virtual scene geometry.
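A minimal sketch of this detection step, reusing the camera-to-projector homography from the calibration sketch above; the saturation threshold is an assumption.

```python
import numpy as np
import cv2

def detect_laser_dot(camera_frame, H_cam_to_proj, intensity_threshold=240):
    """Find the laser dot as the centroid of near-saturated pixels and map
    it into projector coordinates. Returns None while the pointer is off,
    which a caller can translate into mouse-up/mouse-down events."""
    gray = cv2.cvtColor(camera_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, intensity_threshold, 255,
                            cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                       # laser pointer is off
    centroid = np.array([[[m["m10"] / m["m00"], m["m01"] / m["m00"]]]],
                        dtype=np.float32)
    return tuple(cv2.perspectiveTransform(centroid, H_cam_to_proj)[0, 0])
```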

More events for laser pointer interaction can be triggered by temporal or spatial gestures, such as encircling, or simply by adding hardware to the laser pointers, such as buttons and embedded electronics for wireless communication. Multi-user laser pointer interaction can also be supported for large projection areas, provided that each user's laser pointer is distinguishable. This can be achieved by time-multiplexing the lasers or by using different laser colors or patterns. User studies have been carried out to provide optimized design parameters for laser pointer interaction [33].

Although laser pointing is an intuitive technique, it also suffers from issues such as hand jitter, inaccuracy, and slow interaction speed. To overcome the hand-jitter problem, which is compounded at greater distances, filtering-based smoothing techniques can be applied, though they may lead to a discrepancy between the pointing laser dot and the estimated location. Infrared laser pointers may solve this problem, but according to user study results, visible laser light is still found to be better for interaction.
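As an illustration of such filtering, here is a simple exponential smoother; heavier-weight choices such as Kalman or One-Euro filters follow the same update pattern. The smoothing factor is an assumption to be tuned against the lag (i.e., the dot-to-estimate discrepancy) it introduces.

```python
class JitterFilter:
    """Exponentially smooth tracked laser-dot positions. Smaller alpha
    means stronger smoothing but more lag, i.e., a larger discrepancy
    between the real dot and the estimate."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, measurement):
        """Feed one (x, y) measurement; returns the smoothed position."""
        if self.state is None:
            self.state = measurement
        else:
            self.state = tuple(
                self.alpha * m + (1.0 - self.alpha) * s
                for m, s in zip(measurement, self.state))
        return self.state
```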

Apart from laser pointing, other tools, such as a tracked stylus or specially designed passive vision wands [34] tracked by a camera, have proven to be flexible and efficient when interacting with large-scale projection displays over distances. Gesture recognition provides a natural way of interacting over greater distances without using specific tools. It is mainly based on gesture pattern recognition, with or without hand model reconstruction. Evaluating body motions is also an intuitive way of large-scale interaction, where the body pose and motion are estimated and behavior patterns may further be detected. When gestures and body motions are the dominant modes of interaction with projector-camera systems, shadows and varying illumination conditions are the main challenges, though shadows can also be utilized for detecting gestures or body motion.

In gesture or body interaction, background subtraction is often used for detecting the moving body from the difference between the current frame and a reference background image. The background reference image must be regularly updated so as to adapt to the varying luminance conditions and geometry settings. More complex models have extended the concept of background subtraction beyond its literal meaning. A thorough review of background extraction methods is presented in [35].

Vision-based human action recognition approaches can generally be divided into four phases. The model initialization phase ensures that a system commences its operation with a correct interpretation of the current scene. The tracking phase segments and tracks the human bodies in each camera frame. The pose estimation phase estimates the pose of the users in one or more frames. The recognition phase recognizes the identity of individuals as well as the actions, activities, and behaviors performed by one or more users. Details about video-based human action detection techniques are reviewed in [36].
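The four phases map naturally onto a pipeline skeleton like the following; the method bodies are deliberately left as stubs, since each phase can be filled with any of the techniques surveyed in [36].

```python
class ActionRecognitionPipeline:
    """Structural sketch of the four-phase decomposition described above.
    Every method is a stub to be replaced by a concrete technique."""

    def initialize(self, first_frames):
        """Model initialization: build a correct interpretation of the
        current scene (background model, calibration, body models)."""
        self.scene_model = {}

    def track(self, frame):
        """Tracking: segment and follow human bodies in this frame."""
        return []                         # list of per-person tracks

    def estimate_pose(self, track):
        """Pose estimation over one or more frames of a track."""
        return None                       # e.g., a joint configuration

    def recognize(self, poses):
        """Recognition: identities, actions, activities, behaviors."""
        return []
```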

Interaction with Handheld Projectors

Hand-held projectors may display images on surfaces anywhere and at any time while they are being moved by the user. This is especially useful for mobile projector-based augmentation, which superimposes digital information onto physical environments. Unlike other mobile displays, such as those provided by PDAs or mobile phones, hand-held projectors offer a consistent visual combination of real information gathered from physical surfaces with virtual information. This is possible without context switching between information space and real space, thus seamlessly blurring the virtual and the real world. They can be used, for instance, as interactive information flashlights [37], displaying registered image content on the surface portions that are illuminated by the projector.

Although hand-held projectors provide great flexibility for ubiquitous computing and spontaneous interaction, there are fundamental issues to be addressed before a fluid interaction between the user and the projector is possible. When a hand-held projector is used to display on various surfaces in a real environment, the projected image will be dynamically modulated and distorted by the surfaces as the user moves. When the user stops moving the projector, the presented image still suffers from shaking caused by the user's unavoidable hand jitter. Thus, a basic requirement for hand-held projector interaction is to produce a stable projection.

Image Stabilizing

One often desired form of image stabilization is to produce a rectangular 2D image on a planar surface, independently of the projector's actual pose and movement. In this case, the projected image must be continuously warped to keep the correct aspect ratio and to remain undistorted. The warping process is similar to the geometric correction techniques described earlier. The difference, however, is that the target viewing perspective usually points towards the projection surface along its normal direction, while the position of the hand-held projector may keep changing.

To find the geometric mapping between the projector and the target perspective, the projector's six degrees of freedom may be obtained from an attached tracking device. A homography adequately represents this geometric mapping when the projection surface is planar. Instead of using the four detected vertices of the visible projection area to calculate the homography matrix, another practical technique is to identify laser spots displayed by laser pointers that are attached to the projector-camera system. The laser spots are brighter and therefore easier to detect.
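Once the four corner positions are known, the per-frame pre-warp is a single perspective transform. The sketch below assumes the corner targets in the projector framebuffer have already been derived from the tracked pose or the laser spots:

```python
import numpy as np
import cv2

def stabilize_frame(frame, corner_targets, projector_size=(1280, 800)):
    """Pre-warp one content frame so that it appears on the wall as an
    upright, correctly proportioned rectangle. `corner_targets` holds the
    four framebuffer positions (TL, TR, BR, BL) into which the frame's
    corners must be squeezed; computing them from the projector pose is
    outside this sketch."""
    fh, fw = frame.shape[:2]
    src = np.array([[0, 0], [fw - 1, 0], [fw - 1, fh - 1], [0, fh - 1]],
                   dtype=np.float32)
    H = cv2.getPerspectiveTransform(src, corner_targets.astype(np.float32))
    # Rendering the warped frame through the projector cancels the
    # keystone distortion introduced by the projector's oblique pose.
    return cv2.warpPerspective(frame, H, projector_size)
```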

In [38], hand jitter was compensated together with the geometric distortion by continuously tracking the projector's pose and warping the image at each time step. A camera attached to the projector detects visual markers on the projection surface, which are used for warping the projected image accordingly. In [42], a similar stabilization approach is described. Here, the projector pose relative to the display surface is recovered up to an unknown translation in the display plane.

Selection and Manipulation

Based on the display and direct pointing abilities described above, mouse-like interaction can be emulated, such as selecting a menu item or performing a cut-and-paste operation, by pointing the cursor at the projected area and pressing buttons mounted on the projector. However, in this scenario a hand-jitter problem, similar to that of laser pointer interaction, also exists, making it difficult to locate the cursor in specific and small areas. The jitter problem is intensified when cursor pointing is combined with mouse button-pressing operations. Adopting specially designed interaction techniques, rather than emulating common desktop GUI methods, can alleviate this problem.


One proven and efficient interaction technique for hand-held projectors is the crossing-based widget technique [37]. A crossing-based widget is operated by moving the cursor across the widget in a specific direction (e.g., from outside to inside, or from top to bottom) while holding the mouse button. This technique avoids pointing the cursor and pressing a button at the same time. Crossing widgets can be used with hand-held projectors to support commonly used desktop GUI elements, such as menus and sliders. Crossing-based menu items can be activated by crossing from one direction and deactivated by crossing from the opposite direction. All actions are executed by releasing the mouse button. Different colors can be used to indicate the crossing directions. Hierarchical menus can also be supported. Similarly, the crossing-based slider is activated by crossing the interface in one direction, deactivated by crossing it in the opposite direction, and adjusted according to the cursor movement parallel to the slider.
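The core of such a widget is a per-sample edge-crossing test. A deliberately minimal sketch for a vertical edge follows; real widgets would also check the button state and the edge's extent:

```python
def crossing_direction(prev_pos, cur_pos, edge_x):
    """Report whether the cursor crossed the vertical edge at x = edge_x
    between two consecutive samples, and in which direction. Activation
    and deactivation are bound to opposite directions, matching the
    crossing-based menu behavior described above."""
    if prev_pos[0] < edge_x <= cur_pos[0]:
        return "left_to_right"    # e.g., activate the menu item
    if cur_pos[0] < edge_x <= prev_pos[0]:
        return "right_to_left"    # e.g., deactivate it again
    return None                   # no crossing this sample
```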

Another specially designed interaction technique, called the zoom-and-pick widget, was proposed in [39]. It was designed to provide simultaneously stable high-resolution visualization and pixel-accurate pointing for hand-held projectors. The widget is basically a square magnification area located around the current pointing cursor position. A circular dead zone is defined within this area. The center of the dead zone is treated as an interaction hot-spot. The widget remains static while the pointing cursor moves within the dead zone. To gain pixel-accurate pointing ability, a rim is defined around the dead zone. Each crossing of the cursor from the dead zone into the rim triggers a single-pixel movement of the widget in the direction of the pointer movement. If the pointer moves beyond the dead zone and the rim, the widget is relocated to include the pointer in its dead zone again.
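That three-zone behavior can be condensed into one update rule, sketched below with illustrative radii:

```python
import math

def update_widget(center, pointer, dead_radius=40, rim_width=12):
    """One update step of a zoom-and-pick style widget. Inside the dead
    zone the widget stays put (stable magnification); entering the rim
    nudges it by exactly one pixel toward the pointer (pixel-accurate
    pointing); beyond the rim it re-centers on the pointer."""
    dx, dy = pointer[0] - center[0], pointer[1] - center[1]
    dist = math.hypot(dx, dy)
    if dist <= dead_radius:
        return center                              # no movement
    if dist <= dead_radius + rim_width:
        return (center[0] + round(dx / dist),      # one-pixel step
                center[1] + round(dy / dist))
    return pointer                                 # jump: re-center
```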

Multi-user Interaction

Hand-held projectors also pose new opportunities and challenges for multi-user interaction. In contrast to other multi-user devices, such as tabletop displays, which are primarily used for sharing information with others, or other mobile devices, such as personal mobile phones, hand-held projectors, due to their portability and personal usage, are suitable for both shared and individual use. Multiple hand-held projectors combine the advantages of public and personal display systems.

The main issues associated with multi-user interaction and hand-held projectors primarily concern the design of ownership, privacy control, sharing, and so on. The name of the owner of a displayed object can be represented by specially designed label widgets placed on the object and operated using crossing-based operations. The overlap of two or more cursors can signify consent from multiple users to accomplish a collaborative interactive task, such as copying a file or blending two images between the users. Snapping and docking actions can be performed by multiple users in order to quickly view or modify connected information between multiple objects. Multiple displayed images from more than one user can be blended directly or semantically. By displaying high-resolution images when the user moves closer to the display surface, a focus-and-context experience can be achieved that provides refined local details. More details can be found in [40].

Environment Awareness

Due to their portability, hand-held projectors are mainly used spontaneously. Therefore, it is desirable to enhance hand-held projectors with environment awareness abilities. Geometric and photometric measurement, as well as object recognition and tracking capacities, would enable the projector to sense and respond to the environment accordingly.

Geometric and photometric awareness can be implemented using, for example, structured light techniques, as described in section "Structured Light Scanning". For object recognition and tracking, the use of passive fiducial markers (e.g., supported by open source computer vision toolkits such as ARToolkit [41]) is a cheap solution. However, such markers are not visually attractive, may disturb the appearance of the object, and may fail as a result of occlusion or low illumination. Unpowered passive RFID tags can be detected via a radio frequency reader without being visible. They represent another inexpensive solution for object identification; however, they do not support pose tracking. The combination of RFID tags with photo-sensors, called RFIG, has been developed in order to obtain both object identification and object position. The detection of the object position is implemented by projecting Gray codes onto the photo-sensors. In this way, the Gray code is sensed by each photo-sensor and allows computing the projection of the sensors onto the projector image plane, which consequently enables projector registration. More details about RFIG can be found in [42].
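To make the Gray-code step concrete, here is a small sketch of the encoding and of how a photo-sensor's bright/dark readings decode back into a projector column; sensor readout and synchronization are abstracted away.

```python
def gray_code(n):
    """Binary-reflected Gray code of integer n."""
    return n ^ (n >> 1)

def inverse_gray(g):
    """Recover the integer whose Gray code is g."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def column_from_bits(bits):
    """The projector shows one black/white stripe pattern per bit. Each
    photo-sensor records 'bright or dark' for every pattern; the bit
    string (most-significant bit first) is the Gray code of the sensor's
    column in the projector image."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    return inverse_gray(g)

assert gray_code(4) == 0b110
assert column_from_bits([1, 1, 0]) == 4
```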

Interaction Design and Paradigm

In the sections above, techniques for human interaction with different configurations of projector-camera systems were presented. This subsection introduces higher-level concepts and methods for interaction design and interaction paradigms for such devices. Alternative configurations, such as steerable projectors and moveable surfaces, will also be discussed briefly.

Projector-based systems for displaying virtual environments assume high quality, a large field of view, and continuous display areas, which often evoke feelings of immersion and presence, and provide continuous interaction spaces. In contrast, spatial projector-camera systems that display on everyday surfaces may produce blended and warped images of average quality with a cropped field of view. The cropped view occurs as a result of the constricted display area, discontinuous images at different depth levels, and surfaces with different modulation properties. Due to these discrepancies, it is not always possible to directly adopt interaction techniques from immersive virtual environments or from conventional augmented reality applications.


For example, moving a virtual object using the pointing-and-drag technique, which is often adopted in virtual environments, may not be the preferred method in a projector-based augmented environment, since the appearance of the virtual object may vary drastically as it is moved and displayed on discontinuous surfaces with different depths and material properties. Instead, grasp-and-drop techniques may be better suited to this situation, as discussed in [43].

Furthermore, the distance between the user and the display surface is important for designing and selecting interaction techniques. It was expected that pointing interaction would be more suitable for manipulating far-distance objects, while touching would be suitable for near-distance objects. However, contradictory findings, derived from user studies of interaction with projector-camera systems for an augmented workspace [43], have proven otherwise. Users were found to be unwilling to touch the physical surfaces, even at close range, after they had learned distance gestures such as pointing. Instead, users frequently continued using the pointing method, even for surfaces located in close proximity to them. The reason for this behavior may be two-fold. Firstly, users may prefer to use a consistent technique for manipulation, such as pointing. Secondly, it seems that the appearance and materials of the surfaces affect the users' willingness to interact with them [44].

Several interaction paradigms have been introduced with or for projector-camera systems. Tangible user interfaces were developed to manipulate projected content using physical tangible objects [45]. Vision-based implicit interaction techniques have also been applied to support subtle and persuasive display concepts derived from ubiquitous computing [46]. The peephole paradigm describes the projected display as a peephole onto the physical environment [47]. Varying bubble-like free-form shapes of the projected area, based on the environment, enable a new interface that moves beyond regular fixed display boundaries [48].

Besides hand-held projectors, which enable ubiquitous display, steerable projectors also bring new interaction concepts, such as everywhere displays. Such systems enable projections onto different surfaces in a room and turn them into interaction interfaces. The best way to control a steerable projector during the interaction, however, still needs to be determined. Body tracking can be combined with steerable projections to produce a paradigm called the user-following display [49], where the user's position and pose are tracked. Projection surfaces are then dynamically selected and modulated accordingly, based on a measured and maintained three-dimensional model of the surfaces in the room. Alternatively, laser pointers can be used and tracked by a pan/tilt/zoom camera to control and interact with a steerable projector unit [50]. Another issue for interaction with steerable projectors is the question of how to support dynamic interfaces which can change form and location on the fly. A vision-based approach can solve this problem by decoupling the interface specification from its location in space and in the camera image [51].

Besides the projectors themselves, projection surfaces might also be moveable rather than remaining static in the environment. They may be rigidly moveable flat screens, semi-rigidly foldable objects such as a fan or an umbrella, or deformable objects such as paper and cloth. Moveable projection surfaces can provide novel interfaces and enable unique interaction paradigms, such as foldable displays or organic user interfaces [52]. Tracking the pose or deformation of such surfaces, however, is an issue that still needs to be addressed. Cheap hardware trackers have recently been used to support semi-rigid surfaces [53]. Vision-based deformation detection algorithms may be useful in future for supporting deformable display surfaces.

Application Examples

The basic visualization and interaction techniques presented in the sections above enable a variety of new applications in different domains. In general, projector-camera systems can be applied to interactive or non-interactive visual presentations in situations where the application of projection screens is not possible or not desired. Several examples are outlined below.

Embedded Multimedia Presentations

Many historic sites, such as castles, caves, or churches, are open to the public. Flat panel displays or projection screens are frequently used for presenting visual information. These screens, however, are permanently installed features and unnecessarily cover a certain amount of space. They cannot be temporarily disassembled to give the visitors an authentic impression of the environment's ambience when required.

Being able to project undistorted images onto arbitrary existing surfaces offers a potential solution to this problem. Projectors can display images that are much larger than the device itself. The images can be seamlessly embedded and turned off at any time to provide an unconstrained experience. For these reasons, projector-camera systems and image correction techniques are applied in several professional domains, such as historic sites, theater, festivals, museums, public screen presentations, advertisement displays, theme parks, and many others. Figure 2 illustrates two examples: a theater stage projection at the Karl-May Festival in Elspe (Germany), and an immersive panoramic projection onto the walls of the main tower of castle Osterburg in Weida (Germany). Both are used for displaying multimedia content which is alternately turned on and off during the main stage performance and the museum presentation, respectively. Other examples of professional applications can be found at www.vioso.com.

Superimposing Museum Artifacts

Projector-camera systems can also be used for superimposing museum artifacts with pictorial content. This helps to communicate information about the displayed objects more efficiently than secondary screens.


Fig. 2 Projection onto a physical stage setting (top), and 360-degree surround projection onto natural stone walls in a castle tower (bottom). Image courtesy: VIOSO GmbH, www.vioso.com

In this case, a precise registration of the projector-camera system is not only necessary to ensure an adequate image correction (e.g., geometric, photometric, and focus correction), but also for displaying visual content that is geometrically registered to the corresponding parts of the object.

Figure 3 illustrates two examples of superimposing visual content, such as color, text and image labels, interactive visualizations of magnifications and underdrawings, and visual highlights, onto replicas of a fossil (a primal horse displayed by Senckenberg Museum Frankfurt, Germany) and of paintings (Michelangelo's Creation of Adam, sanguine, and Pontormo's Joseph and Jacob in Egypt, oil on wood) [22].

In addition to augmenting arbitrary image content, it is also possible to boost the contrast of low-contrast objects, such as paintings whose colors have faded after long exposure to sunlight. The principal techniques describing how this can be achieved are explained in [19].

Spatial Augmented Reality

Projector-camera systems can not only acquire parameters that are necessary for image correction, but also higher-level information, such as the surrounding scene geometry. This, for instance, enables corrected projections of stereoscopic images onto real-world surfaces, which allows the augmentation of three-dimensional interactive content. Active stereoscopic shutter glasses and head-tracking technology support correct depth viewing of virtual content in precise alignment with the physical environment. This is a projector-based variation of what is referred to as spatial augmented reality [23]. In contrast to mobile augmented reality, the display technology for spatial augmented reality applications is not hand-held or head-worn, but fixed in the environment. This has several technological advantages, but also limits the applications to non-mobile ones.

Fig. 3 Fossil replica superimposed with projected color (top), and painting replicas augmented with interactive pictorial content (bottom) [22]

Figure 4 illustrates two projector-based spatial augmented reality examples. An architectural global lighting simulation is projected directly within the real environment, enabling a more realistic and immersive visualization than is possible with only a monitor. Stereoscopically projected game content can interact with real objects: a physical simulation of the virtual car allows realistic collisions with real items. This is possible through the scanned scene geometry, which also enables correct occlusion effects. Object recognition techniques applied to the acquired scene geometry and to the captured camera image enable the derivation of contextual information that is used in the game logic. Motorized pan-tilt projector-camera units allow using large parts of an entire room as a playground for such spatial augmented reality games. More information on spatial augmented reality can be found in [23]. A free e-book is available at www.SpatialAR.com.


Fig. 4 Examples of spatial augmented reality applications with projector-camera systems: an immersive in-place visualization of an architectural lighting simulation (left), and a stereoscopically projected spatial augmented reality game (right). Door, window, illumination, and the car are projected.

Flexible Digital Video Composition

Blue screens and chroma keying technology are essential for digital video composition. Professional studios apply tracking technology to record the camera path for perspective augmentations of original video footage. Although this technology is well established, it does not offer a great deal of flexibility.

For shootings at non-studio sets, physical blue screens can be installed, and takes might have to be recorded twice (with and without blue screens), or parts have to be re-recorded in a studio.

In addition, virtual studio technology itself still faces limitations. Chroma keying and studio illumination, for instance, are difficult to harmonize. Moderators or actors have to spend a fair amount of practice time before they can interact naturally with invisible virtual components. Spill on the foreground and disadvantageous foreground colors lead to low-quality or even unusable keying results.

Temporally synchronized projector-camera systems can be used to project corrected keying patterns and other spatial codes onto arbitrary diffuse (real-world) surfaces. Thereby, the reflectance of the underlying real surface is largely neutralized by applying the image correction techniques that have been explained above. The main difference from the application examples described so far is that the projector-camera systems are used for recording visual effects, and not for presenting corrected visual content directly to human observers.

A temporal multiplexing between projection (p-frames) and flash illumination (i-frames) allows capturing the fully lit scene, while still being able to key the foreground objects. This is illustrated in figure 5.

Fig. 5 VirtualStudio2Go: odd (i-) frames record the fully illuminated scene; even (p-) frames record the non-illuminated scene with projected images that neutralize the appearance of a real background surface and display code patterns. Repeating this at HD scanning speed (59.94 Hz) and registering both sub-frames during post-processing supports high-quality digital video composition effects in real (non-studio) environments.

Since the entire scene is recorded when physical blue screens do not block the view, the footage of the full background scene can be used for video composition. Thus, recordings need not be taken twice, and keying is invariant to foreground colors. In addition, other spatial codes can be embedded into the projected images to enable camera tracking, environment matting, and the display of in-place moderator information. Furthermore, the reconstruction of the scene geometry is implicitly supported, and allows special composition effects, such as shadow casts, occlusions, and reflections.
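The demultiplexing and keying step can be sketched as follows; the uniform key color, the tolerance, and the hard (unmatted) mask are simplifying assumptions, and synchronization and sub-frame registration are omitted.

```python
import numpy as np

def split_and_key(frames, key_color=(0, 255, 0), tol=60):
    """Demultiplex an interleaved capture: frames[0::2] are the fully lit
    i-frames, frames[1::2] the p-frames showing the projected keying
    pattern. Foreground pixels are those that do NOT show the key color
    in the p-frame; the paired i-frame supplies their fully lit look."""
    key = np.array(key_color, dtype=np.int16)
    composites = []
    for i_frame, p_frame in zip(frames[0::2], frames[1::2]):
        # Distance of every p-frame pixel from the projected key color.
        dist = np.linalg.norm(p_frame.astype(np.int16) - key, axis=2)
        fg_mask = (dist > tol).astype(np.uint8)     # 1 = foreground
        composites.append((i_frame, fg_mask))
    return composites
```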

A concept that combines all of these techniques into one single compact and portable system, which is fully compatible with common digital video composition pipelines and offers immediate plug-and-play applicability, is presented in [24]. It enables professional digital video composition effects in real indoor environments.

Interactive Attraction Installations

Today, the most popular applications of projector-camera systems are perhaps interactive attractions in public installations. By projecting interactive graphics onto everyday surfaces in public places, such as walls in museums, floors in shopping malls, subway tunnels, and even dining tables in restaurants, projector-camera systems emerge as an effective attraction tool, creating vivid interactive art, entertainment, and advertisement experiences for people. Vision-based sensing technology can be adopted in such interactive art systems to detect people's presence and activity in an unobtrusive way, implicitly engaging people with the artificially augmented environment through large-scale body motion, hand gesture, or finger touching interactions with the installations.

Figure 6 illustrates two projector-based interactive attraction installations. In the left example, a realistically rendered water pool is projected directly onto the ground of the Lou Dong Chinese Painting Museum at Tai Cang (China), with a physically built pool boundary, where the rendered water, lotuses, and fish in the pool are all responsive to the visitors who step into it. Ripple effects in the water, blooming lotuses, and escaping fish have been rendered in this way [54]. A Chinese painting is mapped as a texture onto the ground of the pool to complement the artwork. Since this system has been installed in the museum, its realistic appearance and vivid interactivity have attracted many visitors, especially young children who have little contact with traditional culture.

Fig. 6 Examples of interactive attraction installations: an interactive water pool installed in a traditional art museum (left), and an interactive augmented physical map installation for a tourist attraction (right)

In another installation, exhibited at an art-science festival in Shanghai's Oriental Pearl Tower (China), a shining icon is projected onto a traditional physical tourist map. Tourists can select different sites with their hands or with props on the map to see related video information. A tour guide can then create a touring path on the map with a laser pointer, or a visitor can produce a path by walking on a projected map on the ground. A three-dimensional walk-through of the tour scene can then be triggered along the created path [55]. By integrating the traditional tangible map with augmented digital information, and by enabling vision-based and tangible interaction techniques, the projector-camera system can provide tourists and tour guides with a fresh sightseeing experience.

The Future of Projector-Camera Systems

Projector-camera systems have already found practical applications in theater, museums, historic sites, open-air festivals, trade shows, advertisement, visual effects, theme parks, and art installations. With advances in technology and techniques, they will be applied in many more areas in the future.

Future projectors will become more compact and will require little power and cooling. Reflective technology (such as DLP or LCOS) will increasingly replace transmissive technology (e.g., LCD). This will lead to increased brightness and extremely high update rates. GPUs for real-time graphics and vision processing will also be integrated. While resolution, contrast, and speed will keep increasing, production costs and market prices will continue to fall. Conventional UHP lamps will be replaced by powerful LEDs or multi-channel lasers. This will make projectors suitable for mobile applications. Projector-camera technology is currently being integrated into mobile devices, such as cellphones, and supports truly flexible presentation methods. Image correction techniques, such as the ones explained above, are essential for these devices, since projection screens will most likely not become mobile.

But projector-camera systems will not only be used as display devices. In the future, they will also serve as intelligent, spatially and temporally controllable light sources. Projector-based illumination techniques will not only solve problems in professional domains, such as microscopy or endoscopy, but might one day also be applied in more general contexts.

Imagine that networked projector-camera systems become as cheap and as compact as light bulbs. They could not only be turned on and off, but would offer synthetic room illumination and interactive display capabilities everywhere. For instance, they could produce individual mood profiles and ambient light situations, as well as enable internet access wherever you stand.

