
DOCUMENT INFORMATION

Title: Advanced High Dynamic Range Imaging
Authors: Francesco Banterle, Alessandro Artusi, Kurt Debattista, Alan Chalmers
Foreword: Holly Rushmeier, Yale University
Field: Image Processing / Computer Graphics
Type: Book
Year: 2011
City: Boca Raton
Pages: 276
Size: 10.13 MB


Francesco Banterle • Alessandro Artusi
Kurt Debattista • Alan Chalmers

Advanced High Dynamic Range Imaging

High dynamic range (HDR) imaging is the term given to the capture, storage, manipulation, transmission, and display of images that more accurately represent the wide range of real-world lighting levels. With the advent of a true HDR video system and its 20-year history of creating static images, HDR is finally ready to enter the "mainstream" of imaging technology. This book provides a comprehensive practical guide to facilitate the widespread adoption of HDR technology. By examining the key problems associated with HDR imaging and providing detailed methods to overcome these problems, the authors hope readers will be inspired to adopt HDR as their preferred approach for imaging the real world. Key HDR algorithms are provided as MATLAB code as part of the HDR Toolbox.

"This book provides a practical introduction to the emerging new discipline of high dynamic range imaging that combines photography and computer graphics. By providing detailed equations and code, the book gives the reader the tools needed to experiment with new techniques for creating compelling images."

—From the Foreword by Holly Rushmeier, Yale University

Download MATLAB source code for the book at www.advancedhdrbook.com

Francesco Banterle • Alessandro Artusi
Kurt Debattista • Alan Chalmers

Foreword by Holly Rushmeier

Advanced High Dynamic Range Imaging
Theory and Practice

Francesco Banterle, Alessandro Artusi, Kurt Debattista, Alan Chalmers

A K Peters, Ltd., Natick, Massachusetts

6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2011 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Version Date: 20120202

International Standard Book Number-13: 978-1-4398-6594-1 (eBook - PDF)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Visit the Taylor & Francis Web site at

http://www.taylorandfrancis.com

and the CRC Press Web site at

http://www.crcpress.com

Trang 6

Dedicated to all of you: Franca, Nella, Sincero, Marco, Giancarlo, and Despo. You are always in my mind. —AA

To Alex. Welcome! —KD

To Eva, Erika, Andrea, and Thomas. You are my reality! —AC

Contents

1 Introduction 1
1.1 Light, Human Vision, and Color Spaces 4

2 HDR Pipeline 11
2.1 HDR Content Generation 12
2.2 HDR Content Storing 22
2.3 Visualization of HDR Content 26

3 Tone Mapping 33
3.1 TMO MATLAB Framework 36
3.2 Global Operators 38
3.3 Local Operators 61
3.4 Frequency-Based Operators 75
3.5 Segmentation Operators 86
3.6 New Trends to the Tone Mapping Problem 103
3.7 Summary 112

4 Expansion Operators for Low Dynamic Range Content 113
4.1 Linearization of the Signal Using a Single Image 115
4.2 Decontouring Models for High Contrast Displays 119
4.3 EO MATLAB Framework 121
4.4 Global Models 122
4.5 Classification Models 128
4.6 Expand Map Models 134
4.7 User-Based Models: HDR Hallucination 144
4.8 Summary 145

5 Image-Based Lighting 149
5.1 Environment Map 149
5.2 Rendering with IBL 155
5.3 Summary 174

6 Evaluation 175
6.1 Psychophysical Experiments 175
6.2 Error Metric 187
6.3 Summary 190

7 HDR Content Compression 193
7.1 HDR Compression MATLAB Framework 193
7.2 HDR Image Compression 194
7.3 HDR Texture Compression 205
7.4 HDR Video Compression 218
7.5 Summary 225

Foreword

We perceive the world through the scattering of light from objects to our eyes. Imaging techniques seek to simulate the array of light that reaches our eyes to provide the illusion of sensing scenes directly. Both photography and computer graphics deal with the generation of images. Both disciplines have to cope with the high dynamic range in the energy of visible light that human eyes can sense. Traditionally photography and computer graphics took different approaches to the high dynamic range problem. Work over the last ten years, though, has unified these disciplines and created powerful new tools for the creation of complex, compelling, and realistic images. This book provides a practical introduction to the emerging new discipline of high dynamic range imaging that combines photography and computer graphics.

Historically, traditional wet photography managed the recording of high dynamic range imagery through careful design of camera optics and the material layers that form film. The ingenious processes that were invented enabled the recording of images that appeared identical to real-life scenes. Further, traditional photography facilitated artistic adjustments by the photographer in the darkroom during the development process. However, the complex relationship between the light incident on the film and the chemistry of the material layers that form the image made wet photography unsuitable for light measurement.

The early days of computer graphics also used ingenious methods to work around two physical constraints—inadequate computational capabilities for simulating light transport and display devices with limited dynamic range. To address the limited computational capabilities, simple heuristics such as Phong reflectance were developed to mimic the final appearance of objects. By designing heuristics appropriately, images were computed that always fit the narrow display range. It wasn't until the early 1980s that computational capability had increased to the point that full lighting simulations were possible, at least on simple scenes.

I had my own first experience with the yet-unnamed field of high dynamic range imaging in the mid-1980s. I was studying one particular approach to lighting simulation—radiosity. I was part of a team that designed experiments to demonstrate that the lengthy computation required for full lighting simulation gave results superior to results using simple heuristics. Naively, several of us thought that simply photographing our simulated image from a computer screen and comparing it to a photograph of a real scene would be a simple way to demonstrate that our simulated image was more accurate. Our simple scene, now known as the Cornell box, was just an empty cube with one blue wall, one red wall, a white wall, a floor and ceiling, and a flat light source that was flush with the cube ceiling. We quickly encountered the complexity of film processing. For example, the very red light from our tungsten light source, when reflected from a white surface, looked red on film—if we used the same film to image our computer screen and the real box. Gary Meyer, a senior member of the team who was writing his dissertation on color in computer graphics, patiently explained to us how complicated the path was from incident light to the recorded photographic image.

Since we could not compare images with photography, and we had no digital cameras at the time, we could only measure light directly with a photometer that measured light over a broad range of wavelengths and incident angles. Since this gave only a crude evaluation of the accuracy of the lighting simulation, we turned to the idea of having people view the simulated image on the computer screen and the real scene directly through view cameras to eliminate obvious three-dimensional cues. However, here we encountered the dynamic range problem since viewing the light source directly impaired the perception of the real scene and simulated scene together. Our expectation was that the two would look the same, but color constancy in human vision wreaked havoc with simultaneously displaying a bright red tungsten source and the simulated image with the light source clipped to monitor white. Our solution at that time for the comparison was to simply block the direct view of the light source in both scenes. We successfully showed that in images with limited dynamic range, our simulations were more accurate when compared to a real scene than previous heuristics, but we left the high dynamic range problem hanging.

Through the 1980s and 1990s lighting simulations increased in efficiency and sophistication. Release of physically accurate global illumination software such as Greg Ward's Radiance made such simulations widely accessible. For a while users were satisfied to scale and clip computed values in somewhat arbitrary ways to map the high dynamic range of computed imagery to the low dynamic range cathode ray tube devices in use at the time. Jack Tumblin, an engineer who had been working on the problem of presenting high dynamic range images in flight simulators, ran across the work in computer graphics lighting simulation and assumed that a principled way to map physical lighting values to a display had been developed in computer graphics. Finding out that in fact there was no such principled approach, he began mining past work in photography and television that accounted for human perception in the design of image capture and display systems, developing the first tone mapping algorithms in computer graphics. Through the late 1990s the research community began to study alternative tone mapping algorithms and to consider their usefulness in increasing the efficiency of global illumination calculations for image synthesis.

At the same time, in the 1980s and 1990s the technology for the electronic recording of digital images steadily decreased in price and increased in ease of use. Researchers in computer vision and computer graphics, such as Paul Debevec and Jitendra Malik at Berkeley, began to experiment with taking series of digital images at varying exposures and combining them into true high dynamic range images with accurate recordings of the incident light. The capability to compute and capture true light levels opened up great possibilities for unifying computer graphics and computer vision. Compositing real images with synthesized images having consistent lighting effects was just one application. Examples of other processes that became possible were techniques to capture real lighting and materials with digital photography that could then be used in synthetic images.

With new applications made possible by unifying techniques from digital photography and accurate lighting simulation came many new problems to solve and possibilities to explore. Tone mapping was found not to be a simple problem with just one optimum solution but a whole family of problems. There are different possible goals: images that give the viewer the same visual impression as viewing the physical scene, images that are pleasing, or images that maximize the visibility of detail. There are many different contexts, such as dynamic scenes and low-light conditions. There is a great deal of low dynamic range imagery that has been captured and generated in the past; how can this be expanded to be used in the same context as high dynamic range imagery? What compression techniques can be employed to deal with the increased data generated by high dynamic range imaging systems? How can we best evaluate the fidelity of displayed images?

This book provides a comprehensive guide to this exciting new area. By providing detailed equations and code, the book gives the reader the tools needed to experiment with new techniques for creating compelling images.

—Holly Rushmeier
Yale University

Preface

The human visual system (HVS) is remarkable. Through the process of eye adaptation, our eyes are able to cope with the wide range of lighting in the real world. In this way we are able to see enough to get around on a starlit night and can clearly distinguish color and detail on a bright sunny day. Even before the first permanent photograph in 1826 by Joseph Nicéphore Niépce, camera manufacturers and photographers have been striving to capture the same detail a human eye can see. Although a color photograph was achieved as early as 1861 by James Maxwell and Thomas Sutton [130], and an electronic video camera tube was invented in the 1920s, the ability to simultaneously capture the full range of lighting that the eye can see at any level of adaptation continues to be a major challenge. The latest step towards achieving this "holy grail" of imaging was in 2009 when a video camera capable of capturing 20 f-stops (1920 × 1080 resolution) at 30 frames a second was shown at the annual ACM SIGGRAPH conference by the German high-precision camera manufacturer Spheron VR and the International Digital Laboratory at the University of Warwick, UK.

High dynamic range (HDR) imaging is the term given to the capture, storage, manipulation, transmission, and display of images that more accurately represent the wide range of real-world lighting levels. With the advent of a true HDR video system, and from the experience of more than 20 years of static HDR imagery, HDR is finally ready to enter the "mainstream" of imaging technology. The aim of this book is to provide a comprehensive practical guide to facilitate the widespread adoption of HDR technology. By examining the key problems associated with HDR imaging and providing detailed methods to overcome these problems, together with supporting MATLAB code, we hope readers will be inspired to adopt HDR as their preferred approach for imaging the real world.

Advanced High Dynamic Range Imaging covers all aspects of HDR imaging from capture to display, including an evaluation of just how closely the results of HDR processes are able to recreate the real world. The book is divided into seven chapters. Chapter 1 introduces the basic concepts. This includes details on the way a human eye sees the world and how this may be represented on a computer. Chapter 2 sets the scene for HDR imaging by describing the HDR pipeline and all that is necessary to capture real-world lighting and then subsequently display it. Chapters 3 and 4 investigate the relationship between HDR and low dynamic range (LDR) content and displays. The numerous tone mapping techniques that have been proposed over more than 20 years are described in detail in Chapter 3. These techniques tackle the problem of displaying HDR content in a desirable manner on LDR displays. In Chapter 4, expansion operators, generally referred to as inverse (or reverse) tone mappers (iTMOs), are considered as part of the opposite problem: how to expand LDR content for display on HDR devices. A major application of HDR technology, image-based lighting (IBL), is considered in Chapter 5. This computer graphics approach enables real and virtual objects to be relit by HDR lighting that has been previously captured. So, for example, the CAD model of a car may be lit by lighting previously captured in China to allow a car designer to consider how a particular paint scheme may appear in that country. Correctly applied IBL can thus allow such hypothesis testing without the need to take a physical car to China. Another example could be actors being lit accurately as if they were in places they have never been. Many tone mapping and expansion operators have been proposed over the years. Several of these attempt to create as accurate a representation of the real world as possible within the constraints of the LDR display or content. Chapter 6 discusses methods that have been proposed to evaluate just how successful tone mappers have been in displaying HDR content on LDR devices and how successful expansion methods have been in generating HDR images from legacy LDR content. Capturing real-world lighting generates a large amount of data. The HDR video camera shown at SIGGRAPH requires 24 MB per frame, which equates to almost 42 GB for a minute of footage (compared with just 9 GB for a minute of LDR video). The final chapter of Advanced High Dynamic Range Imaging examines the issues of compressing HDR imagery to enable it to be manageable for storage, transmission, and manipulation and thus practical on existing systems.

The nature of MATLAB allows it to rapidly demonstrate many algorithms in an intuitive manner. It is for this reason we have chosen to include the key HDR algorithms as MATLAB code as part of what we term the HDR Toolbox. An overview of the HDR Toolbox is given in Appendix C. In Advanced High Dynamic Range Imaging, the common parts of MATLAB code are presented at the beginning of each chapter. The remaining code for each technique is then presented at the point in the chapter where the technique is described. The code always starts with the input parameters that the specific method requires.

For example, in Listing 1, the code segment for the Schlick tone mapping operator, the method takes the following parameters as input: schlick_mode specifies the type of model of the Schlick technique used. We may have three cases: standard, calib, and nonuniform modes. The standard mode takes the parameter p as input from the user, while the calib and nonuniform modes use the uniform and nonuniform quantization techniques, respectively. The variable schlick_p is the parameter p or p' depending on the mode used, schlick_bit is the number of bits N of the output display, schlick_dL0 is the parameter L0, and schlick_k is the parameter k. The first step is to extract the luminance channel from the image and the maximum, L_Max, and the minimum luminance, L_Min. These values can be used for calculating p. Afterwards, based on the selected mode, one of the three modalities is chosen and the parameter p either is given by the user (standard mode) or is equal to Equation (3.9) or to Equation (3.10). Finally, the dynamic range of the luminance channel is reduced by applying Equation (3.8).
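As a concrete illustration, a call to the operator might look as follows. This is only a sketch: the function name SchlickTMO, the helper hdrimread, the file name, and the parameter values are ours and may differ from the actual HDR Toolbox interface.

    % Tone map an HDR image with the Schlick operator (illustrative names).
    img = hdrimread('office.hdr');                % hypothetical HDR reader
    % standard mode: p is supplied directly by the user
    imgTMO = SchlickTMO(img, 'standard', 200, 8, 1, 1);
    % calib mode: p is estimated from L_Max and L_Min (Equation (3.9))
    imgTMO2 = SchlickTMO(img, 'calib', 0, 8, 1, 1);
    imwrite(imgTMO .^ (1 / 2.2), 'office_schlick.png');  % gamma for display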

Acknowledgements

Many people provided help and support during my doctoral research and the writing of this book. Special thanks go to the wonderful colleagues, staff, and professors I met during this time in Warwick and Bristol: Patrick, Kurt, Alessandro, Alan, Karol, Kadi, Luis Paulo, Sumanta, Piotr, Roger, Matt, Anna, Cathy, Yusef, Usama, Dave, Gav, Veronica, Timo, Alexa, Marina, Diego, Tom, Jassim, Carlo, Elena, Alena, Belma, Selma, Jasminka, Vedad, Remi, Elmedin, Vibhor, Silvester, Gabriela, Nick, Mike, Giannis, Keith, Sandro, Georgina, Leigh, John, Paul, Mark, Joe, Gavin, Maximino, Alexandrino, Tim, Polly, Steve, Simon, and Michael. The VCG Laboratory at ISTI-CNR generously gave me time to write and were supportive colleagues.

I am deeply indebted to my family for the support I have received: my parents, Maria Luisa and Renzo; my brother Piero and his wife, Irina; and my brother Paolo and his wife, Elisa. Finally, for her patience, good humor, and love during the writing of this book, I thank Silvia.

—Francesco Banterle

This book started many years ago when I decided to move from Color Science to Computer Graphics. Thanks to this event, I had the opportunity to move to Vienna and chose to work in the HDR field. I am very grateful to Werner Purgathofer, who gave me the possibility to work and start my PhD at the Vienna University of Technology and also the chance to know Meister Eduard Groeller. I am grateful to my coauthors: Alan Chalmers gave me the opportunity to share with him this adventure that started in a taxi driving back from the airport during one of our business trips; also, we have shared the foundation of goHDR, which has been another important activity, and we are progressively starting to see the results day by day. Kurt Debattista and Francesco Banterle are two excellent men of science, and from them I have learned many things. At the Warwick Digital Laboratory, I have had the possibility to share several professional moments with young researchers; thanks to Vedad, Carlo, Jass, Tom, Piotr, Alena, Silvester, Vibhor, and Elmedin as well as many collaborators such as Sumanta N. Pattanaik, Mateu Sbert, Karol Myszkowski, Attila and Laszlo Neumann, and Yiorgos Chrysanthou. I would like to thank with all my heart my mother, Franca, and grandmother Nella, who are always in my mind. Grateful thanks to my father, Sincero, and brothers, Marco and Giancarlo, as well as my fiancée, Despo; they have always supported my work. Every line of this book, and every second I spent in writing it, is dedicated to all of them.

—Alessandro Artusi

First, I am very grateful to the three coauthors whose hard work has made this book possible. I would like to thank my PhD students who are always willing to help and offer good, sound technical advice: Vibhor Aggarwal, Tom Bashford-Rogers, Keith Bugeja, Piotr Dubla, Sandro Spina, and Elmedin Selmanovic. I would also like to thank the following colleagues, many of whom have been an inspiration and with whom it has been a pleasure working over the past few years at Bristol and Warwick: Matt Aranha, Kadi Bouatouch, Kirsten Cater, Joe Cordina, Gabriela Czanner, Silvester Czanner, Sara de Freitas, Gavin Ellis, Jassim Happa, Carlo Harvey, Vedad Hulusic, Richard Gillibrand, Patrick Ledda, Pete Longhurst, Fotis Liarokapis, Cheng-Hung (Roger) Lo, Georgia Mastoropoulou, Antonis Petroutsos, Alberto Proenca, Belma Ramic-Brkic, Selma Rizvic, Luis Paulo Santos, Simon Scarle, Veronica Sundstedt, Kevin Vella, Greg Ward, and Xiaohui (Cathy) Yang. My parents have always supported me and I will be eternally grateful. My grandparents were an inspiration and are sorely missed—they will never be forgotten. Finally, I would like to wholeheartedly thank my wife, Anna, for her love and support, and Alex, who has made our lives complete.

—Kurt Debattista

Seetzen, Gerhard Bonnet, and Greg Ward; together with the growing body of work from around the world, it has taken HDR from a niche research area into general use. HDR now stands at the cusp of a step change in media technology, analogous to the change from black and white to color. In the not-too-distant future, capturing and displaying real-world lighting will be the norm, with an HDR television in every home. Many exciting new research and commercial opportunities will present themselves, with new companies appearing, such as our own goHDR, as the world embraces HDR en masse. In addition to all my groups over the years, I would like to thank Professor Lord Bhattacharyya and WMG, University of Warwick for having the foresight to establish Visualisation as one of the key research areas within their new Digital Laboratory. Together with Advantage West Midlands, they provided the opportunity that led to the development, with Spheron VR, of the world's first true HDR video camera. Christopher Moir, Ederyn Williams, Mike Atkins, Richard Jephcott, Keith Bowen FRS, and Huw Bowen share the vision of goHDR, and their enthusiasm and experience are making this a success. I would also like to thank the Eurographics Rendering Symposium and SCCG communities, which are such valuable venues for developing research ideas, in particular Andrej Ferko, Karol Myszkowski, Kadi Bouatouch, Max Bessa, Luis Paulo dos Santos, Michi Wimmer, Anders Ynnerman, Jonas Unger, and Alex Wilkie. Finally, thank you to Eva, Erika, Andrea, and Thomas for all their love and support.

—Alan Chalmers

Introduction

The computer graphics and related industries, in particular those involved with films, games, simulation, virtual reality, and military applications, continue to demand more realistic images displayed on a computer, that is, synthesized images that more accurately match the real scene they are intended to represent. This is particularly challenging when considering images of the natural world that present our visual system with a wide range of colors and intensities. A starlit night has an average luminance level of around 10^-3 cd/m^2, and daylight scenes are close to 10^6 cd/m^2. Humans can see detail in regions that vary by 1:10^4 at any given eye adaptation level. With the possible exception of cinema, there has been little push for achieving greater dynamic range in the image capture stage, because common displays and viewing environments limit the range of what can be presented to about two orders of magnitude between minimum and maximum luminance. A well-designed cathode ray tube (CRT) monitor may do slightly better than this in a darkened room, but the maximum display luminance is only around 100 cd/m^2, and in the case of an LCD display the maximum luminance may reach 300-400 cd/m^2, which does not even begin to approach daylight levels. A high-quality xenon film projector may get a few times brighter than this, but it is still two orders of magnitude away from the optimal light level for human acuity and color perception. This is now all changing with high dynamic range (HDR) imagery and novel capture and display HDR technologies, offering a step-change in traditional imaging approaches.

In the last two decades, HDR imaging has revolutionized the field of computer graphics and other areas such as photography, virtual reality, visual effects, and the video game industry. Real-world lighting can now be captured, stored, transmitted, and fully utilized for various applications without the need to linearize the signal and deal with clamped values. The very dark and bright areas of a scene can be recorded at the same time onto an image or a video, avoiding under-exposed and over-exposed areas (see Figure 1.1). Traditional imaging methods, on the other hand, do not use physical values and typically are constrained by limitations in technology that could only handle 8 bits per color channel per pixel. Such imagery (8 bits or less per color channel) is known as low dynamic range (LDR) imagery.

Figure 1.1. Different exposures of the same scene that allow the capture of (a) very bright and (b) dark areas and (c) the corresponding HDR image in false colors.

The importance of recording light is comparable to the introduction of color photography. An HDR image may be generated by capturing multiple images of the same scene at different exposure levels and merging them to reconstruct the original dynamic range of the captured scene. There are several algorithms for merging LDR images; Debevec and Malik's method [50] is an example of this. An example of a commercial implementation is the Spheron HDR VR [192], which can capture still spherical images with a dynamic range of 6 × 10^7 : 1. Although information could be recorded in one shot using native HDR CCDs, problems of sensor noise typically occur at high resolutions.

HDR images/videos may occupy four times the amount of memory required by corresponding LDR image content. This is because in HDR images, light values are stored using three floating point numbers. This has a major effect not only on storing and transmitting HDR data but also in terms of processing it. As a consequence, efficient representations of the floating point numbers have been developed for HDR imaging, and many classic compression algorithms such as JPEG and MPEG have been extended to handle HDR images and videos.

Once HDR content has been efficiently captured and stored, it can be utilized for a variety of applications. One popular application is the relighting of synthetic or real objects. The HDR data stores detailed lighting information of an environment. This information can be exploited for detecting light sources and using them for relighting objects (see Figure 1.2). Such relighting is very useful in many fields such as augmented reality, visual effects, and computer graphics. This is because the appearance of the image is transferred onto the relit objects.

Figure 1.2. A relighting example. (a) A spherical HDR image in false color. (b) Light sources extracted from it. (c) A relit Stanford's Happy Buddha model [78] using those extracted light sources.

Another important application is to capture samples of the bidirectional reflectance distribution function (BRDF), which describes how light interacts with a given material. These samples can be used to reconstruct the BRDF. HDR data is required for an accurate reconstruction (see Figure 1.3).

Figure 1.3. An example of capturing samples of a BRDF. (a) A tone mapped HDR image showing a sample of the BRDF from a Parthenon's block [199]. (b) The reconstructed materials in (a) from 80 samples for each of three exposures. (Images are courtesy of Paul Debevec [199].)

Moreover, all fields that use LDR imaging can benefit from HDR imaging. For example, disparity calculations in computer vision can be improved in challenging scenes with bright light sources. This is because information in the light sources is not clamped; therefore, disparity can be computed for light sources and reflective objects with higher precision than using clamped values.

Once HDR content is obtained, it needs to be visualized. HDR images/videos do not typically fit the dynamic range of classic LDR displays such as CRT or LCD monitors, which is around 200:1. Therefore, when using such displays, the HDR content has to be processed by compressing the dynamic range. This operation is called tone mapping (see Figure 1.4). Recently, monitors that can natively visualize HDR content have been proposed by Seetzen et al. [190] and are now starting to appear commercially.

Figure 1.4. An example of HDR visualization on an LDR monitor. (a) An HDR image in false color. (b) The image in (a) has been processed to visualize details in bright and dark areas. This process is called tone mapping.

1.1 Light, Human Vision, and Color Spaces

This section introduces basic concepts of visible light and units for measuring it, the human visual system (HVS) focusing on the eye, and color spaces. These concepts are very important in HDR imaging as they encapsulate the physical-real values of light, from very dark values (i.e., 10^-3 cd/m^2) to very bright ones (i.e., 10^6 cd/m^2). Moreover, the perception of a scene by the HVS depends greatly on the lighting conditions.

Visible light is a form of radiant energy that travels in space, interacting with materials where it can be absorbed, refracted, reflected, and transmitted (see Figure 1.5). Traveling light can reach human eyes, stimulating them to produce visual sensations depending on the wavelength (see Figure 1.6).

Figure 1.5. (a) Light interacts with materials according to the material's properties. There are two main kinds of reflections: specular and diffuse. (b) Specular reflections: a ray is reflected in a particular direction. (c) Diffuse reflections: a ray is reflected in a random direction.

Radiometry and Photometry define how to measure light and its units over time, space, and direction. While the former measures physical units, the latter takes into account the human eye, where spectral values are weighted by the spectral responses of a standard observer (the x̄, ȳ, and z̄ curves). Radiometry and Photometry units were standardized by the Commission Internationale de l'Eclairage (CIE) [38]. The main radiometric units are:

• Radiant energy (Q_e). This is the basic unit for light. It is measured in joules (J).

• Radiance (L_e = d^2 P_e / (dA cos θ dω)). Radiance is the amount of Radiant Power arriving/leaving at a point in a particular direction. It is measured in watts per steradian per square meter (W × sr^-1 × m^-2).

Figure 1.6. The electromagnetic spectrum. Visible light has a very limited spectrum between 400 nm and 700 nm.

The main photometric units are:

• Luminous power (P_v). Luminous Power is the weighted Radiant Power. It is measured in lumens (lm), a derived unit from candela (lm = cd × sr).

• Luminous energy (Q_v). This is analogous to the Radiant Energy. It is measured in lumen seconds (lm × s).

• Luminous intensity (I_v). This is the Luminous Power per direction. It is measured in candela (cd), which is equivalent to lm × sr^-1.

• Illuminance (E_v). Illuminance is analogous to Irradiance. It is measured in lux, which is equivalent to lm × m^-2.

• Luminance (L_v). Luminance is the weighted Radiance. It is measured in candela per square meter (cd × m^-2).

The HVS does not perceive absolute luminance values directly; rather, it responds to changes obtained by increasing or decreasing the relative luminance. This relative measure is called contrast.

Contrast is formally a relationship between the darkest and the brightest value in a scene, and it can be calculated in different ways. The main contrast relationships are Weber Contrast (C_W), Michelson Contrast (C_M), and Ratio Contrast (C_R). These are defined as

C_W = (L_max − L_min) / L_min,    C_M = (L_max − L_min) / (L_max + L_min),    C_R = L_max / L_min,

where L_min and L_max are respectively the minimum and maximum luminance values of the scene. Throughout this book, C_R is used as the contrast definition.
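For a luminance image stored in a MATLAB matrix L, the three contrast measures can be computed directly. This is a minimal sketch; the variable names are ours:

    % Contrast measures for a luminance matrix L.
    L_min = min(L(:));
    L_max = max(L(:));
    C_W = (L_max - L_min) / L_min;             % Weber contrast
    C_M = (L_max - L_min) / (L_max + L_min);   % Michelson contrast
    C_R = L_max / L_min;                       % ratio contrast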

The eye is an organ that gathers light onto photoreceptors, which then convert light into signals (see Figure 1.7). These are transmitted through the optical nerve to the visual cortex, an area of the brain that processes these signals, producing the perceived image. This full system, which is responsible for vision, is referred to as the human visual system (HVS) [140]. Light, which enters the eye, first passes through the cornea, a transparent membrane. Then it enters the pupil, an aperture that is modified by the iris, a muscular diaphragm. Subsequently, light is refracted by the lens and hits photoreceptors in the retina. Note that inside the eye there are two liquids, the vitreous and aqueous humors. The former fills the eye, keeping its shape and the retina against the inner wall. The latter is between the cornea and the lens and maintains the intraocular pressure [140].

There are two types of photoreceptors: cones and rods. The cones, numbering around 6 million, are located mostly in the fovea. They are sensitive at luminance levels between 10^-2 cd/m^2 and 10^8 cd/m^2 (photopic vision or daylight vision), and they are responsible for the perception of high frequency patterns, fast motion, and colors. Furthermore, color vision is due to three types of cones: short wavelength cones, sensitive to wavelengths around 435 nm; middle wavelength cones, sensitive around 530 nm; and long wavelength cones, sensitive around 580 nm. The rods, numbering around 90 million, are sensitive at luminance levels between 10^-6 cd/m^2 and 10 cd/m^2 (scotopic vision or night vision). The rods are more sensitive than cones but do not provide color vision. This is the reason why we are unable to discriminate between colors at low-level illumination conditions. There is only one type of rod, and it is located around the fovea but is absent in it. This is why high frequency patterns cannot be distinguished at low lighting conditions. The mesopic range, where both rods and cones are active, is defined between 10^-2 cd/m^2 and 10 cd/m^2. Note that an adaptation time is needed for passing from photopic to scotopic vision and vice versa; for more details, see [140].

Figure 1.7. The human eye.

The rods and cones compress the original signal, reducing the dynamic range of incoming light. This compression follows a sigmoid function:

R / R_max = I^n / (I^n + σ^n),

where R is the photoreceptor response, R_max is the maximum photoreceptor response, and I is the light intensity. The variables σ and n are respectively the semisaturation constant and the sensitivity control exponent, which are different for cones and rods [140].
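In MATLAB the response curve can be evaluated in one vectorized expression. This is a sketch; the values chosen for n and σ are illustrative, not the physiological ones:

    % Normalized photoreceptor response R/R_max for an intensity matrix I.
    n = 0.7;          % sensitivity control exponent (illustrative)
    sigma = 1.0;      % semisaturation constant (illustrative)
    R_norm = (I .^ n) ./ (I .^ n + sigma ^ n);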

A color space is a mathematical description for representing colors, typically represented by three components called primary colors. There are two classes of color spaces: device dependent and device independent. The former describes the color information in relation to the technology used by the color device to reproduce the color. In the case of a computer monitor, it depends on the set of primary phosphors, while in an ink-jet printer it depends on the set of primary inks. A drawback of this representation is that a color with the same coordinates, such as R = 150, G = 40, B = 180, will appear different when represented on different monitors. The device independent class is not dependent on the characteristics of a particular device; in this way a color represented in such a color space always corresponds to the same color information. A typical device-dependent color space is the RGB color space. The RGB color space is a Cartesian cube represented by three additive primaries: Red, Green, and Blue. A typical device-independent color space is the CIE 1931 XYZ color space, which is formally defined as the projection of a spectral power distribution I onto the color-matching functions x̄, ȳ, and z̄:

X = ∫_380^830 I(λ) x̄(λ) dλ,    Y = ∫_380^830 I(λ) ȳ(λ) dλ,    Z = ∫_380^830 I(λ) z̄(λ) dλ.

The functions x̄, ȳ, and z̄ are plotted in Figure 1.8(a). Note that the XYZ color space was designed in such a way that the Y component measures the luminance of the color. The chromaticity of the color is derived from the X, Y, and Z components and is visualized in the CIE xy chromaticity diagram. This diagram shows all colors perceivable by the HVS (see Figure 1.8(b)).

Figure 1.8. (a) The CIE 1931 color-matching functions x̄, ȳ, and z̄. (b) The CIE xy chromaticity diagram.
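Numerically, the integrals reduce to weighted sums over a sampled spectrum. A minimal MATLAB sketch, assuming lambda, I, xb, yb, and zb hold the wavelength grid (380-830 nm), the spectral power distribution, and the sampled color-matching functions (all our variable names):

    % Tristimulus values by numerical integration (trapezoidal rule).
    X = trapz(lambda, I .* xb);
    Y = trapz(lambda, I .* yb);
    Z = trapz(lambda, I .* zb);
    x = X / (X + Y + Z);   % chromaticity coordinates for Figure 1.8(b)
    y = Y / (X + Y + Z);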

A popular color space for CRT and LCD monitors is sRGB [195]. This color space defines as primaries the colors red (R), green (G), and blue (B). Moreover, each color in sRGB is a linear additive combination of values in [0, 1] of the three primaries. Therefore, not all colors can be represented, only those inside the triangle generated by the three primaries (see Figure 1.8(b)).

A linear relationship exists between the XYZ and RGB color spaces. RGB colors can be converted into XYZ ones using the following conversion matrix:

    [X]   [0.412 0.358 0.181] [R]
    [Y] = [0.213 0.715 0.072] [G]
    [Z]   [0.019 0.119 0.950] [B]
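The same conversion in MATLAB for an m × n × 3 linear RGB image (a sketch; img is our variable name):

    % Linear RGB to XYZ using the matrix M above.
    M = [0.412 0.358 0.181; ...
         0.213 0.715 0.072; ...
         0.019 0.119 0.950];
    [m, n, ~] = size(img);
    rgb = reshape(img, m * n, 3);       % one row per pixel
    xyz = reshape(rgb * M', m, n, 3);   % apply M to every pixel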

Furthermore, sRGB presents a nonlinear transformation for each R, G, and B channel to linearize the signal when displayed on LCD and CRT monitors. This is because there is a nonlinear relationship between the output intensity generated by the display device and the input voltage. This relationship is generally approximated with a power function with value γ = 2.2 (in the case of sRGB, γ = 2.4). The linearization is achieved by applying the inverse value:

R_v = R^(1/γ),    G_v = G^(1/γ),    B_v = B^(1/γ),

where R_v, G_v, and B_v are respectively the red, green, and blue channels ready for visualization. This process is called gamma correction.
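A sketch of the correction for a linear image img with values in [0, 1]:

    % Gamma correction before display.
    gamma = 2.2;                  % 2.4 in the case of sRGB
    imgOut = img .^ (1 / gamma);  % applied to the R, G, and B channels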

The RGB color space is very popular in HDR imaging. However, many computations are calculated on the luminance channel Y from XYZ, which is usually referred to as L. In addition, common statistics from this luminance are often used, such as the maximum value, L_max, the minimum one, L_min, and the mean value. The mean can be computed as the arithmetic average, L_avg, or the logarithmic one, L_H:

L_avg = (1/N) Σ_{i=1}^{N} L(x_i),    L_H = exp( (1/N) Σ_{i=1}^{N} log(L(x_i) + ε) ),

where x_i are the coordinates of the ith pixel, N is the number of pixels, and ε > 0 is a small constant for avoiding singularities. Note that in HDR imaging, subscripts w and d (representing world luminance and display luminance, respectively) refer to HDR and LDR values. The main symbols used in HDR image processing are shown in Table 1.1 for the luminance channel L.

Symbol   Description
L_w      HDR luminance value
L_d      LDR luminance value
L_H      Logarithmic mean luminance value
L_avg    Arithmetic mean luminance value
L_max    Maximum luminance value
L_min    Minimum luminance value

Table 1.1. The main symbols used for the luminance channel in HDR image processing.
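The two means defined above translate directly into MATLAB. A sketch, where delta plays the role of the small constant ε:

    % Arithmetic and logarithmic mean of a luminance matrix L.
    delta = 1e-6;                          % avoids log(0) singularities
    L_avg = mean(L(:));                    % arithmetic mean
    L_H = exp(mean(log(L(:) + delta)));    % logarithmic (log-average) mean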


HDR Pipeline

HDR imaging is a revolution in the field of imaging, allowing, as it does, the ability to use and manipulate physically-real light values. This chapter introduces the main processes of HDR imaging, which can be best characterized as a pipeline, termed the HDR pipeline. Figure 2.1 illustrates the distinct stages of the HDR pipeline.

The first stage concerns the generation of HDR content. HDR content can be captured in a number of ways, although limitations in hardware technology, until recently, have meant that HDR content capture has typically required the assistance of software. Section 2.1 outlines different ways in which HDR images can be generated. These include images generated from a series of still LDR images, using computer graphics, and via expansion from single-exposure images. The section also describes exciting new hardware that enables native HDR capture.

Due to the explicit nature of high dynamic range values, HDR content may be considerably larger than its LDR counterpart. To make HDR manageable, efficient storage methods are necessary. In Section 2.2 HDR file formats are introduced. Compression methods can also be applied at this stage. HDR compression methods will be discussed in detail in Chapter 7.

Finally, HDR content can be natively visualized using a number of new display technologies. In Section 2.3.2 we introduce the primary native HDR displays. Such displays are still generally unavailable to the consumer. However, software solutions can be employed to adapt HDR content to be shown on LDR displays while attempting to maintain an HDR viewing experience. Such software solutions take the form of operators that compress the range of luminance in the HDR images to the luminance range of the LDR display. These operators are termed tone mappers, and a large variety of tone mapping operators exist. We will discuss tone mapping in detail in Chapter 3.

Figure 2.1. The HDR pipeline in all its stages. Multiple exposure images are captured and combined, obtaining an HDR image. Then this image is quantized, compressed, and stored. Further processing can be applied to the image. For example, areas of high luminance can be extracted and used to relight a synthetic object. Finally, the HDR image or a tone mapped HDR image can be visualized using native HDR monitors or traditional LDR display technologies.

2.1 HDR Content Generation

In this book we will consider four methods of generating HDR content. The first, and most commonly used until recently, is the generation of HDR content by combining a number of LDR captures at different exposures through the use of software technology. The second, which is likely to become more feasible in the near future, is the direct capture of HDR images using specialized hardware. The third method, popular in the entertainment industries, is the creation of HDR content from virtual environments using physically based renderers. The final method is the generation of HDR content from legacy content consisting of single exposure captures, using software technology to expand the dynamic range of the LDR content.

At the time of writing, available consumer cameras are limited since they can only capture 8-bit images or 12-bit images in RAW format. This does not cover the full dynamic range of irradiance values in most environments in the real world. The most commonly used method of capturing HDR images is to take multiple single-exposure images of the same scene to capture details from the darkest to the brightest areas, as proposed by Mann and Picard [131] (see Figure 2.2 for an example). If the camera has a linear response, the radiance values stored in each exposure for each color channel can be combined to recover the irradiance, E, as

E(x) = ( Σ_{i=1}^{N_e} w(I_i(x)) I_i(x) / Δt_i ) / ( Σ_{i=1}^{N_e} w(I_i(x)) ),        (2.1)

where I_i is the image at the ith exposure, Δt_i is the exposure time for I_i, N_e is the number of images at different exposures, and w(I_i(x)) is a weighting function that removes outliers. For example, high values in one of the exposures will have less noise than low values; on the other hand, high values can be saturated, so middle values can be more reliable. An example of a recovered irradiance map using Equation (2.1) can be seen in Figure 2.2(f).

Figure 2.2. Multiple exposures of the same scene, (a)-(e), and the recovered irradiance map (f) in false color (lux).
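Equation (2.1) translates directly into MATLAB. This is a sketch: stack is assumed to be an m × n × N_e array of linearized exposures, dt their exposure times, and the helper weight is our placeholder for whatever weighting function w is chosen (for example, the hat function defined below for Debevec and Malik's method).

    % Recover the irradiance map E from linearized exposures.
    E_num = zeros(size(stack, 1), size(stack, 2));
    E_den = E_num;
    for i = 1:size(stack, 3)
        w = weight(stack(:, :, i));              % w(I_i(x))
        E_num = E_num + w .* stack(:, :, i) / dt(i);
        E_den = E_den + w;
    end
    E = E_num ./ max(E_den, 1e-9);               % guard against zero weights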

Unfortunately, film and digital cameras do not have a linear response but a more general function f, called the camera response function (CRF). The CRF attempts to compress as much of the dynamic range of the real world as possible into the limited 8-bit storage or into the film medium. Mann and Picard [131] proposed a simple method for calculating f, which consists of fitting the values of pixels at different exposures to a fixed f(x) = a x^γ + b. This parametric f is very limited and does not support most real CRFs.

Debevec and Malik [50] proposed a simple method for recovering a CRF. For the sake of clarity, this method and others will be presented for gray channel images. The value of a pixel in an image is given by the application of a CRF to the irradiance scaled by the exposure time:

I_i(x) = f(E(x) Δt_i).

Rearranging terms and applying a logarithm to both sides, we obtain

log(f^-1(I_i(x))) = log E(x) + log Δt_i,        (2.2)

which leads to the least squares problem

O = Σ_{i=1}^{N_e} Σ_{j=1}^{M} ( w(I_i(x_j)) [ g(I_i(x_j)) − log E(x_j) − log Δt_i ] )^2 + λ Σ_{z=T_min+1}^{T_max−1} ( w(z) g″(z) )^2,        (2.3)

where g = f^-1 is the inverse of the CRF, M is the number of pixels used in the minimization, and T_max and T_min are respectively the maximum and minimum integer values in all images I_i. The second part of Equation (2.3) is a smoothing term for removing noise, where the function w is defined as

is a smoothing term for removing noise, where function w is defined as

w(x) =



x − Tmin if x ≤ 1

2(Tmax+ Tmin), Tmax − x if x > 1

2(Tmax+ Tmin).

Note that the minimization is performed only on a subset of the M pixels, because it is computationally expensive to evaluate for all pixels. This subset is calculated using samples from each region of the image.
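The hat function above translates into a small MATLAB function, vectorized over pixel values x (a sketch; the function name is ours):

    function w = WeightHat(x, Tmin, Tmax)
        % Hat weighting: ramps up to the mid-level, then back down.
        mid = 0.5 * (Tmax + Tmin);
        w = zeros(size(x));
        low = x <= mid;
        w(low)  = x(low) - Tmin;
        w(~low) = Tmax - x(~low);
    end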

Listing 2.1. MATLAB code: combining multiple LDR exposures.

Listing 2.1 shows MATLAB code for combining multiple LDR exposures into a single HDR image. The full code is given in the file BuildHDR.m. The function accepts as input format, an LDR format for reading LDR images. The second parameter lin_type outlines the linearization method to be used, where the possible options are 'linearized' for no linearization (for images that are already linearized on input), 'gamma2.2' for applying a gamma function of 2.2, and 'tabledDeb97', which employs the Debevec and Malik method described above. Finally, the type of weight, weight_type, can also be input. The resulting HDR image is output. After handling the input parameters, the function ReadLDRStack inputs the images from the current directory. The code block in the case statement case 'tabledDeb97' handles the linearization using Debevec and Malik's method outlined previously. Finally, CombineLDR.m combines the stack using the appropriate weighting function.
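A typical invocation would then be along these lines. This is a sketch: the argument values, the weight name, and the helper hdrimwrite are illustrative and may differ from the actual HDR Toolbox interface.

    % Build an HDR image from the LDR stack in the current directory,
    % linearizing with Debevec and Malik's method (values illustrative).
    imgHDR = BuildHDR('jpg', 'tabledDeb97', 'Deb97');
    hdrimwrite(imgHDR, 'scene.hdr');    % hypothetical HDR writer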

Mitsunaga and Nayar [149] improved Debevec and Malik's algorithm with a more robust method based on a polynomial representation of f. They claim that any response function can be modeled using a high-order polynomial:

f^-1(I) = Σ_{k=0}^{P} c_k I^k.

At this point the calibration process can be reduced to the estimation of the polynomial order P and the coefficients c_k. Taking two images of a scene with two different exposure times Δt_1 and Δt_2, the ratio R can be written as the exposure ratio

R_{1,2} = Δt_1 / Δt_2.        (2.4)

The brightness measurement I_i(x) produced by an imaging system is related to the scene radiance E(x)Δt_i at time i via a response function, I_i(x) = f(E(x)Δt_i). From this, I_i(x) can be rewritten as E(x)Δt_i = g(I_i(x)), where g = f^-1. Since the response function of an imaging system is related to the exposure ratio, Equation (2.4) can be rewritten as

R_{1,2}(x) = g(I_1(x)) / g(I_2(x)) = ( Σ_{k=0}^{P} c_k I_1(x)^k ) / ( Σ_{k=0}^{P} c_k I_2(x)^k ),        (2.5)

where the images are ordered so that Δt_1 < Δt_2, and thus R ∈ (0, 1). The number of f-R pairs that satisfy Equation (2.5) is infinite. This ambiguity is alleviated by the use of the polynomial model. The response function can be recovered by formulating an error function such as

ε = Σ_x ( Σ_{k=0}^{P} c_k I_1(x)^k − R_{1,2} Σ_{k=0}^{P} c_k I_2(x)^k )^2,

which is minimized with respect to the coefficients c_k. To reduce searching, when the number of images is high (more than nine), an iterative scheme is used. In this case, the current ratio at the kth step is used to evaluate the coefficients at the (k + 1)th step.

Robertson et al. [184, 185] proposed a method that estimates the unknown response function as well as the irradiance E(x) through the use of a maximum likelihood approach, where the objective function to be minimized is

O = Σ_i Σ_x w(I_i(x)) ( g(I_i(x)) − E(x) Δt_i )^2,

where w is a weight defined by a Gaussian function, which represents the noise in the imaging system used to capture the images. Note that all the presented methods for recovering the CRF can be extended to color images by applying each method separately to each color band.

The multiple exposure methods assume that images are perfectly aligned, there are no moving objects, and CCD noise is not a problem. These are very rare conditions when real-world images are captured. These problems can be minimized by adapting classic alignment, ghost removal, and noise removal techniques from image processing and computer vision (see [12, 71, 94, 98]).

HDR videos can be captured using still images, with techniques such as stop-motion or time-lapse. Under controlled conditions, these methods may provide good results with the obvious limitations that stop-motion and time-lapse entail. Kang et al. [96] extended the multiple exposure methods used for images to videos. Kang et al.'s basic concept is to have a programmed video camera that temporally varies the shutter speed at each frame. The final video is generated by aligning and warping different frames, combining two frames into an HDR one. However, the frame rate of this method is low—around 15 fps—and the scene can only contain slow-moving objects; otherwise artifacts will appear. The method is thus not well suited for real-world situations. Nayar and Branzoi [153] developed an adaptive dynamic range camera where a controllable liquid crystal light modulator is placed in front of the camera. This modulator adapts the exposure of each pixel on the image detector, allowing the capture of scenes with a very large dynamic range. Finally, another method for capturing HDR videos is to capture multiple videos at different exposures using several LDR video cameras with a light beam splitter [9]. Recently, E3D Creative LLC applied the beam splitter technique in the professional field of cinematography using a stereo rig with two Red One video cameras [125]. This allows one to capture high definition video streams in HDR.

A few companies provide HDR cameras based on automatic multiple exposure capturing. The three main cameras are the SpheronCam HDR by SpheronVR [192], the Panoscan MK-3 by Panoscan Ltd. [164], and the Civetta 360 by Weiss AG [229]. These are full 360-degree panoramic cameras with high resolution. The cameras can capture full HDR images; see Table 2.1 for comparisons.

Table 2.1. Device, dynamic range, max resolution, and max capturing time for the three panoramic HDR cameras.

These cameras are rather expensive (on average more than $35,000) and designed for commercial use only. The development of these particular cameras was mainly due to the necessity of quickly capturing HDR images for use in image-based lighting (see Chapter 5), which is extensively used in applications including visual effects, computer graphics, automotive design, and product advertising. More recently, camera manufacturers such as Canon, Nikon, Sony, Sigma, etc. have introduced in consumer or DSLR cameras some HDR capturing features such as multiexposure capturing or automatic exposure bracketing and automatic exposure merging.

The alternative to multiple exposure techniques is to use CCD sensors that can natively capture HDR values. In recent years, CCDs that record into 10/12-bit channels in the logarithmic domain have been introduced by many companies, such as Cypress Semiconductor [45], Omron [160], PTGrey [176], and Neuricam [155]. The main problem with these sensors is that they have low resolutions (640 × 480) and can be very noisy. Therefore, their applications are mainly oriented towards security and automation in factories.

A number of companies have proposed high quality solutions for the entertainment industry. These are the Viper camera by Thomson GV [200]; the Red One, Red Scarlet, and Red Epic cameras by Red Digital Cinema Camera Company [179]; the Phantom HD camera by Vision Research [211]; and Genesis by Panavision [163]. All these video cameras offer high frame rates, low noise, full HD (1920 × 1080) or 4K (4096 × 3072) resolution, and a good dynamic range, 10/12/16 bits per channel in the logarithmic/linear domain. However, they are extremely expensive, and they do not capture the full dynamic range that can be seen by the HVS at any one time.

In 2007, Unger and Gustavson [205] presented an HDR video camera for research purposes (see Figure 2.3). It is capable of capturing high dynamic range content at 512 × 896 resolution, 25 fps, and a dynamic range of 1,000,000:1. The main disadvantage is that the video camera uses three separate CCD sensors, one for each of the three color primaries (RGB), and it has the problem that for rapid scene motion, artifacts such as motion blur may appear. In addition, due to the limitations of the internal antireflex coating in the lens, system flare and glare artifacts can also appear.

Figure 2.3. An example of a frame of the HDR video camera of Unger and Gustavson [205]. (a) A false color image of the frame. (b) A tone mapped version of (a).

In 2009, SpheronVR, in collaboration with the University of Warwick [33], developed an HDR video camera capable of capturing high dynamic range content at 1920 × 1080 resolution, 30-50 fps, and a 20 f-stops dynamic range (see Figure 2.4). The HDR video data stream is initially recorded on an HDD array. A postprocessing engine then transforms it into a sequence of HDR files (typically OpenEXR), taking lens vignetting, spherical distortion, and chromatic aberration into account.

Figure 2.4. An example of a frame of the HDR video camera of SpheronVR. (a) A false color image of the frame. (b) A tone mapped version of (a). (Image courtesy of Jassim Happa and the Visualization Group, WMG, University of Warwick.)

Computer graphics rendering methods are another common way of generating HDR content. Frequently, this can be augmented by photographic methods.

Digital image synthesis is the process of rendering images from virtual scenes composed of formally defined geometric objects, materials, and lighting, all captured from the perspective of a virtual camera. Two main algorithms are usually employed for rendering: ray tracing and rasterization (see Figure 2.5).

Ray tracing. Ray tracing [232] models the geometric properties of light by calculating the interactions of groups of photons, termed rays, with geometry. This technique can reproduce complex visual effects without much modification to the traditional algorithm. Rays are shot from the virtual camera and traverse the scene until the closest object is hit (see Figure 2.6).

Figure 2.6. Ray tracing. For each pixel in the image, a primary ray is shot through the camera into the scene. As soon as it hits a primitive, the lighting for the hit point is evaluated. This is achieved by shooting more rays. For example, a ray towards the light is shot in the evaluation of lighting. A similar process is repeated for reflections, refractions, and interreflections.

Here the material properties of the object at that point are used to calculate the illumination, and a ray is shot towards any light sources to account for shadow visibility. The material properties at the intersection point further dictate whether more rays need to be shot in the environment and in which direction; the process is computed recursively. Due to its recursive nature, ray tracing and extensions of the basic algorithm, such as path tracing and distributed ray tracing, are naturally suited to solving the rendering equation [95], which describes the transport of light within an environment. Ray tracing methods can thus simulate effects such as shadows, reflections, refractions, indirect lighting, subsurface scattering, caustics, motion blur, and others in a straightforward manner.

While ray tracing is computationally expensive, recent algorithmic and hardware advances are making it possible to compute it at interactive rates for dynamic scenes [212].

Rasterization. Rasterization uses a different approach than ray tracing for rendering. The main concept is to project each primitive of the scene onto the screen (frame buffer) and discretize it into fragments, which are then rasterized onto the final image. When a primitive is projected and discretized, visibility has to be solved to obtain a correct visualization and to avoid incorrect overlap between objects. For this task, the Z-buffer [32] is generally used. The Z-buffer is an image of the same size as the frame buffer that stores the depth values of previously solved fragments. For each fragment at a position x, its depth value, F(x)_z, is tested against the stored one in the Z-buffer, Z(x)_z. If F(x)_z < Z(x)_z, the new fragment is written in the frame buffer, and F(x)_z is placed in the Z-buffer. After the depth test, lighting is evaluated for all fragments. However, shadows, reflections, refractions, and interreflections cannot be handled natively with this process since rays are not shot. These effects are often emulated by rendering the scene from different positions. For example, shadows can be emulated by calculating a Z-buffer from the light source position and applying a depth test during shading to determine if the point is in shadow or not. This method is known as shadow mapping [234].
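The depth test at the heart of the Z-buffer is tiny. A sketch in MATLAB, in the spirit of the book's code examples (Fz, fragColor, Z, and frame are our names):

    % Z-buffer depth test for a fragment at pixel (i, j).
    if Fz < Z(i, j)
        Z(i, j) = Fz;                % keep the closer depth
        frame(i, j, :) = fragColor;  % write the shaded fragment
    end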

The main advantage of rasterization is that it is supported by current graphics hardware, which allows high performance in terms of drawn primitives. Such performance is achieved since it is straightforward to parallelize rasterization: fragments are coherent and independent, and data structures are easy to update. Finally, the whole process can be organized into a pipeline. Nevertheless, the emulation of physically based light transport effects (i.e., shadows, reflections/refractions, etc.) is not as accurate as ray tracing and is biased in many cases. For more detail on rasterization, see [10].
