In developing the graphics pipeline, we assumed that each box in the pipeline has a fixed functionality. Consequently, when we wanted to use light-material interactions to determine the colors of an object, we were limited to the modified Phong model because it was the only lighting model supported by the fixed-function pipeline in the OpenGL specification and, until recently, the only model supported by most hardware. In addition to having only one lighting model available, lighting calculations were done only for each vertex. The resulting vertex colors were then interpolated over the primitive by the fixed-function fragment processor. If we wanted to use some other lighting or shading model, we had to resort to an off-line renderer.
Over the past few years, graphics processors have changed dramatically. Both the vertex processor and fragment processor are now user programmable. We can write programs called vertex shaders and fragment shaders to achieve complex visual effects at the same rate as the standard fixed-function pipeline.
In this chapter, we introduce the concept of programmable shaders. First, we review some of the efforts to develop languages to describe shaders. These efforts culminated in the OpenGL Shading Language (GLSL), which is now a standard part of OpenGL. We then use GLSL to develop a variety of vertex shaders that compute vertex properties, including their positions and colors. Finally, we develop fragment shaders that let us program the calculations performed on each fragment and ultimately determine the color of each pixel. Our discussion of fragment shaders will also introduce many new ways of using texture mapping.
9.1 PROGRAMMABLE PIPELINES

When we developed the Phong and modified Phong lighting models in Chapter 6, we placed great emphasis on their efficiency but much less on how well they modeled physical light-material interactions. The modified Phong model is remarkable in that while it has a very loose coupling with physical reality, it yields images that are adequate for most purposes. Hence, as graphics hardware and especially pipeline architectures developed, it was natural that this model was built into the specification.
The modified Phong model is adequate for simulating smooth surfaces, such as plastic or metal, which have an isotropic BRDF; that is, the properties of the material are the same in all directions. Often, however, we need a more physically realistic model that can handle the asymmetric (anisotropic) BRDFs that characterize many real-world materials, including skin, fabrics, and fluids. When we work with translucent materials, we want to incorporate refraction, the bending of light as it passes through materials with different properties. We might also want to account for the fact that how light is bent by refraction is a function of the wavelength of the light.
In other situations, we want nonphotorealistic effects. For example, we might want to simulate the brush strokes of a painter, or we might want to create cartoonlike shading effects. Many of these effects can be achieved only by working with fragments in ways that are not possible with a fixed-function pipeline. For example, in bump mapping, a topic that we consider in Section 9.12, we change the normal for each fragment to give the appearance of a surface with great complexity that appears correct as either the lights or the surface move. Many of these algorithms and effects were developed more than 20 years ago but were only available through non-real-time renderers, such as RenderMan. Recent advances in graphics architectures have altered this situation dramatically.
Consider the pipeline architecture illustrated in Figure 9.1. This figure represents the same architecture we have been discussing since Chapter 1. First, we process vertices. Note that because both the model-view and projection transformations are applied during vertex processing, the representation of vertices in eye coordinates occurs only within the vertex processor. At the end of vertex processing, each vertex has had its location transformed by the model-view and projection matrices, usually has been assigned a color, and, depending on which options are enabled, may have been assigned other attributes. The vertices are then assembled into primitives that are clipped. The potentially visible primitives that are not clipped out are rasterized, generating fragments. Ultimately, the fragments are processed to generate the final display. What has changed is that the vertex processor and the fragment processor are now programmable by application-specific programs called shaders that are compiled and loaded into the graphics processor.
9.2 SHADING LANGUAGES
Before we can develop a programming model for shaders, we need a method to describe lighting models and shaders. An approach that was inspired by the RenderMan shading language is to look at shaders as mathematical expressions in a language that involves variables, constants, and operations among these entities. Such expressions can also be expressed in the form of tree data structures, and these trees can be traversed by a variety of algorithms to evaluate the expressions they represent.
9.2.1 Shade Trees
Consider the original Phong shading model from Chapter 6, without the distance term:

I = k_d L_d (l · n) + k_s L_s (r · v)^α + k_a L_a.¹

When a graphics system evaluates this expression, l, n, and v are known from the OpenGL state, and r is computed by

r = 2(l · n)n - l.
Both of these equations are expressions that involve arithmetic operations, including exponentiation, and vector operations, such as the dot product. We can represent these equations using a tree data structure,² as shown in Figure 9.2. Trees of this type are known as expression trees. Variables and constants appear at the terminal nodes, and all internal nodes represent operations. Because all our operators are binary, the resulting tree is a binary tree in which all internal nodes have exactly two child nodes.
Evaluating an arithmetic expression is equivalent to traversing the corresponding tree, that is, visiting every node and carrying out the requisite mathematical operations at the internal nodes. In this sense, trees and their associated traversal algorithms give us a way of specifying and evaluating shading expressions, such as that for the Phong shader.
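To make the idea concrete, here is a small sketch in C of a scalar expression-tree node and a postorder evaluator; the type and operator names are illustrative only, and vector operations such as the dot product would appear as additional node types:

#include <math.h>
#include <stddef.h>

typedef struct node {
    char op;                   /* '+', '*', or '^' at internal nodes */
    float value;               /* used only at constant or variable leaves */
    struct node *left, *right; /* NULL at leaf nodes */
} node;

/* evaluate the children first, then apply the operator at the root */
float eval(const node *n)
{
    if (n->left == NULL) return n->value; /* leaf node */
    float a = eval(n->left);
    float b = eval(n->right);
    switch (n->op) {
        case '+': return a + b;
        case '*': return a * b;
        case '^': return powf(a, b);
        default:  return 0.0f;
    }
}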
The more interesting application of expression trees is in designing new shaders. Given a set of variables that might be available in a graphics system and operations that are supported by the system, we can form collections of trees; each collection of one or more trees defines a different method for shading a surface. This approach is taken in the RenderMan shading language. Variables, such as normals and light-source parameters, can be combined using a set of scalar and vector operations. From basic data structures, we know that arithmetic expressions and binary trees are equivalent. Thus, we can use standard tree-traversal algorithms for the evaluation of expressions that define shaders. Looking at shade trees slightly differently, we see that shade trees can be replaced by programs that can be executed by graphics processors. The programming approach to developing shaders is supported by high-level languages that we can use to program the graphics pipeline on the latest graphics cards.

FIGURE 9.2 Expression trees. (a) For Phong shading. (b) For reflection vector.

1. The computation actually uses f * max(0, l · n) and max(0, r · v), where the factor f is 1 if l · n is positive and 0 otherwise, to guard against the effects of negative dot products.
2. We consider the use of trees in greater depth in Chapter 10.
9.3 EXTENDING OPENGL
Before getting into the details of programmable shaders, which add a new level of flexibility to the pipeline, we review how OpenGL has evolved to incorporate advances in graphics hardware and software. Graphics APIs, such as OpenGL, developed as a way to provide application programmers with access to hardware features that were being provided by the latest graphics hardware.
As more and more hardware features became available, as processor speeds increased, and as more memory was provided on graphics processors, more complex graphics techniques became possible and even routine. It was only natural that application programmers would expect such techniques to be supported by graphics APIs.
The problem that API developers must confront is how to support such features through an existing API while not forcing programmers to replace existing code. One approach is to make each new version of the API backward compatible so that older code is guaranteed to run on newer versions. Another approach is to add optional extensions to the API that allow application programs to use features that may be available on only some implementations. However, in order to access the new features of programmable graphics processors, a new set of programming tools is required.
9.3.1 OpenGL Versions and Extensions

One of the main features of OpenGL is that the API has been very stable. There were upgrades from OpenGL 1.0 through OpenGL 1.5 that were released over a 10-year period. OpenGL 2.0, which was released in 2004, was a major upgrade but still retains code compatibility with earlier versions. The current release is OpenGL 2.1. Thus, any program written on an older version of OpenGL runs as expected on a later version. Changes to OpenGL reflect advances in hardware that became common to many graphics processors. For example, as hardware support for texture mapping increased, features such as texture objects, mipmapping, and three-dimensional textures were added to the early versions of OpenGL.
Whereas OpenGL versions represent a consensus of many users and hardware providers, a given graphics card or high-end workstation is likely to support features that are not generally available on other hardware. Such is especially the case for the newest hardware. Programmers who have access to a particular piece of hardware want to be able to access its features through OpenGL functions, even though these features are not general enough to be supported by the latest version of the API. One solution to this dilemma is to have an extension mechanism within OpenGL. Individual hardware providers can provide access to (or expose) hardware features through new OpenGL functions that may work only on a particular manufacturer's hardware. OpenGL extensions have names that identify the provider. For example, an extension with a name such as glCommandNV identifies it as provided by NVIDIA. Other manufacturers could then implement the same extension. Extensions that have been widely used are often adopted by other vendors and may eventually be approved by the OpenGL Architecture Review Board (ARB)³ and become part of a later version of the API.

3. In 2006, the ARB was replaced by the OpenGL Working Group under the Khronos Group. See www.opengl.org.
The first programmable graphics processors were programmed in assembly-like languages whose instructions moved data between registers and memory. The OpenGL extensions provided a mechanism for loading this assembly-like code into a programmable pipeline. Although this mechanism allowed users to work with programmable shaders, it had all the faults of an assembly-language program. As GPUs became more sophisticated, it became increasingly more difficult to program shaders using assembly language. In a manner similar to the development of standard programming languages, higher-level programming interfaces and compilers for shader code have now replaced assembly-language programming for most users.
9.3.2 GLSL and Cg
The two high-level shading languages of most interest to us are the OpenGL Shading Language (GLSL) and Cg, which is an acronym for C for Graphics. They are similar but have different targeted users. Both are based on the C programming language and include most of its programming constructs but add language features and data types that make it easier to program shaders.

The main differences arise because Cg is designed to support shaders that are portable across multiple platforms, including OpenGL and Microsoft's DirectX. Cg is virtually identical to Microsoft's High Level Shading Language (HLSL) and thus has the advantage for Windows developers that it allows them to develop shaders for both DirectX and OpenGL at the same time. However, the interface between OpenGL and Cg shaders is more sophisticated than the interface between OpenGL and GLSL. GLSL is part of OpenGL 2.0 and thus is supported by multiple vendors and on multiple platforms. Because GLSL is part of OpenGL, it is simpler to develop OpenGL shaders with GLSL than with Cg. Hence, we will focus on GLSL, understanding that the two approaches have far more similarities than differences.
9.4 THE OPENGL SHADING LANGUAGE
The OpenGL Shading Language is a C-based language that we can use to write both vertex and fragment shaders. It is incorporated into OpenGL 2.0. In GLSL, a vertex program is syntactically the same as a fragment program, although the two are used in different contexts. Before we examine the GLSL language, first we examine the different tasks that these shaders must perform.
9.4.1 Vertex Shaders
A vertex program, or vertex shader, replaces the fixed-function operations performed by the vertex processor with operations defined in the shader. If a vertex shader is not provided by the application, a programmable vertex processor carries out the standard operations of the OpenGL fixed-function vertex processor. A vertex shader is executed on each vertex as it passes down the pipeline. Every vertex shader must output the information that the rasterizer needs to do its job. At a minimum, every vertex shader must output a vertex position for the rasterizer. For each vertex, the vertex program can use the vertex position defined by the application program and most of the information that is in the OpenGL state, including the current color, texture coordinates, material properties, and transformation matrices. In addition, the application program can pass other application-specific information on a per-vertex basis to the vertex shader.
There are a few operations that virtually every vertex program must carry out. Recall that most application programs specify vertex positions in object space, and the vertex processor transforms these positions first by the model-view matrix into eye coordinates and then by the projection matrix into clip coordinates. Because an application-supplied vertex shader replaces the fixed-function vertex operations, one of the jobs that almost all vertex programs must carry out is to transform the input vertex position from object coordinates to clip coordinates. Because the vertex program can access the OpenGL state, it has access to the standard transformation matrices, or it can compute its own transformations.
Here is a simple, but complete, vertex program:

/* pass-through vertex shader */

void main(void)
{
    gl_Position = gl_ProjectionMatrix*(gl_ModelViewMatrix*gl_Vertex);
}
This shader simply takes each vertex's position (gl_Vertex) in object coordinates and multiplies it first by the model-view matrix and then by the projection matrix to obtain the position (gl_Position) in clip coordinates. The four variables in the shader are all part of the OpenGL state and thus do not have to be declared in the shader. Each execution of glVertex in an application triggers the execution of the shader. This shader is so simple that it does not even set a color or any other vertex attribute, leaving such matters to the fragment processor. Because this shader does nothing but send on the position of the vertex, it is sometimes called a pass-through shader.

A slightly more complex version that also assigns a red color to each vertex is as follows:
/* simple vertex shader */

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0); /* C++-style constructor */

void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
    gl_FrontColor = red;
}

This program does two things. It transforms the input vertex position (gl_Vertex) by the concatenation of the projection and model-view matrices to a new position in clip coordinates (gl_Position), and it colors each vertex red.⁴ Because the conversion of a vertex's position from object to clip coordinates is so common, GLSL provides the precomputed product of the projection and model-view matrices through the built-in variable gl_ModelViewProjectionMatrix. Note that names that begin with gl_ refer to variables that are part of the OpenGL state. Suppose that the application program that uses this vertex shader contains code such as the following:
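/* for example, a single triangle; each glVertex call triggers the shader */
glBegin(GL_TRIANGLES);
   glVertex3fv(v0);
   glVertex3fv(v1);
   glVertex3fv(v2);
glEnd();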
where the vertices v0, v1, and v2 have been defined previously in the application program. Each execution of glVertex invokes our shader with a new value of gl_Vertex, which is the internal four-dimensional representation of the vertex given to glVertex. Each execution of the vertex program outputs a color (gl_FrontColor) and a new vertex position (gl_Position) that are passed on for primitive assembly.
This process is illustrated in Figure 9.3. In this shader, we could have computed a color so that each vertex could be assigned a different color. We also could have set the color in the application for each vertex using glColor and had the shader pass these colors on to the rasterizer by using the code

gl_FrontColor = gl_Color;

in the shader. Note that a vertex shader has a main function and can call other functions written in GLSL. The same is true for fragment shaders.
In this shader, the model-view and projection matrices are of type mat4, whereas the vertex position is a vec4 data type. The multiplication operator * is overloaded so that matrix-vector multiplications are defined as we would expect. Hence, the code in our simple vertex shader

gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;

yields the final position, which is a vec4 data type.

FIGURE 9.3 Vertex shader architecture.

4. We are assuming that two-sided lighting has not been enabled. If we need two-sided lighting because back faces are visible and materials might have different front and back properties, then our shader should also compute gl_BackColor.
Until now, we have used the terms column matrix and row matrix rather than vector so as not to confuse the geometric type with its representation using row and column matrices. GLSL defines a vector data type that is a one-dimensional C-style array and reserves the matrix data type for square matrices, which are two-dimensional C-style arrays. We will use the GLSL terminology in this chapter.
In our example, we used the built-in variables gl_FrontColor and gl_Position for the output of the vertex program. The front color is interpolated by the rasterizer, and the interpolated colors are available to either a fragment program or the fixed-function fragment processor. The position for each vertex is used by the rasterizer to assign a position for each fragment it produces. Note that the red color has a const qualifier and will be the same for each invocation of the program. In general, the output position and color can change with each invocation of the vertex program.
9.4.2 Fragment Shaders

Fragment shaders written in GLSL have the same syntax as vertex programs. However, fragment programs are executed after the rasterizer and thus operate on each fragment rather than on each vertex.
Each vertex output by the vertex processor passes to the primitive assembly and clipping stages. If the vertex is not eliminated by clipping, it goes on to primitive assembly and then to the rasterizer, which generates fragments that are then processed by the fragment processor using either fixed-function fragment processing or an application-defined fragment program, or fragment shader. Vertex attributes, such as vertex colors and positions, are interpolated by the rasterizer across a primitive to generate the corresponding fragment attributes.
FIGURE 9.4 Fragment shader architecture.
In the simplest case, the fragment processor uses these attributes without modification. A minimal fragment program is as follows:

/* pass-through fragment shader */

void main(void)
{
    gl_FragColor = gl_Color;
}
As trivial as this program appears, there is a major difference between how it and a vertex program function. The values of gl_Color are not the values produced by the vertex program. Rather, the values of gl_Color in the fragment program have been produced by the rasterizer interpolating the vertex values of gl_FrontColor (and gl_BackColor if two-sided lighting is enabled) from the vertex processor over the primitive. Thus, unless all the vertex colors are identical, each time the fragment program executes, it uses a different value of gl_Color. The fragment color produced by the fragment program is then used to modify the color of a pixel in the frame buffer. This process is shown in Figure 9.4. We now present the main features of GLSL and then develop some more sophisticated shaders.
The OpenGL Shading Language is based on the C programming language. GLSL has similar naming conventions, data types, and control structures. However, because vertex and fragment shaders execute in a very different environment than normal programs, including the OpenGL application that invokes them, there are some significant differences from C.
Recall that in an OpenGL application, state changes by themselves do not cause anything to flow down the pipeline that can change the display. However, once we execute any glVertex function (or any OpenGL function that generates vertices) the pipeline goes into action. With the fixed-function pipeline, various calculations are made in the pipeline to determine if the primitive to which the vertex belongs is visible and, if it is, to color the corresponding pixels in the frame buffer. These calculations generally do not change the OpenGL state. Subsequent executions of glVertex invoke the same calculations but use updated values of state variables if any state changes have been made in between the calls to glVertex.
If we think about the code that would implement the vertex processor, we see that there must be multiple types of variables involved in the execution of this code. Some variables will change as the output vertex attributes, such as the vertex color, are computed. Others, whose values are determined by the OpenGL state, cannot be changed by the calculation. Most internal variables must be reset to their original values after each vertex is processed so that the calculation for the next vertex will be identical. Other variables must be computed and passed on to the next stage of the pipeline. Consequently, when we substitute a user-written vertex program for the fixed-function vertex processor, we must be able to identify these different types of variables.
A fragment program also has input variables, internal variables, and output variables. The major difference between a fragment program and a vertex program is that a fragment program will be executed for each fragment. Some values will be provided by the OpenGL state and thus cannot be changed in the shader. Others will change only on a fragment-by-fragment basis. Most shader variables must be initialized for each fragment.
A typical application not only will change state variables and entities such as texture maps but also may use multiple vertex and fragment shaders. Hence, we also must examine how to load shaders, how to link them into a GLSL program, and how to set this GLSL program to be the one that OpenGL uses. This process is not part of the GLSL language but is accomplished using a set of functions that are now part of the OpenGL core.
Because each vertex triggers an execution of the current vertex shader independent of any other vertex and, likewise, each fragment triggers the execution of the current fragment shader, there is the potential for extensive parallelism to speed the processing of both vertices and fragments. High-end graphics processors now contain multiple shader cores that can execute vertex and fragment shaders in parallel.
GLSL has basic data types that are similar to C and C++. The scalar types, however, are presently limited to a single floating-point type, float, a single integer type, int, and a Boolean type, bool. Arrays and structures are declared in the standard C manner.
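For example, using illustrative names:

float heights[10];  /* one-dimensional array */

struct light {      /* C-style structure */
    vec3 position;
    vec4 color;
};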
GLSL introduces new vector and matrix types for working with the 2 × 2, 3 × 3, and 4 × 4 matrices that we use in computer graphics. Vectors are special one-dimensional arrays. We can use floating-point (vec2, vec3, vec4), integer (ivec2, ivec3, ivec4), or Boolean (bvec2, bvec3, bvec4) types. Because the vector types can be used to store vertex positions, colors, or texture coordinates, the elements can be indexed by position (x, y, z, w), color (r, g, b, a), or texture coordinates (s, t, p, q). Thus, if c is a vec4, then c.x, c.r, and c.s all refer to the first component of c. Note that this flexibility is there only to create more readable code and carries no semantic information. Hence, there is no reason that c.y cannot contain color or texture information.
Presently, matrices in GLSL are always square and floating-point (mat2, mat3, mat4). Standard C/C++ referencing applies, with the proviso that GLSL matrices are organized by columns: if m is a matrix, m[1] is its second column, and m[1][2] is the element in column 2, row 3. GLSL supports standard C one-dimensional arrays; multidimensional-array functionality can be obtained using C-type structures in GLSL.
GLSL uses C++-style constructors to initialize the vector and matrix types, as in the following example:

vec3 a = vec3(1.0, -2.0, 5.0);

Constructors also can be used for conversion between types, as in

vec2 b = vec2(a);

which uses the first two components of the vec3 variable a to form b.
Variables can be qualified in different ways. Ordinary (nonlocal) variables, those that are not function parameters or function temporaries, can be qualified as attribute, uniform, varying, or const. The const qualifier is similar to but more restrictive than in C and makes the variable unchangeable by the shader. Thus, we can create a constant scalar and vector as follows:

const float one = 1.0;
const vec3 origin = vec3(0.0, 0.0, 0.0);
Attribute-qualified variables are used by vertex shaders for variables that change at most once per vertex. There are two types of attribute-qualified variables. The first type includes the OpenGL state variables that we associate with a vertex, such as its color, position, texture coordinates, and normal. These are known as built-in variables. In our simple example, the state variables gl_Vertex and gl_Color are built-in attribute-qualified variables. Built-in variables need not be declared in shaders. GLSL allows additional vertex attributes to be defined in the application program so that it can convey other information on a per-vertex basis to the shader. For example, in a scientific visualization application, we might want to associate a scalar temperature or a flow-velocity vector with each vertex. Only floating-point types can be attribute qualified. User-defined attribute-qualified variables are declared as in the following code:

attribute float temperature;
Because they vary on a vertex-by-vertex basis, vertex attributes cannot be declared in a fragment shader. Attribute variables are aligned with variables in the application program through OpenGL functions that we will explain in Section 9.6. When a vertex shader is executed, the values of the attribute variables are those that were set in the application and cannot be changed in the shader.
Uniform qualifiers are used for variables that are set in the application program for an entire batch of primitives; that is, variables whose values are assigned outside the scope of a glBegin and a glEnd. Uniform variables provide a mechanism for sharing data among an application program, vertex shaders, and fragment shaders. Like attribute variables, uniform variables cannot be changed in a shader. Hence, we use uniform variables to pass information that is constant over a batch of primitives into shaders. For example, we might want to compute the bounding box of a set of vertices that define a primitive in the application and send this information to a shader to simplify its calculations.
Although naming a variable as a varying variable may seem a bit strange at first, varying-qualified variables provide the mechanism for conveying data from a vertex shader to a fragment shader. These variables are defined on a per-vertex basis but are interpolated over the primitive by the rasterizer. As with attribute-qualified variables, varying variables can be either built-in or user defined.
Consider, for example, how colors are determined in the fixed-function pipeline. The vertex processor computes a color or shade for each vertex. The color of a fragment is determined by interpolating the vertex colors. Likewise, texture coordinates for each fragment are determined by interpolating texture coordinates at the vertices. Both vertex colors and texture coordinates are varying variables. In our simple vertex shader, gl_FrontColor is a built-in varying variable. If we created a more complex vertex shader that computed a different color for each vertex, using the varying-qualified variable gl_FrontColor would ensure that there is an interpolated color for each fragment, whether or not we write our own fragment program.
User-defined varying variables that are set in the vertex program are automatically interpolated by the rasterizer. It does not make sense to define a varying variable in a vertex shader and not use it in a fragment shader. Hence, a user-defined varying variable should appear in both the vertex and fragment shaders. Consequently, if we define such a variable in the vertex shader, we should write a corresponding fragment shader. Thus, while our simple vertex shader did not need the simple fragment shader, the functionally equivalent vertex shader using a varying variable is as follows:

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);

varying vec4 color_out; /* varying variable */
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
    color_out = red;
}
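A corresponding fragment shader can then be as simple as the following sketch, which sets each fragment's color from the interpolated varying variable:

varying vec4 color_out;

void main(void)
{
    gl_FragColor = color_out;
}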
9.5.3 Operators and Functions
For the most part, the operators are as in C, with the same precedence rules. However, because the internal format of the float and integer types is not specified by GLSL, bit operations are not allowed.⁵ The arithmetic operators are overloaded so that matrix-vector operations can be used as we would expect. For example, the code

mat4 a;
vec4 b, c, d;

c = b*a;
d = a*b;

makes sense. In the first case, GLSL computes c treating b as a row matrix, whereas d is computed treating b as a column matrix. Hence, although c and d are of the same type, they will have different values.
GLSL has a swizzling operator that is a variant of the C selection operator (.). Swizzling allows us to select multiple components from the vector types. We can use swizzling and write masking to select and rearrange elements of vectors. For example, we can change selected elements as in

vec4 a = vec4(1.0, 2.0, 3.0, 1.0);

a.x = 2.0;
a.yz = vec2(-1.0, 4.0);
5. Bit operations are supported through extensions on many GPUs.
or swap elements, as in the following code:
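a.xy = a.yx; /* swap the x and y components of a */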
Note that we can use any of the formats (x, y, z, w; r, g, b, a; s, t, p, q) as long as we do not mix them in a single selection.
GLSL has many built-in functions, including the trigonometric functions (sin, cos, tan), inverse trigonometric functions (asin, acos, atan), and mathematical functions (pow, log2, sqrt, abs, max, min). Of particular importance are functions that help with the geometric calculations involving vectors that are required in computer graphics; these include a dot product function, a normalize function, and a reflect function. Most of these functions are overloaded so that they work with both floats and vectors. We will see examples of these and other built-in functions in the examples. We will delay our discussion of the texture functions until Section 9.10.
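For example, assuming that L and N are unit-length light and normal vectors, the following lines use a few of these functions:

float d = max(dot(L, N), 0.0); /* diffuse factor */
vec3 r = reflect(-L, N);       /* equivalent to r = 2(l · n)n - l */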
Variables that are function parameters necessitate a few additional options. Because of how vertex and fragment shaders are executed and the lack of pointers, GLSL uses a mechanism known as call by value-return. Function parameters are qualified as in (the default), out, or inout. GLSL functions have return types, and returned variables are copied back to the calling function. Input parameters are copied from the calling program. If a variable is qualified as in, it is copied in but is not copied out, even though its value may be changed within the function. A function parameter that is qualified as out is undefined on entry to the function but can be set in the function and is copied back to the calling function. A parameter that is inout-qualified is copied in and also copied out.
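As an illustration (the function and its names are hypothetical), the following function scales a vector in place through an inout parameter and returns its largest component through an out parameter:

void scaleAndMax(inout vec3 v, in float s, out float vmax)
{
    v = s*v;                        /* copied in, modified, copied back */
    vmax = max(v.x, max(v.y, v.z)); /* undefined on entry, copied out */
}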
Note that there are presently no pointers in GLSL. Because vectors and matrices are basic types, they can be copied into and copied from functions.
9.6 LINKING SHADERS WITH OPENGL PROGRAMS
With GLSL, we can write shaders independent of any OpenGL application. Various development environments exist that let users write and test shaders. However, at some point, we have to link the shaders with the OpenGL application. There is a set of OpenGL functions that deals with how to create vertex and fragment shader objects, link them with an OpenGL application, and enable the passing of values between the shaders and the application. The following steps are required in an application:
1. Read the shader source.
2. Create a program object.
3. Create shader objects.
4. Attach the shader objects to the program object.
5. Compile the shaders.
6. Link everything together.
7. Select the current program object.
Step 1 is to read each shader's source from a file into a null-terminated string. A function along the following lines can be used; this sketch omits most error checking:
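static GLchar* readShaderSource(const GLchar* shaderFile)
{
    FILE* fp = fopen(shaderFile, "r");
    long size;
    GLchar* buf;

    if (fp == NULL) return NULL;
    fseek(fp, 0L, SEEK_END);  /* get size of file */
    size = ftell(fp);
    fseek(fp, 0L, SEEK_SET);  /* go to beginning of file */
    buf = (GLchar*) malloc(size + 1);
    fread(buf, 1, size, fp);
    buf[size] = '\0';         /* null terminate the string */
    fclose(fp);
    return buf;
}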
Suppose that our two shader files are given as follows:

GLchar vShaderFile[] = "myVertexShader.glsl";
GLchar fShaderFile[] = "myFragmentShader.glsl";
We can read the shaders and check that the files exist as follows:

vSource = readShaderSource(vShaderFile);
if (vSource == NULL)
{
    /* handle the error; for example, report it and exit */
    fprintf(stderr, "Failed to read vertex shader\n");
    exit(EXIT_FAILURE);
}
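Next (step 2), we create an empty program object, which returns an identifier that we use in the subsequent calls:

GLuint myProgObj;

myProgObj = glCreateProgram();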
The program object is a container that can hold multiple shaders and other GLSL functions. We create our shader objects in a similar way and then attach them to the program object (steps 3 and 4):
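GLuint vShader, fShader;

vShader = glCreateShader(GL_VERTEX_SHADER);
fShader = glCreateShader(GL_FRAGMENT_SHADER);

glAttachShader(myProgObj, vShader);
glAttachShader(myProgObj, fShader);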
Note that at this point the actual shader code has not been associated with the program object. We create this association as follows:
glShaderSource(vShader, 1, (const GLchar**) &vSource, NULL);
glShaderSource(fShader, 1, (const GLchar**) &fSource, NULL);
The first parameter is the name of the shader object into which the source will be loaded. The second parameter is the number of string buffers from which the shader source will be loaded; in our example, we have read the source into a single buffer with our readShaderSource function. The third parameter is a pointer to an array of pointers to the string buffers, which for our example is simply a pointer to the address of our single buffer. The NULL value of the fourth parameter indicates that the strings are null terminated; alternately, it could point to an array of buffer-length values. We can now compile the shaders as follows:
glCompileShader(vShader);
glCompileShader(fShader);
At this point, we want to make sure that there are no errors in our shader code and that it compiled without errors. We can check the status with a helper function such as the following:
static void checkError(GLint status, const char *msg)
{
    if (status != GL_TRUE)
    {
        /* report the error; how to recover is application dependent */
        printf("%s\n", msg);
        exit(EXIT_FAILURE);
    }
}

We retrieve the compile status of each shader with glGetShaderiv and then test it:

GLint status;

glGetShaderiv(vShader, GL_COMPILE_STATUS, &status);
checkError(status, "Failed to compile the vertex shader.");
glGetShaderiv(fShader, GL_COMPILE_STATUS, &status);
checkError(status, "Failed to compile the fragment shader.");
Now we can link everything together:

glLinkProgram(myProgObj);

and check the link status in the same manner:

glGetProgramiv(myProgObj, GL_LINK_STATUS, &status);
checkError(status, "Failed to link the shader program object.");

Because we can create multiple program objects, we must identify which one to use. For the program object that we just created, we execute the function

glUseProgram(myProgObj);
The error checking we used for compiling and linking does not give the information we need if there are any errors.⁶ We can get more detail by using the functions glGetShaderInfoLog for shader objects and glGetProgramInfoLog for the program object. The mechanism is similar for shader and program objects: we use glGetShaderiv or glGetProgramiv to determine if there has been an error and the length of the error-message string, and we can then get the error strings using glGetShaderInfoLog and glGetProgramInfoLog. The sample program in Appendix A uses these functions.

6. For general OpenGL functions, we can query for errors with the function glGetError and check for the returned status GL_NO_ERROR.
The next step is associating variables in the shaders with variables in the application program. Vertex attributes in the shader are indexed in the main program through tables that are set up during linking. We obtain the needed indices through the function glGetAttribLocation and can then set values to be passed to the shader using the various forms of the function glVertexAttrib. For example, suppose that the vertex shader uses an attribute variable myColor. We get its index as follows:
GLuint colorAttr;

colorAttr = glGetAttribLocation(myProgObj, "myColor");
Later, we can use its value in the OpenGL program as

GLfloat color[4];

glVertexAttrib4fv(colorAttr, color);

which will set the value of myColor in the vertex shader.
A similar process holds for uniform variables. Suppose that we compute a uniform variable angle in the application and want to send it to the shader. We get an index

GLint angleParam;

angleParam = glGetUniformLocation(myProgObj, "angle");
and later can compute a value in the application program

GLfloat my_angle;

my_angle = 5.0; /* or some other value */

and send it to the shader by

glUniform1f(angleParam, my_angle);
The function glUniform comes in multiple forms, each identified by a suffix for the type and dimension, and can be used to set scalar and vector values. For example, the code

GLfloat red[4] = {1.0, 0.0, 0.0, 1.0};
GLint colorParam;

colorParam = glGetUniformLocation(myProgObj, "redColor");
glUniform4fv(colorParam, 1, red);

sets the uniform shader variable redColor. The various forms of the function glUniformMatrix are used to set uniform matrices in shaders.
We can query the value of a uniform variable from a shader through the various forms of glGetUniform. Because attribute and uniform variables cannot be changed in the shader, we rarely need these functions and will not discuss them further.
Although there is a fair number of OpenGL functions here, most of them are used only during initialization, and their usage does not change much from application to application. If we look at what we have to do in a typical application program, there are two basic parts. First, we set up various shaders and link them to program objects; this part can be put in an initialization function in the OpenGL program. Second, during the execution of the OpenGL program, we either get values from shaders or send values to shaders. Usually, we do these operations where we would normally set values for colors, texture coordinates, normals, and other program variables in an application that uses the fixed-function pipeline.

Appendix A contains a full program with a basic vertex shader and a basic fragment shader. Next, we focus on what we can do with vertex shaders.

9.7 MOVING VERTICES
We now develop some examples of vertex shaders. The first examples involve moving vertices, something that is part of many animation strategies. In each case, we must generate an initial vertex position in the OpenGL program because the execution of the function glVertex initiates the vertex shader. In these first examples, the work done by the vertex shaders could have been done in your application program. However, we often want to do this work in the vertex shader because it frees up the CPU for other tasks. The second set of examples involves determining vertex colors. We will see that by writing our own vertex shaders we gain more control over the vertex processing and transfer much of the computation from the CPU to the GPU.
9.7.1 Scaling Vertex Positions
One of the simplest examples of a vertex shader is a program that changes the location of each vertex that triggers the shader. However, we must remember that once we use a vertex program of our own, we must do all the functions of the fixed-function vertex processor. In particular, we are responsible for converting vertex locations from object coordinates to clip coordinates. In our simplest examples, we can do this operation by using the model-view and projection matrices that are part of the OpenGL state. Alternately, we could compute the clip-space position of each vertex in the shader without using the built-in matrices.

In our first example, we scale each vertex so that the object appears to expand and contract. The scale factor varies sinusoidally, based on a time parameter that is passed in as a uniform variable from the OpenGL program, as in the following shader (the exact form of the scale factor is one possibility):

uniform float time; /* value provided by application program */

void main()
{
    float s;

    s = 1.0 + 0.5*sin(time); /* sinusoidal scale factor */
    vec4 t = gl_Vertex;
    t.xyz = s*t.xyz;
    gl_Position = gl_ModelViewProjectionMatrix*t;
}
Note that we use the product of the model-view and projection matrices from the OpenGL state and that we apply the scale factor only to the first three components of the vector (see Exercise 9.19). By applying the scaling to gl_Vertex before the vertex is transformed by the projection and model-view matrices, the scaling is done in object coordinates rather than in clip coordinates. In the application program, we set up the time variable as follows:
GLint timeParam;

timeParam = glGetUniformLocation(myProgObj, "time");

The idle callback can either increment the time or use the elapsed time, sending the value to the shader with glUniform1f(timeParam, time).
Note that elapsed time is in milliseconds, so we probably want to scale it in the shader by changing the scale factor. For example, if we multiply time by 0.001 in the argument of the sine function, as in sin(0.001*time), time is converted to seconds.
We can make this example somewhat more interesting if we let the variation depend on the position of each vertex. Suppose that we start with a height field in which the y value of each vertex is the height and we change only this value. In Section 5.7, we displayed such data with a display callback or idle callback using code of the following form (only one of the vertex calls of each cell is shown):

for (i = 0; i < N; i++)
    for (j = 0; j < N; j++)
    {
        /* ... other vertices of the cell ... */
        glVertex3f((float) i/N, data[i][j], (float) (j+1)/N);
    }
glEnd();
where the height data is in the array data and x and z are defined over the range (0, 1). We can vary the heights with a vertex shader along the following lines (the exact form of the variation is one possibility):

uniform float time;
uniform float xs; /* x frequency */
uniform float zs; /* z frequency */
uniform float h;  /* height scale */

void main()
{
    vec4 t = gl_Vertex;

    /* vary only the height (y) of each vertex */
    t.y = gl_Vertex.y*(1.0 + h*sin(time + xs*gl_Vertex.x)*sin(time + zs*gl_Vertex.z));
    gl_Position = gl_ModelViewProjectionMatrix*t;
}
9.7.2 Morphing

In key-frame animation, the application specifies an object by a sequence of positions, thus defining the key frames. We then interpolate between successive frames or positions to get the in-between frames or positions.
One variant of this idea is morphing, a technique in which we smoothly change one object into another. One way to accomplish this change is to have the set of vertices that define one object change their locations (and other attributes) to those of the other object. Let's look at a simple example where we have the same number of vertices in two arrays.

Suppose that we have two sets of vertices. The first specifies object A and the second specifies object B as polygons. If we want to morph object A into object B, we assume that they have the same number of vertices and that corresponding vertices are matched. Thus, vertex k in the first array of vertices should morph into vertex k in the second array. In general, these sets are formed by the animator with the aid of software that can create extra vertices to ensure that the two sets are of the same size with matching vertices. Figure 9.5 shows a two-dimensional polygon morphing into another two-dimensional polygon.
The vertex shader needs to output a single vertex that is constructed by interpolating between two vertices provided by the OpenGL program. The main problem is that we can pass in only a single vertex through the built-in attribute variable gl_Vertex. However, we can pass in the corresponding vertex of the second object using an application-defined attribute variable, and a uniform blending parameter can determine how much of each vertex's location we should use in the interpolation.
FIGURE 9.5 Morphing. (a) Object A. (b) Object A 1/3 morphed to object B. (c) Object A 2/3 morphed to object B. (d) Object B.
Here is a vertex shader for morphing:

attribute vec4 vertices2;
uniform float blend;

void main()
{
    /* blend the two vertex positions and transform the result */
    gl_Position = gl_ModelViewProjectionMatrix
        *mix(gl_Vertex, vertices2, blend);
}
The GLSL function mix forms the affine combination of the two vertices; the output position is (1.0 - blend)*gl_Vertex + blend*vertices2. In the application program, we can define the required vertex attribute vertices2 and uniform parameter blend as follows:
GLuint blendParam, vertices2Param;

blendParam = glGetUniformLocation(program, "blend");
vertices2Param = glGetAttribLocation(program, "vertices2");
Within the display callback, we should see code something like the following:

#define N 50 /* number of vertices */

GLfloat vertices_two[N][3], vertices[N][3];
GLfloat blend;

void mydisplay()
{
    int i;

    blend = ...; /* set value of blend */
    glUniform1f(blendParam, blend); /* uniforms are set outside glBegin/glEnd */

    glBegin(GL_TRIANGLES);
    for (i = 0; i < N; i++)
    {
        /* set the second position before the vertex that triggers the shader */
        glVertexAttrib3fv(vertices2Param, vertices_two[i]);
        glVertex3fv(vertices[i]);
    }
    glEnd();
}

Typically, the value of blend is updated in the idle callback so that the morph changes with time.
9.7.3 Particle Systems

Vertex shaders work well for particle systems. One reason is that we can have the shader do much of the work involved in positioning each particle much faster than the CPU can. Because vertex shaders cannot create new vertices, the OpenGL program, at a minimum, must generate each particle as a vertex so as to trigger the vertex shader. A particularly simple example, but one that can be extended easily (Exercise 9.2), is to generate particles that are subject only to the force of gravity.

Consider an ideal point subject to Newton's laws. Suppose that it has an initial position (x_0, y_0, z_0) and an initial velocity (v_x, v_y, v_z). If we ignore effects such as friction, then using the gravitational constant g, its position at time t is given by

x(t) = x_0 + v_x t,
y(t) = y_0 + v_y t + (g/2) t²,
z(t) = z_0 + v_z t.
Thus, we can pass the time and the gravitational constant to the shader as uniform variables, each particle's initial velocity as a vertex attribute, and each particle's initial position through gl_Vertex. Each time that the vertex program is executed, it computes the present position of the particle. Because the time variable is updated by the application each time through the display callback, the vertex shader computes new vertex positions. Here is a simple vertex shader for such a particle system:
attribute vec3 vel;
uniform float time, g;

void main()
{
    vec4 temp_position = gl_Vertex;

    temp_position.x = temp_position.x + vel.x*time;
    temp_position.y = temp_position.y + vel.y*time + g/(2.0)*time*time;
    temp_position.z = temp_position.z + vel.z*time;

    gl_Position = gl_ModelViewProjectionMatrix*temp_position;
}
Here we use an attribute-qualified variable to pass per-particle information to the shader, because we may use a display callback that generates all the particles within a single glBegin(GL_POINTS) and glEnd() and each particle may have a different velocity. The time and gravity are assumed to be the same for all particles and are therefore uniform variables.
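On the application side, the corresponding setup might look like the following sketch; the parameter names match the shader above, and using GLUT's elapsed time is one possibility:

GLint timeParam, gParam;

timeParam = glGetUniformLocation(myProgObj, "time");
gParam = glGetUniformLocation(myProgObj, "g");
glUniform1f(gParam, -9.8); /* gravity acting in the negative y direction */

/* in the idle callback */
glUniform1f(timeParam, 0.001*glutGet(GLUT_ELAPSED_TIME)); /* seconds */
glutPostRedisplay();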
We can extend this example in many ways. One simple application that we can create with this shader is to simulate fireworks by having each vertex render as a point and sending each vertex to the shader with the same initial position but a slightly different initial velocity for each particle. In addition, we can change the color as time progresses and even make particles disappear (see Exercise 9.3).

If the vertices are part of a more complex object than a point, the entire object is now subject to gravity. One interesting variant is to create a bouncing effect (see Exercise 9.5).
9.8 VERTEX LIGHTING WITH SHADERS
Perhaps the most important use of vertex shaders is to generate lighting models other than the fixed-function modified Phong model. We start by writing the Phong and modified Phong (Blinn-Phong) lighting models as vertex shaders. We do so primarily to introduce the structure and functions used to compute lighting with a vertex shader. Then we will examine alternate lighting methods.
9.8.1 Phong Lighting
The modified Phong lighting model that we developed in Chapter 6 can be written for a front-facing surface, without the distance terms, as

I = k_d L_d (l · n) + k_s L_s (n · h)^α + k_a L_a.
Recall that this expression is for any component of an RGB color. Hence, all the constants for the diffuse, specular, and ambient reflection coefficients are arrays of three or four elements, as are the corresponding components for the light sources.

When we write shaders that determine the color of a vertex, we must be careful about the coordinate frames involved. As in any vertex shader, we must transform vertex locations from object coordinates to clip coordinates. However, we also work with normals, lights, and texture coordinates that may be provided in different coordinate systems. For example, the light-source position is automatically transformed by the model-view matrix and is in eye coordinates when we access it in a vertex shader from the OpenGL state. Consequently, we often have choices as to which frame we wish to do our calculations in. Once we choose the working frame, we have to bring all the required data into this frame. For example, if we decide to work in eye coordinates, in addition to transforming gl_Vertex by gl_ModelViewProjectionMatrix to obtain gl_Position for the fragment shader, we also have to transform gl_Vertex by just the model-view matrix, gl_ModelViewMatrix, to obtain the vertex location in eye coordinates.
Let's examine the basics piece by piece. First, we must convert the vertex location to clip coordinates for use by the fragment shader:

gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;
Next, we compute all the necessary vectors in eye coordinates. The object-space normal vector, gl_Normal, is also a built-in attribute variable, but we must ensure that it has been transformed into eye coordinates and normalized to unit length. Recall from Chapter 6 that because we must preserve the angle between the normal and the light source, we must transform the normal in object coordinates by the inverse transpose of the 3 × 3, upper-left part of the model-view matrix. This matrix is called the normal matrix and is provided by the built-in uniform variable gl_NormalMatrix. We must also ensure that the resulting vector has unit length. We can accomplish both tasks using the following single line of code:

vec3 N = normalize(gl_NormalMatrix*gl_Normal);
We will also need the representation of the vertex in eye coordinates:

vec4 eyePosition = gl_ModelViewMatrix*gl_Vertex;
All the light parameters for light source i are available as built-in uniform variables through the built-in structure gl_LightSource[i]. We need the light position to compute the light-source vector:

vec4 eyeLightPos = gl_LightSource[0].position;
vec3 L = normalize(eyeLightPos.xyz - eyePosition.xyz);

In eye coordinates, the vector to the eye, which is at the origin in the eye frame, is as follows:

vec3 E = -normalize(eyePosition.xyz);

The halfway vector is simply as follows:

vec3 H = normalize(L + E);
the Phong or modified Phong model Thematerial values for the front face arein
thebuilt-in structure gl_FrontMaterial Neglectingthedistance term,the diffuse
component of the colorisgiven as follows:
vec4 diffuse =max(dot(L, N),0 0)*gl_FrontMaterial.diffuse
*gl_LightSource[0].diffuse;
GLSL has a built-in variable gl_FrontLightProduct for the terms that are the products of the front material properties and the light properties for each source, which enables us to write the diffuse term as follows:

vec4 diffuse = max(dot(L, N), 0.0)*gl_FrontLightProduct[0].diffuse;
Using the same structure, the ambient color is given as follows:

vec4 ambient = gl_FrontLightProduct[0].ambient;
The diffuse and ambient contributions to the vertex color are the same in the Phong and modified Phong models. The modified Phong model uses the following specular term:

vec4 specular = f*pow(max(dot(N, H), 0.0), gl_FrontMaterial.shininess)
    *gl_FrontLightProduct[0].specular;
The factor f is 0 if dot(L, N) is negative and 1 otherwise, so that there will be no specular contribution if the light source is below the surface. Although we have all the information we need for the Phong and modified Phong models, we have to alter the shininess coefficient from the application if we want the two lighting models to provide similar images. One possibility is to have the application provide a correction factor, as in the following code:

uniform float shininessCorrection; /* assumed name; set by the application */

vec4 specular = f*pow(max(dot(N, H), 0.0),
    shininessCorrection*gl_FrontMaterial.shininess)
    *gl_FrontLightProduct[0].specular;
See Exercise 9.9 for how we might compute such a factor. We can now add up the contributions as the color for the front face:

gl_FrontColor = ambient + diffuse + specular;
We have been using built-in uniform variables to obtain most of the required values from the OpenGL state. In other situations, we might pass in values through function parameters or non-built-in uniform variables. If we are using a fragment program, often we will use varying variables to pass the results of the vertex program onward. We will see more examples of this type when we consider texture mapping. Putting the pieces together, the complete vertex shader is as follows:

/* vertex shader for the modified Phong lighting model */

void main()
{
    gl_Position = gl_ModelViewProjectionMatrix*gl_Vertex;

    vec4 eyePosition = gl_ModelViewMatrix*gl_Vertex;
    vec4 eyeLightPos = gl_LightSource[0].position;

    vec3 N = normalize(gl_NormalMatrix*gl_Normal);
    vec3 L = normalize(eyeLightPos.xyz - eyePosition.xyz);
    vec3 E = -normalize(eyePosition.xyz);
    vec3 H = normalize(L + E);

    float f;
    if (dot(L, N) >= 0.0) f = 1.0;
    else f = 0.0;

    vec4 diffuse = max(dot(L, N), 0.0)*gl_FrontLightProduct[0].diffuse;
    vec4 ambient = gl_FrontLightProduct[0].ambient;
    vec4 specular = f*pow(max(dot(N, H), 0.0),
        gl_FrontMaterial.shininess)*gl_FrontLightProduct[0].specular;

    gl_FrontColor = ambient + diffuse + specular;
}
Vertex shaders make it possible to incorporate more realistic lighting models in real time. In Section 9.11, we consider effects such as refraction and chromatic dispersion.

9.8.2 Nonphotorealistic Shading

We can also use vertex shaders to create nonphotorealistic effects. Two interesting examples are the use of only a few colors and emphasizing the edges in objects. Both of these effects are techniques that we might want to use to obtain a cartoonlike effect in an image.
Suppose that we define two colors in our shader:

const vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
const vec4 yellow = vec4(1.0, 1.0, 0.0, 1.0);

We could then switch between the colors based, for example, on the magnitude of the diffuse color. Using the light and normal vectors, we could assign colors as follows:

if (dot(L, N) > 0.5) gl_FrontColor = yellow;
else gl_FrontColor = red;
Although we could have used two colors in simpler ways, by using the diffuse color to determine a threshold, the color of the object changes with its shape and the position of the light source.
We can also try to draw the silhouette edge of an object. One way to identify such edges is to look at sign changes in dot(E, N). This value should be positive for any vertex facing the viewer and negative for a vertex pointing away from the viewer. Thus, we can test for small values of this dot product and assign a color such as black to the vertex:

const vec4 black = vec4(0.0, 0.0, 0.0, 1.0);
float t = 0.1; /* or some other small value */

if (abs(dot(E, N)) < t) gl_FrontColor = black;

Color Plate 11 shows the Utah teapot shaded with three colors, using black to show the silhouette edge.
9.9 FRAGMENT SHADERS

Whereas vertex shaders are part of the geometric processing at the front end of the graphics pipeline, fragment shaders work on individual fragments, each of which can contribute directly to the color of a pixel. Consequently, fragment shaders open up myriad possibilities for producing an image on a pixel-by-pixel basis.
One of the most interesting aspects of fragment shaders is that they enable us to work with texture maps in new ways. In particular, we will see how to use cube maps to create effects such as refraction in environment maps and displacement maps. Fragment shaders allow many more possibilities than do vertex shaders, and some of these possibilities can increase the efficiency of our programs.
Syntactically, fragment and vertex shaders in GLSL are almost identical. We have the same data types, most of the same qualifiers, and the same functions. However, fragment shaders execute in a fundamentally different manner. Let's review what the fragment processor does.
The rasterizer generates fragments for all primitives that have not been clipped out. Each fragment corresponds to a pixel in the frame buffer and has attributes including its color and position in the frame buffer. Other attributes are determined from vertex attributes by the rasterizer interpolating values at the vertices. Thus, even with a pass-through fragment shader such as

void main(void)
{
    gl_FragColor = gl_Color;
}

the color of each fragment is interpolated from the vertex colors, whether these colors were set by the application through the OpenGL state, computed in the fixed-function pipeline, or computed with a vertex shader. Likewise, the texture coordinates of each fragment are interpolated from the texture coordinates of the vertices, regardless of how they were assigned. In addition, any varying variable output by the vertex shader will be interpolated automatically across each primitive.
Because we have access to the interpolated attributes of each fragment within a primitive in a fragment shader, we can do lighting calculations on a fragment-by-fragment basis rather than on a vertex-by-vertex basis. Consider polygonal shading. In the simplest, constant shading, we used a single color for the entire polygon. This color could either be set in the application or calculated in the pipeline using the modified Phong model at the first vertex. This option is specified in OpenGL by setting the shading model to GL_FLAT. The second method that we discussed is smooth shading, the default for the fixed-function pipeline. Here colors are computed for each vertex as part of vertex processing and then interpolated across the polygon by the fragment processor. This method works the same with the modified Phong vertex shader that we developed in Section 9.8.1: the fragment processor does not know how the vertex colors were produced and simply interpolates vertex colors across the polygon.
The third method, Phong shading, is based on interpolating vertex positions and normals, rather than interpolating vertex colors, across the polygon and then applying the lighting model at each fragment using the interpolated normals. Until the advent of programmable fragment processors, per-fragment shading was only possible as an off-line process and thus incurred a considerable time penalty compared to rendering with the fixed-function pipeline.

9.10 PER-VERTEX VERSUS PER-FRAGMENT LIGHTING

With a fragment shader, we can do the lighting computation on a per-fragment basis. Given the speed of recent GPUs, even though the lighting calculations are done for each fragment, we can still process complex geometric models interactively.
Let's now write a fragment shader to do per-fragment, modified Phong shading with a single light source (GL_LIGHT0). Because we need to transfer data from the vertex shader to the fragment shader, we need to write both shaders. We will use a slightly different approach from Section 9.8.1, one that makes use of varying variables.
In the vertex program, we compute the position and normal in eye coordinates and pass them to the fragment program. In this example, we assume that the vertex normal that the vertex program starts with is available from the OpenGL state. Note that whether the normal changes for each vertex or remains constant for the entire primitive is not relevant to the shader. Here is a basic vertex program that computes the same vectors as did the modified Phong per-vertex shader but does not compute any of the light terms, leaving the lighting computation to the fragment shader.
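The listing itself did not survive in this copy. A minimal sketch consistent with the fragment shader below, which expects varying vectors N, L, and E in eye coordinates, might look as follows (the variable names and the positional light are assumptions):

// Vertex shader sketch for per-fragment Phong shading.
// Passes eye-coordinate vectors on to the fragment shader.

varying vec3 N;  // normal
varying vec3 L;  // vector toward the light
varying vec3 E;  // vector toward the eye

void main()
{
    vec4 eyePosition = gl_ModelViewMatrix * gl_Vertex;
    vec4 eyeLightPos = gl_LightSource[0].position;  // assumes a positional light

    N = gl_NormalMatrix * gl_Normal;
    L = eyeLightPos.xyz - eyePosition.xyz;
    E = -eyePosition.xyz;  // eye is at the origin in eye coordinates

    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}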
Note that we need not compute the halfway vector in the vertex shader; it can be computed in the fragment shader from the interpolated vectors.
We can access all the light and material properties that we need in the fragment shader from the OpenGL state. Because the normal vector, light vector, and eye vector are varying variables, they are interpolated across the primitive automatically. Consequently, the fragment shader looks very similar to the modified Phong vertex shader from Section 9.8.1.
// Fragment shader for per-pixel Phong shading.

varying vec3 N;
varying vec3 L;
varying vec3 E;

void main()
{
    vec3 Normal = normalize(N);
    vec3 Light  = normalize(L);
    vec3 Eye    = normalize(E);
    vec3 Half   = normalize(Eye + Light);

    float f  = 1.0;
    float Kd = max(dot(Normal, Light), 0.0);
    float Ks = pow(max(dot(Half, Normal), 0.0),
                   gl_FrontMaterial.shininess);

    vec4 diffuse = Kd * gl_FrontLightProduct[0].diffuse;
    if (dot(Normal, Light) < 0.0) f = 0.0;
    vec4 specular = f * Ks * gl_FrontLightProduct[0].specular;
    vec4 ambient  = gl_FrontLightProduct[0].ambient;

    gl_FragColor = ambient + diffuse + specular;
}
Figures 9.6 and 9.7 and Color Plate 26 show the difference between per-vertex and per-fragment shading on the teapot. Each was generated using the same material and light properties. Note the greater detail on the specular highlight with per-fragment shading.
FIGURE 9.6 Per-vertex Phong shading.
FIGURE 9.7 Per-fragment Phong shading.
9.11 SAMPLERS
The vertex shader typically computes or passes along the texture coordinates, which are interpolated by the rasterizer to produce texture coordinates for each fragment. Textures are not applied until fragment processing. Consequently, it is not surprising that many of the most interesting uses of programmable pipelines involve texture sampling in fragment shaders.
Probably the greatest difficulty in dealing with textures is the large number of options that are available, including the wrapping mode, filtering, mipmapping, the texture matrix, and packing parameters. In addition, because we can create multiple texture objects, there can be multiple textures available to the shader. If the implementation supports multiple texture units, the shader should be able to sample any of them.
A sampler variable provides access to a particular texture object, including all its parameters. There are sampler types for the types of textures supported by OpenGL. In particular, there are samplers for one-dimensional (sampler1D), two-dimensional (sampler2D), three-dimensional (sampler3D), and cube-map (samplerCube) textures.
Generally, the vertex shader need only worry about the texture coordinates, leaving it to the fragment shader to apply the texture using a sampler. Samplers are passed to the shader from the application using the function glUniform1i to identify the texture unit. Suppose that we set up a texture object in the application as we did in Chapter 8 as the default texture, which corresponds to texture unit 0. After we compile and link the shaders for program object myProgObj, the texture object is made known to the fragment shader through a uniform variable that we can set up in the application program as
GLint texMapLocation;
texMapLocation = glGetUniformLocation(myProgObj, "texMap");
glUniform1i(texMapLocation, 0);  /* texture unit 0 */
Samplers are declared as uniform variables in either vertex or fragment shaders. Once a sampler is defined, we can use it to provide a texture value using one of the texture functions texture1D, texture2D, texture3D, or textureCube. Texture coordinates can be supplied by the application to the vertex program through the built-in vertex attribute gl_MultiTexCoord0, which can then be interpolated by the rasterizer to provide texture coordinates in the vector gl_TexCoord[0]. Alternately, texture coordinates can be provided as vertex attributes to the shaders or computed directly in the shaders. Here is a minimal vertex program:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
It computes the required vertex location in clip coordinates and provides texture coordinates from texture unit 0.
A fragment shader that simply forms a color totally from the two-dimensional texture map is as follows:
uniform sampler2D texMap;

void main()
{
    gl_FragColor = texture2D(texMap, gl_TexCoord[0].st);
}
If we pass in our own texture coordinates to the vertex shader through a vertex attribute, we can use a varying variable to pass them on to the fragment shader. The vertex shader might have lines of code such as the following:
attribute vec2 vertexTexST;
varying vec2 texcoord;
The fragment shader would then use a matching varying texcoord to access the sampler.
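Putting these pieces together, a minimal sketch of such a pair of shaders (the attribute and varying names here are only illustrative) might be:

// Vertex shader: pass application-supplied texture coordinates through
attribute vec2 vertexTexST;
varying vec2 texcoord;

void main()
{
    texcoord = vertexTexST;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment shader: sample the texture with the interpolated coordinates
uniform sampler2D texMap;
varying vec2 texcoord;

void main()
{
    gl_FragColor = texture2D(texMap, texcoord);
}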
A simple but less trivial example might combine texture mapping with per-fragment lighting. The vertex shader passes along the normal and light positions in addition to providing the texture coordinates; the fragment shader then modulates the diffuse term by the texture color:
varying vec3 N;
varying vec3 L;
uniform sampler2D texMap;

void main()
{
    vec4 eyeLightPos = gl_LightSource[0].position;  // available if needed
    vec3 Normal = normalize(N);
    vec3 Light  = normalize(L);
    float Kd = max(dot(Normal, Light), 0.0);
    vec4 diffuse  = Kd * gl_FrontLightProduct[0].diffuse;
    vec4 texColor = texture2D(texMap, gl_TexCoord[0].st);
    gl_FragColor  = diffuse * texColor;
}
The only major difference between this example and the fixed-function pipeline is that the lighting is now computed for each fragment. In the following two sections, we explore some applications of fragment shaders that we cannot even approximate with the fixed-function pipeline.
9.12 CUBE MAPS
In Chapter 8, we saw how to define a cube-map texture from six two-dimensional images. In the application, we can set up a cube texture object as before and set up a uniform variable, which sets the texture unit to the default unit (or to whatever unit is used by the application) as follows:
FIGURE 9.8 Reflection cube map.
GLint texMapLocation = glGetUniformLocation(myProgObj, "myCube");
GLint texunit = 0;
glUniform1i(texMapLocation, texunit);
We obtain a texture value in the fragment shader from the sampler using the interpolated three-dimensional texture coordinates from the rasterizer as follows:

uniform samplerCube myCube;
varying vec3 texcoord;

vec4 texColor = textureCube(myCube, texcoord);
The texture coordinates can come from the application through a varying variable from the vertex shader, from the OpenGL state, or be determined totally in the fragment shader. We now look at some interesting ways of choosing these coordinates.

9.12.1 Reflection Maps
The required computations for a reflection or environment map are shown in Figure 9.8. We assume that the environment has already been mapped to the cube. The difference between a reflection map and a simple cube texture map is that we use the reflection vector to access the texture for a reflection map rather than the view vector. We can compute the reflection vector at each vertex in our vertex program and then let the fragment program interpolate these values over the primitive.
Our vertex shader is a simplified version of the vertex shader for per-fragment lighting:
vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
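Only the line above survives here; the rest of the shader was lost in extraction. A plausible reconstruction, assuming the varying reflection vector is named R, is:

varying vec3 R;  // reflection vector, interpolated across the primitive

void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    vec3 N = normalize(gl_NormalMatrix * gl_Normal);

    R = reflect(normalize(eyePos.xyz), N);  // built-in GLSL reflect()
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}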
It computes the reflection vector in eye coordinates as a varying variable. If we want the color to be totally determined by the texture, the fragment shader is simply as follows:
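The shader body was lost here as well; by analogy with the refraction shader later in this section, it presumably resembled:

varying vec3 R;
uniform samplerCube texMap;

void main()
{
    gl_FragColor = textureCube(texMap, R);
}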
We can create more complex lighting by having the color determined in part by the specular, diffuse, and ambient lighting as we did for the modified Phong lighting. However, we must be careful about which frame we want to use in our shaders.
The difference between this example and previous ones is that the environment map usually is computed in world coordinates. Object positions and normals are specified in object coordinates and are brought into the world frame by modeling transformations in the application. We usually never see the object-coordinate representation of objects because the model-view transformation converts object coordinates directly to eye coordinates. In many applications there are no modeling transformations, so that world and object coordinates are the same. However, we want to write our program in a manner that allows for modeling transformations when we do reflection mapping. One way to accomplish this task is to compute the modeling matrix in the application and pass it to the fragment program as a uniform variable. Also note that we need the inverse transpose of the modeling matrix to transform the normal. However, if we pass in the inverse matrix as another uniform variable, we can postmultiply the normal to obtain the desired result. Color Plate 12 shows the use of a reflection map to determine the colors on the teapot. The teapot is set inside a cube, each of whose sides is one of the colors red, green, blue, cyan, magenta, or yellow.
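A sketch of this approach (the uniform names are assumptions, and for simplicity the eye is taken to be at the world origin):

// Vertex shader sketch: reflection vector in world coordinates
uniform mat4 ModelMatrix;     // modeling matrix, set by the application
uniform mat4 ModelMatrixInv;  // its inverse, also set by the application

varying vec3 R;

void main()
{
    vec4 worldPos = ModelMatrix * gl_Vertex;

    // Postmultiplying the row vector by the inverse is equivalent to
    // premultiplying by the inverse transpose, which normals require
    vec3 worldN = normalize((vec4(gl_Normal, 0.0) * ModelMatrixInv).xyz);

    R = reflect(normalize(worldPos.xyz), worldN);
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}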
9.12.2 Refraction

Our discussion of environment and reflection maps thus far has assumed that the reflective object is opaque. We can also assume that the object is translucent, with part of the light we see on the surface determined by shading, part by reflecting the environment onto the surface, and part by transmission of light through the object so that we can see a contribution from the environment behind the object.
Before we can write the required programs, we need a model for how light passes through a translucent material. Consider a surface that transmits all the light that strikes it, as shown in Figure 9.9. If the speed of light differs in the two materials, the light is bent at the surface; how much it bends is governed by Snell's law,

$$\eta_i \sin\theta_i = \eta_t \sin\theta_t,$$

where $\theta_i$ is the angle of incidence, $\theta_t$ is the angle of the transmitted light, and $\eta_i$ and $\eta_t$ are the indices of refraction of the two materials.

FIGURE 9.9 Perfect light transmission.
We can find the direction of the transmitted light $\mathbf{t}$ as follows. We know $\cos\theta_i$ from $\mathbf{n}$ and $\mathbf{l}$; if they have been normalized, it is simply their dot product. Given $\cos\theta_i$, we can find $\sin\theta_i$, and then $\sin\theta_t$ from Snell's law. Finally, we can compute $\cos\theta_t$.
Letting $\eta = \eta_t/\eta_i$, we have

$$\cos\theta_t = \left(1 - \frac{1}{\eta^2}\left(1 - \cos^2\theta_i\right)\right)^{1/2}.$$
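As a quick check (a worked example, not from the original text): for glass with $\eta \approx 1.5$ and $\theta_i = 45°$, we have $\cos^2\theta_i = 0.5$, so $\cos\theta_t = (1 - 0.5/2.25)^{1/2} \approx 0.88$ and $\theta_t \approx 28°$; the transmitted ray bends toward the normal, as expected when light enters a slower material.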
Just as in the computation of the reflected light, the three vectors must be coplanar; thus,

$$\mathbf{t} = -\frac{1}{\eta}\,\mathbf{l} - \left(\cos\theta_t - \frac{1}{\eta}\cos\theta_i\right)\mathbf{n}.$$

The first two negative signs in this equation are a consequence of $\mathbf{t}$ pointing away from the back side of the surface. The angle for which the square-root term in the expression for $\cos\theta_t$ becomes zero ($\sin\theta_i = \eta$) is known as the critical angle. If light strikes the surface at this angle, the transmitted light is in a direction along the surface. If $\theta_i$ is increased further, all light is reflected, and none is transmitted.
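For example (illustrative numbers): for light leaving glass into air, $\eta = 1.0/1.5 \approx 0.67$, so the critical angle is $\sin^{-1}(0.67) \approx 42°$; beyond it we get total internal reflection.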
Suppose we have an object that is composed of a translucent material in which the speed of light is less than the speed of light in air. Thus, a light ray passing directly through it is bent and intersects points inside the object at a place determined by Snell's law. If we use a cube map for the environment, the program looks similar to the previous one except we use the refracted vector instead of the reflected vector. We can compute this vector from the normal and position using the built-in function refract. Thus, the vertex shader has the code
varying vec3 RF;  // refraction vector

RF = refract(E, N, eta);  // eta: ratio of indices of refraction, e.g., a uniform
where E is computed as before. In the fragment shader, the code is as follows:
varying vec3 RF;
uniform samplerCube texMap;

vec4 texColor = textureCube(texMap, RF);
A more general expression for what happens at a transmitting surface corresponds to Figure 9.10. Some light is transmitted, some is reflected, and the rest is absorbed. Of the transmitted light, some is scattered in a manner similar to specular reflections, except that here the light is concentrated in the direction of $\mathbf{t}$. Thus, a transmission model might include a term proportional to $\mathbf{t} \cdot \mathbf{v}$ for viewers on the transmitted side of the surface. We can also use the analogy of a half-angle to simplify calculation of this term (see Exercise 9.14).
We can do a few more things with a simple fragment shader and our reflected and transmitted vectors to create more realistic effects. When light strikes a real translucent material, some of the light is absorbed, some is reflected, and some is refracted through the material. Unlike the simple lighting models that we have used so far, the fraction of light refracted and the amount reflected depend on the angle between the light and the normal. In addition, the relationship also depends on the wavelength and polarization of the light. The physical relationship is given by the Fresnel equation. Not only is discussion of the Fresnel equation beyond the scope of this book (see Suggested Readings), but its complexity is counter to the simplicity we want for efficiency of our programs. Consequently, we can use an approximation that can be computed in a fragment or vertex program.
Suppose that $C_r$ and $C_t$ are the colors we get from an environment map by following the reflected and refracted angles, respectively. The color that we will use is an affine combination of these two colors determined by a coefficient $r$. Thus, the fragment color $C_f$ is given by

$$C_f = r\,C_r + (1 - r)\,C_t.$$
The value of $r$ should approximate the value we would obtain for the Fresnel term if we were able to compute it easily. If we let this term depend on the cosine of the angle between the light and the normal, we can use $\mathbf{l} \cdot \mathbf{n}$, the same easily computed term that we use for diffuse reflections; it ranges from 0 to 1 as the angle ranges from $-90$ degrees to 90 degrees. If we raise this term to a power, as we do with specular reflections, we can control how rapidly the reflected contribution falls off away from normal incidence.
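A fragment-shader sketch of this blend (the variable names are illustrative, and the exponent is an arbitrary tuning choice, not a value from the text):

varying vec3 R;   // reflection vector
varying vec3 RF;  // refraction vector
varying vec3 N;   // normal
varying vec3 L;   // light vector
uniform samplerCube texMap;

void main()
{
    vec4 Cr = textureCube(texMap, R);
    vec4 Ct = textureCube(texMap, RF);

    // Crude Fresnel stand-in: (l . n)^4, clamped to [0, 1]
    float r = pow(max(dot(normalize(L), normalize(N)), 0.0), 4.0);

    gl_FragColor = r * Cr + (1.0 - r) * Ct;
}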
We can also try to simulate the effect of the coefficient of refraction's dependence on wavelength. Shorter wavelengths are refracted more than longer wavelengths when they enter a slower material. This effect accounts for the rainbows created by prisms. Because we use an RGB model and process only three components rather than all wavelengths, we can only approximate this effect, known as chromatic dispersion.
If we use a different value of the refraction index for each color component (see the exercises at the end of the chapter), we can compute a separate refraction vector for each component and obtain an approximation to chromatic dispersion.
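A sketch of the idea (hypothetical names; the per-channel index ratios would come from the application):

// Vertex shader fragment: one refraction vector per color channel
uniform float etaR, etaG, etaB;
varying vec3 RR, RG, RB;

RR = refract(E, N, etaR);
RG = refract(E, N, etaG);
RB = refract(E, N, etaB);

// Fragment shader: take each channel from its own cube-map lookup
gl_FragColor.r = textureCube(texMap, RR).r;
gl_FragColor.g = textureCube(texMap, RG).g;
gl_FragColor.b = textureCube(texMap, RB).b;
gl_FragColor.a = 1.0;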
9.12.3 Normalization Maps

Many of our shaders must normalize one or more vectors for each fragment. To normalize a vector, a typical implementation does three multiplications to square the components, two additions, and a square root. If this operation is in a fragment program, we are requiring the GPU to do quite a bit of computation because it must carry out these arithmetic operations on every fragment. A cube map gives us a way to replace this computation with a table lookup.
Consider all points along a line from the origin. The vector between any two of these points and, in particular, the vector determined by any point on the line and the origin all normalize to the same vector. Suppose that we look at where such a vector intersects the cube map. We can put the components of the normalized vector at these texture coordinates. Of course, we must precompute these values to form the cube map, but as we shall see, this computation is fairly easy.
Consider the cube map shown in Figure 9.11. The sides are determined by the six planes $x = \pm 1$, $y = \pm 1$, and $z = \pm 1$. If we place a given vector $\mathbf{v} = (a, b, c)$ with one end at the origin, the side of the cube that it intersects is determined by the magnitude and sign of the largest component. Thus, the vector (1.0, 2.0, 3.0) intersects the side determined by the plane $z = 1$, whereas the vector (1.0, -3.0, 2.0) intersects the side determined by the plane $y = -1$. In the first case, the point of intersection is (0.33, 0.67, 1.0); in the second, it is (0.33, -1.0, 0.67).
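As a small sketch of this computation (an illustrative helper, not from the text):

// Scale v so that its largest-magnitude component becomes +/-1,
// giving the point where the ray along v exits the unit cube
vec3 cubeFacePoint(vec3 v)
{
    vec3  a = abs(v);
    float m = max(a.x, max(a.y, a.z));
    return v / m;  // e.g., (1.0, 2.0, 3.0) -> (0.33, 0.67, 1.0)
}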
The components of a normalized vector have a range of $-1.0$ to 1.0, whereas texture components are colors and range over 0.0 to 1.0. We can store normalized vectors as colors if we first compress the ranges of the components. The simple transformation function

$$\mathbf{c} = \frac{1}{2}\left(\mathbf{v} + (1, 1, 1)\right)$$

maps each component from $[-1.0, 1.0]$ to $[0.0, 1.0]$.
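In a shader, the compression is undone when the map is sampled; a sketch (the sampler name is an assumption):

uniform samplerCube normMap;  // precomputed normalization cube map
varying vec3 V;               // unnormalized vector to be looked up

// Expand the stored color back to a unit vector: v = 2c - 1
vec3 N = 2.0 * textureCube(normMap, V).rgb - vec3(1.0);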