
3D Graphics with OpenGL ES and M3G – P35



LOW-LEVEL MODELING IN M3G

is not very useful for most real-world meshes, but can save some space in certain special cases and in procedurally generated meshes.

The other alternative, explicit strips, is what you normally want to use. Here, the difference is that instead of a starting vertex, you give a list of indices:

TriangleStripArray(int[] indices, int[] stripLengths)

The indices array is then split into as many separate triangle strips as specified in the stripLengths array. For a simple example, let us construct an IndexBuffer containing two strips, one with five and the other with three vertices; this translates into three and one triangles, respectively:

static final int myStripIndices[] = { 0, 3, 1, 4, 5, 7, 6, 8 };

static final int myStripLengths[] = { 5, 3 };

myTriangles = new TriangleStripArray(myStripIndices, myStripLengths);

Performance tip: While triangle strips in general are very efficient, there is a considerable setup cost associated with rendering each strip. Very small strips are therefore not efficient to render at all, and it is important to try to keep your strips as long as possible. It may even be beneficial to join several strips together using degenerate triangles, duplicating the end index of the first and the beginning index of the second strip so that zero-area triangles are created to join the two strips. Such degenerate triangles are detected and quickly discarded by most M3G implementations.

As usual, your mileage may vary. With the large spectrum of M3G-enabled devices out there, some software-only implementations may in fact be able to render short strips fairly efficiently, whereas some other implementations may optimize the strips themselves regardless of how you specify them. Submitting already optimized strips may therefore yield little or no benefit on some devices.
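To make the degenerate-triangle trick concrete, here is a small sketch in plain Java, independent of the M3G API; the class and method names are hypothetical. It joins two strips by duplicating the last index of the first and the first index of the second:

```java
public class StripJoiner {
    // Joins two triangle strips into a single index list by inserting
    // two duplicated indices between them; the duplicates produce
    // zero-area (degenerate) triangles that implementations discard.
    // Note: if the first strip has an odd index count, an extra
    // duplicate may be needed to preserve winding; omitted here.
    public static int[] joinStrips(int[] first, int[] second) {
        int[] joined = new int[first.length + second.length + 2];
        System.arraycopy(first, 0, joined, 0, first.length);
        joined[first.length] = first[first.length - 1]; // repeat end of first strip
        joined[first.length + 1] = second[0];           // repeat start of second strip
        System.arraycopy(second, 0, joined, first.length + 2, second.length);
        return joined;
    }
}
```

Joining the five-vertex and three-vertex strips from the earlier example this way would yield the single strip { 0, 3, 1, 4, 5, 5, 7, 7, 6, 8 }.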

Note that unlike vertex arrays and buffers, index buffers cannot be modified once created; this was seen as an unnecessary feature and left out in the name of implementation complexity, but perhaps overlooked the fact that someone might still want to recycle an IndexBuffer rather than create a new one. Nevertheless, index buffers need not have a one-to-one correspondence to vertices, and you can use as many index buffers per vertex buffer as you want to. The index buffer size does not have to match the size of the vertex buffer, and you can reference any vertex from multiple index buffers, or multiple times from the same index buffer. The only restriction is that you may not index outside of the vertex buffer. As a concrete use case, you could implement a level-of-detail scheme by generating multiple index buffers for your vertex buffer, with fewer vertices used for each lower detail level, and quickly select one of them each time you render based on the distance to the camera or some other metric.
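The selection step of such a level-of-detail scheme can be as simple as a threshold walk. A plain-Java sketch with hypothetical names (not part of the M3G API):

```java
public class LodSelector {
    // Returns the detail level for a given camera distance:
    // level 0 (full detail) up close, higher levels farther away.
    // thresholds[i] is the distance beyond which level i + 1 is used.
    public static int selectLevel(float distance, float[] thresholds) {
        int level = 0;
        while (level < thresholds.length && distance > thresholds[level]) {
            level++;
        }
        return level;
    }
}
```

At render time you would then pass the index buffer for the chosen level to Graphics3D.render, keeping the vertex buffer the same.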


14.1.4 EXAMPLE

Now we know how to build some geometry in M3G and draw it. Let us illustrate this with a more comprehensive example where we create some colored triangles and render them. We assume that you have set up your Graphics3D and Camera as described in Chapter 13; make sure your Camera sits at its default position and orientation at the origin. You will also see that we construct an Appearance object, which we have not described yet. In this example, it merely tells M3G to use the default shading parameters; we will cover Appearance in detail in the next section.

First, let us define our vertex and triangle data. You could put these as static members in one of your Java classes:

static final byte positions[] = { 0,100,0, 100,0,0, 0,-100,0, -100,0,0,

0,50,0, 45,20,0, -45,20,0 };

static final byte colors[] = { 0,0,255,255, 0,255,0,255, 255,0,0,255,

255,255,255,255, 255,0,255,255, 255,255,0,255, 0,255,255,255 };

static final int indices[] = { 0,3,1, 1,3,2, 4,6,5 };

static final int strips[] = { 3, 3, 3 };

Note that since M3G only supports triangle strips, individual triangles must be declared as strips of three vertices each. This is not a problem when exporting from a content creation tool that creates the strips automatically, but it is a small nuisance when constructing small test applications by hand. In this case, we could also easily combine the first two triangles into a single strip like this:

static final int indices[] = { 0,3,1,2, 4,6,5 };

static final int strips[] = { 4, 3 };
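The mechanical "one strip per triangle" declaration is easy to generate when merging is not worth the trouble. A sketch with a hypothetical helper (plain Java, not part of M3G):

```java
public class TriangleStrips {
    // For a flat triangle list (three indices per triangle), produces the
    // stripLengths array that declares each triangle as its own strip,
    // matching the form TriangleStripArray expects. No merging attempted.
    public static int[] stripLengths(int[] triangleIndices) {
        int[] lengths = new int[triangleIndices.length / 3];
        java.util.Arrays.fill(lengths, 3);
        return lengths;
    }
}
```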

Once we have the vertex and index data in place, we can create the objects representing our polygon mesh. You would typically place this code in the initialization phase of your application:

// Create the vertex arrays
VertexArray myPositionArray = new VertexArray(7, 3, 1);

VertexArray myColorArray = new VertexArray(7, 4, 1);

// Set values for 7 vertices starting at vertex 0
myPositionArray.set(0, 7, positions);

myColorArray.set(0, 7, colors);

// Create the vertex buffer; for the vertex positions,
// we set the scale to 1.0 and the bias to zero
VertexBuffer myVertices = new VertexBuffer();

myVertices.setPositions(myPositionArray, 1.0f, null);

myVertices.setColors(myColorArray);

// Create the indices for five triangles as explicit triangle strips
IndexBuffer myTriangles = new TriangleStripArray(indices, strips);


// Use the default shading parameters

Appearance myAppearance = new Appearance();

// Set up a modeling transformation

Transform myModelTransform = new Transform();

myModelTransform.postTranslate(0.0f, 0.0f, -150.0f);

With all of that done, we can proceed to rendering the mesh. You will normally do this in the paint method for your MIDP Canvas:

void paint(Graphics g) {

Graphics3D g3d = Graphics3D.getInstance();

try {

g3d.bindTarget(g);

g3d.clear(null);

g3d.render(myVertices, myTriangles, myAppearance, myModelTransform);

}

finally {

g3d.releaseTarget();

}

}

Assuming everything went well, you should now see your geometry in the middle of the screen. You can play around with your vertex and index setup, modeling transformation, and camera settings to see what happens.

14.2 ADDING COLOR AND LIGHT: Appearance

You can now create a plain polygon mesh and draw it. To make your meshes look more interesting, we will next take a closer look at the Appearance class we met in the previous section. This is one of the most powerful classes in M3G, providing a wide range of control over the rendering and compositing process of each mesh. An Appearance object is needed for everything you render in M3G, so let us begin with the simplest possible example: use the default rendering parameters as we already did above.

Appearance myAppearance = new Appearance();

In fact, the Appearance class in itself does very little. There is only one piece of data native to an Appearance object, the rendering layer, and we will not need that until we start using the M3G scene graph. Instead, the functionality of Appearance is split into five component classes. Each of the component classes wraps a logical section of the low-level rendering pipeline, so that together they cover most of the rendering state of OpenGL ES. You can then collect into an Appearance object only the state you want to control explicitly and leave the rest to null, saving you from the hassle of doing a lot of settings, and letting you share state data between different meshes. We will see how this works in practice, as we follow the rendering pipeline through the individual component classes.


14.2.1 PolygonMode

The PolygonMode class affects how your input geometry is interpreted and treated at a triangle level. It allows you to set your winding, culling, and shading modes, as well as control some lighting parameters and perspective correction.

By default, M3G assumes that your input triangles wind counterclockwise and that only the front side of each triangle should be drawn and lit. Triangles are shaded using Gouraud shading, and local camera lighting and perspective correction are not explicitly required. You can override any of these settings by creating a PolygonMode object, specifying the settings you want to change, and including the object into your Appearance object. For example, to render both sides of your mesh with full lighting and perspective correction, use PolygonMode as follows:

PolygonMode myPolygonMode = new PolygonMode();

myAppearance.setPolygonMode(myPolygonMode);

myPolygonMode.setCulling(PolygonMode.CULL_NONE);

myPolygonMode.setTwoSidedLightingEnable(true);

myPolygonMode.setPerspectiveCorrectionEnable(true);

myPolygonMode.setLocalCameraLightingEnable(true);

For the setCulling function, you can set any of CULL_BACK, CULL_FRONT, and CULL_NONE. The setTwoSidedLightingEnable function controls whether the vertex normals are flipped when computing lighting for the back side of triangles (should they not be culled), and setWinding controls which side of your triangles is the front side. For setWinding, you have the options WINDING_CCW and WINDING_CW. Additionally, there is setShading, where the default of SHADE_SMOOTH produces Gouraud-shaded and SHADE_FLAT flat-shaded triangles. You may wish to refer to Section 9.1 for the equivalent OpenGL ES functions.

Finally, it is worth pointing out that the perspective correction and local camera lighting flags are only hints to the implementation. Very low-end implementations may not support perspective correction at all, and local camera lighting is unsupported in most implementations that we know of. If supported, both come at a cost, especially on software renderers, so you should pay attention to only using them where necessary. Do use them where necessary, though: when rendering slanted, textured surfaces made of large triangles, the possible performance gain of disabling perspective correction is not usually worth the resulting visual artifacts.

Pitfall: There is quite a lot of variety in the speed and quality of perspective correction among different M3G implementations. What works for one implementation may not work for others. For quality metrics you can refer to benchmark applications such as JBenchmark.


14.2.2 Material

The Material class is where you specify the lighting parameters for a mesh in M3G. Putting a non-null Material into your Appearance implicitly enables lighting for all meshes rendered using that Appearance.

M3G uses the traditional OpenGL lighting model as explained in Section 3.2. If you are familiar with OpenGL lighting (see Section 8.3.1), you will find the same parameters in M3G.

The setColor(int target, int argb) function lets you set each of the material parameters, with target set to AMBIENT, DIFFUSE, SPECULAR, or EMISSIVE, respectively. The alpha component of the color is only used for DIFFUSE. You can also make the ambient and diffuse components track the vertex color with setVertexColorTrackingEnable(true). Additionally, you can specify the specular exponent with setShininess. If you want something resembling red plastic, you could set it up like this:

redPlastic = new Material();

redPlastic.setColor(Material.AMBIENT, 0xFF0000); // red

redPlastic.setColor(Material.DIFFUSE, 0xFFFF0000); // opaque red

redPlastic.setColor(Material.SPECULAR, 0xFFFFFF); // white

redPlastic.setColor(Material.EMISSIVE, 0x000000); // black

redPlastic.setShininess(2.0f);

A shinier material, something like gold, could look like this:

golden = new Material();

golden.setColor(Material.AMBIENT, 0xFFDD44); // yellowish orange

golden.setColor(Material.DIFFUSE, 0xFFFFDD44); // opaque yellowish orange

golden.setColor(Material.SPECULAR, 0xFFDD44); // yellowish orange

golden.setColor(Material.EMISSIVE, 0x000000); // black

golden.setShininess(100.0f);

You can also bitwise-OR the color specifiers for setColor, for example setColor(Material.AMBIENT | Material.DIFFUSE, 0xFFFFFFFF), to set multiple components to the same value.
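To see why the plastic and gold materials above use such different setShininess values, recall that shininess is the specular exponent of the classic OpenGL lighting equation. A plain-Java sketch of just that term (cosAngle stands for the cosine between the half vector and the surface normal):

```java
public class SpecularTerm {
    // Specular intensity: max(cos, 0) raised to the shininess exponent.
    // Higher exponents concentrate the highlight into a smaller spot.
    public static double intensity(double cosAngle, double shininess) {
        return Math.pow(Math.max(cosAngle, 0.0), shininess);
    }
}
```

At a cosine of 0.9, the plastic's exponent of 2 still leaves 81% intensity, while gold's exponent of 100 leaves essentially nothing; hence gold's much tighter, shinier highlight.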

Materials need light to interact with. If you try to use Material alone, only the emissive component will produce anything other than black results. Light is provided through light sources, which we will discuss later in Section 14.3.2, but for a quick start, you can just create a default light source and put it into your Graphics3D like this:

Light myLight = new Light();

g3d.addLight(myLight, null);

Since both the light and the camera have the same transformation (now null), that light will be shining from the origin in the same direction as your camera is looking, and you should get some light on the materials.


14.2.3 Texture2D

Texturing lets you add detail beyond vertex positions, colors, and normals to your surfaces; look at the low-polygon bikers in Figure 14.1 for an example. After lighting, your triangles are rasterized and converted into fragments or, roughly, individual pixels. Texturing then takes an Image2D and combines it with the interpolated post-lighting color of each fragment using one of a few predefined functions.

To enable texturing, add a Texture2D object into your Appearance. A valid texture image must be specified at all times, so the constructor takes a reference to an Image2D object. You can, however, change the image later with a call to setImage. Texture images must have power-of-two dimensions, and neither dimension may exceed the maximum texture dimension queryable with Graphics3D.getProperties. Assuming that we have such an image, called myTextureImage, we can proceed:

Texture2D myTexture = new Texture2D(myTextureImage);

myAppearance.setTexture(0, myTexture);

Note the 0 in the setTexture call: that is the index of the texturing unit. At least one unit is guaranteed to exist, but multi-texturing support is optional for M3G implementations. You can query the number of texturing units available in a particular implementation, again via Graphics3D.getProperties. If the implementation supports two texturing units, you will also have unit 1 at your disposal, and so forth. In this case each additional texturing unit further modifies the output of the previous unit.

Wrapping and filtering modes

When sampling from the texture image, M3G takes your input texture coordinates, interpolated for each fragment, and maps them to the image. The top left-hand corner of the image is the origin, (0, 0); the bottom right corner is (1, 1). By default, the texture coordinates wrap around so that if your coordinates go from -1.0 to +3.0, for example,

Figure 14.1: Texturing can add a lot of detail to low-polygon models, allowing large numbers of them on-screen without excessive geometry loads. (Images copyright © Digital Chocolate.)


the texture image will repeat four times. You can control this behavior with setWrapping(int wrapS, int wrapT), where wrapS and wrapT can be either WRAP_REPEAT or WRAP_CLAMP. The latter will, instead of repeating, clamp that coordinate to the center of the edge pixel of the image. These are equivalent to the texture wrapping modes in OpenGL ES (Section 9.2.4). If your texture has a pattern that is only designed to tile smoothly in the horizontal direction, for example, you may want to disable wrapping in the vertical direction with

myTexture.setWrapping(Texture2D.WRAP_REPEAT, Texture2D.WRAP_CLAMP);
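The arithmetic behind the two wrap modes is simple enough to state directly. A plain-Java sketch (hypothetical helper, not an M3G class):

```java
public class TexWrap {
    // WRAP_REPEAT: keep only the fractional part of the coordinate,
    // so 2.25 and -1.75 both sample at 0.25.
    public static float repeat(float t) {
        return t - (float) Math.floor(t);
    }

    // WRAP_CLAMP: pin the coordinate to [0, 1], so anything outside
    // that range samples from the image's edge pixels.
    public static float clamp(float t) {
        return Math.min(1.0f, Math.max(0.0f, t));
    }
}
```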

Once the sampling point inside the texture is determined, M3G can either pick the closest texel or perform some combination of mipmapping and bilinear filtering, similarly to OpenGL ES (Section 9.2.3). This is controlled with setFiltering(int levelFilter, int imageFilter). For imageFilter, you can choose FILTER_NEAREST or FILTER_LINEAR to use either point sampling or bilinear filtering within each mipmap image. For levelFilter, you can choose the same for nearest or linear filtering between mipmap levels, or FILTER_BASE_LEVEL to use just the base-level image. If you enable mipmapping, the other mipmap levels will be automatically generated from the base-level image. However, all filtering beyond point-sampling the base level is optional; you will encounter a lot of devices that do not even support mipmapping.

Performance tip: Always enable mipmapping. Not only does it make your graphics look better, it also allows the underlying renderer to save valuable memory bandwidth and spend less time drawing your better-looking graphics. In rare cases, you may want to opt for the small memory saving of not using mipmapping, but depending on the M3G implementation, this saving may not even be realized in practice.
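The memory saving mentioned above is modest in the first place: each mipmap level has a quarter of the texels of the previous one, so the whole chain adds only about a third on top of the base image. A quick plain-Java check:

```java
public class MipmapMemory {
    // Total texel count of a full mipmap chain for a square
    // power-of-two texture, halving each dimension down to 1x1.
    public static long chainTexels(int baseSize) {
        long total = 0;
        for (int size = baseSize; size >= 1; size /= 2) {
            total += (long) size * size;
        }
        return total;
    }
}
```

For a 256x256 base image this gives 87,381 texels versus 65,536 for the base level alone, i.e. roughly a 33% overhead, always strictly under 4/3 of the base.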

Unlike mipmapping, choosing between FILTER_NEAREST and FILTER_LINEAR is a valid trade-off between quality and performance, especially when using a software renderer.

Texture application

Once the texture samples are fetched, they are combined with the input fragments according to the texture blending function you choose. Blending was a somewhat unfortunate choice of name here, as it is easy to confuse with frame-buffer blending, but we shall have to live with that. The setBlending function lets you select one of FUNC_ADD, FUNC_BLEND, FUNC_DECAL, FUNC_MODULATE, and FUNC_REPLACE. These directly correspond to the texture functions described in Section 3.4.1; refer there for details on how each function works. The texture blend color used by FUNC_BLEND can be set via setBlendColor.
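Reduced to a single color channel in [0, 1], three of these functions come down to the following arithmetic. This is a simplified plain-Java sketch; the real functions also have per-mode rules for the alpha channel, as covered in Section 3.4.1:

```java
public class TextureFunc {
    // FUNC_REPLACE: the texel overrides the incoming fragment color.
    public static float replace(float fragment, float texel) {
        return texel;
    }

    // FUNC_MODULATE: the texel scales the incoming fragment color.
    public static float modulate(float fragment, float texel) {
        return fragment * texel;
    }

    // FUNC_ADD: the sum of the two, saturated at 1.0.
    public static float add(float fragment, float texel) {
        return Math.min(1.0f, fragment + texel);
    }
}
```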

As an example, a common case in texturing is combining a texture with per-vertex lighting; it makes no difference whether you have M3G compute the lighting dynamically or use an off-line algorithm to bake the lighting into per-vertex colors: the texture is applied


the same way. To do this, we only need to modulate (multiply) the interpolated fragment colors with a texture. Assuming we have, say, a repeating brick pattern in an Image2D called brickImage:

// Create a repeating texture image to multiply with the incoming color
Texture2D myTexture = new Texture2D(brickImage);
myTexture.setWrapping(Texture2D.WRAP_REPEAT, Texture2D.WRAP_REPEAT);
myTexture.setBlending(Texture2D.FUNC_MODULATE);

myTexture.setFiltering(Texture2D.FILTER_NEAREST,

Texture2D.FILTER_NEAREST);

// Set as the first texture to an Appearance object created earlier;
// the other texture slots are assumed to be null
myAppearance.setTexture(0, myTexture);

In fact, WRAP_REPEAT and FUNC_MODULATE are the default settings for a Texture2D, so the related two lines in the example above could be skipped. Depending on your target hardware, you may also want to experiment with different filtering modes to see which one is the best compromise between performance and image quality.

Multi-texturing

If you are targeting an M3G implementation capable of multi-texturing, you may want to bake your static lighting into a light map texture instead. This lets you get detailed lighting without excess vertices, which can be useful if the vertex transformations would otherwise become the performance bottleneck; for example, if rasterization is hardware-accelerated but transformations are done in software. If you have your light map in an Image2D called lightmapImage, you could then implement the above example using two textures only, without any per-vertex colors or lighting:

// Create the textures for the brick pattern and our light map.
// We omit the default wrapping settings for the brick image;
// light maps do not normally repeat, so we clamp that
Texture2D myTexture = new Texture2D(brickImage);

myTexture.setFiltering(Texture2D.FILTER_NEAREST,

Texture2D.FILTER_NEAREST);

Texture2D myLightmap = new Texture2D(lightmapImage);

myLightmap.setFiltering(Texture2D.FILTER_NEAREST,

Texture2D.FILTER_LINEAR);

myLightmap.setWrapping(Texture2D.WRAP_CLAMP, Texture2D.WRAP_CLAMP);

// Create the final fragment color by just multiplying the two textures
myAppearance.setTexture(0, myLightmap);

myLightmap.setBlending(Texture2D.FUNC_REPLACE);

myAppearance.setTexture(1, myTexture);

myTexture.setBlending(Texture2D.FUNC_MODULATE);

Note that you will also need to include texture coordinates in your VertexBuffer for each texturing unit you are using. With multi-texturing, however, you may be able to share the same coordinates among many of your textures.


Pitfall: As in OpenGL, textures in M3G are applied after lighting. If you want your lighting to modulate the texture, which is the common case when representing surface detail with textures, this only works well with diffuse reflection. You should then render a second, specular-only pass to get any kind of specular highlights on top of your texture, or use multi-texturing and add a specular map. Either of these will give you the effect you can see in the specular highlight in Figure 3.2; compare the images with and without a separate specular pass.

Texture transformations

Now that we have covered the basics, note that the Texture2D class is derived from Transformable. This means that you can apply the full transformation functionality to your texture coordinates prior to sampling the texture. The transformation constructed via the Transformable functions is applied to the texture coordinates in exactly the same way as the modelview matrix is to vertex coordinates.

Performance tip: The scale and bias parameters of VertexBuffer are all you should need for normal texturing. To avoid an unnecessary performance penalty, especially on software-only implementations, limit the use of the texture transformation to special effects that really need it.

Finally, note that you can share the Image2D object used as the texture image with as many Texture2D objects as you want. This lets you use different texture transformations, or even different wrapping and filtering modes, on the same image in different use cases. If the texture image is mutable, you can also render into it for dynamic effects.

14.2.4 Fog

The next component of Appearance to affect your rendering results is Fog. It is a fairly simple simulation of atmospheric effects that gets applied to your fragments after they have been textured. Let us add some Fog into our Appearance:

myFog = new Fog();

myAppearance.setFog(myFog);

This creates a default black fog that obscures everything more than one unit away from the camera, so it may not be very useful as such. To get something more like atmospheric perspective, let us set some parameters:

myFog.setMode(Fog.EXPONENTIAL);

myFog.setDensity(0.01f);

myFog.setColor(0x6688FF); // pale blue tint

We have a choice between two flavors in the setMode function: EXPONENTIAL and LINEAR fog. For the former, we just set the density of the fog using setDensity. The latter has a linear ramp from no fog to fully fogged, specified with setLinear(float near, float far). Finally, there is the fog color, set via setColor. Refer to Section 3.4.4 for the details of fog arithmetic.

Pitfall: Despite the name, setLinear does not make the fog LINEAR; you must set the fog mode and parameters separately:

myFog.setMode(Fog.LINEAR);

myFog.setLinear(0.0f, 10.0f);

Note that there is no EXP2 fog mode in M3G, although it is frequently used in OpenGL (see Section 3.4.4). This was, again, done to drop one code path from proprietary software implementations; today, it may seem like a rather arbitrary choice.
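The fog factors behind the two modes follow the standard OpenGL formulas (Section 3.4.4). A plain-Java sketch, where the resulting factor f blends the fragment with the fog color as f * fragmentColor + (1 - f) * fogColor:

```java
public class FogFactors {
    // EXPONENTIAL mode: f = e^(-density * distance).
    public static float exponential(float density, float distance) {
        return (float) Math.exp(-density * distance);
    }

    // LINEAR mode: ramp from 1 at 'near' down to 0 at 'far', clamped to [0, 1].
    public static float linear(float near, float far, float distance) {
        float f = (far - distance) / (far - near);
        return Math.min(1.0f, Math.max(0.0f, f));
    }
}
```

With the density of 0.01 set above, a fragment 100 units away keeps about 37% of its own color (e to the power -1), the rest coming from the pale blue fog.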

14.2.5 CompositingMode

After fog has been applied to your fragments, they are ready to hit the frame buffer. By default, anything you render is depth-tested and, should the depth test pass, replaces the previously existing frame buffer values. The CompositingMode class lets you control what is written to the frame buffer and how it blends with the existing pixels for compositing and multi-pass rendering effects.

myCompositingMode = new CompositingMode();

myAppearance.setCompositingMode(myCompositingMode);

Fragment tests

The first operation done on any fragment at the compositing stage is the alpha test. M3G simplifies this down to a single threshold alpha value that your fragment must have in order to pass. The threshold is set via setAlphaThreshold, and must have a value between zero and one. Any fragment with an alpha value less than the threshold gets rejected right away. The default value of 0.0 lets all pixels pass. A common use for the alpha channel is transparency, and you usually want to reject fragments with small alpha values so that the transparent regions do not mess up the depth buffer:

myCompositingMode.setAlphaThreshold(0.5f);

Note that this is equivalent to enabling the alpha test in OpenGL ES and calling glAlphaFunc(GL_GEQUAL, 0.5f). See Section 9.5.2 for more details on the OpenGL ES functionality.

Performance tip: The alpha test is the fastest way to discard individual fragments, as it does not require a comparison with the depth buffer. For example, your rendering speed may improve by using alpha testing to discard transparent areas already before the blending stage. In practice, many implementations detect these discarded fragments much earlier in the rendering pipeline, providing savings from other stages as well.
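The test itself is a single comparison per fragment, mirroring glAlphaFunc(GL_GEQUAL, threshold). A plain-Java sketch:

```java
public class AlphaTest {
    // A fragment survives when its alpha is at least the threshold;
    // everything below is discarded before blending and depth writes.
    public static boolean passes(float alpha, float threshold) {
        return alpha >= threshold;
    }
}
```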
