
BASIC M3G CONCEPTS

Performance tip: On hardware-accelerated devices, it is a good idea to clear the color buffer and depth buffer completely and in one go, even if the whole screen will be redrawn. Various hardware optimizations can only be enabled when starting from a clean slate, and some devices can clear the screen by just flipping a bit. Even with software implementations, clearing everything is typically faster than clearing a slightly smaller viewport.

Note that the viewport need not lie inside the render target; it can even be larger than the render target. The view that you render is automatically scaled to fit the viewport. For example, if you have a viewport of 1024 by 1024 pixels on a QVGA screen, you will only see about 7% of the rendered image (the nonvisible parts are not really rendered, of course, so there is no performance penalty); see the code example in Section 13.1.4. The maximum size allowed for the viewport does not depend on the type of rendering target, but only on the implementation. All implementations are required to support viewports up to 256 by 256 pixels, but in practice the upper bound is 1024 by 1024 or higher. The exact limit can be queried from Graphics3D.getProperties.

Pitfall: Contrary to OpenGL ES, there is no separate function for setting the scissor rectangle (see Section 3.5). Instead, the scissor rectangle is implicitly defined as the intersection of the viewport and the Graphics clipping rectangle.

A concept closely related to the viewport is the depth range, set by setDepthRange(float near, float far), where near and far are in the range [0, 1]. Similar to the viewport, the depth range also defines a mapping from normalized device coordinates to screen coordinates, only this time the screen coordinates are depth values that lie in the [0, 1] range. Section 2.6 gives insight on the depth range and how it can be used to make better use of the depth buffer resolution or to speed up your application.
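For instance, one common use of the depth range is to keep an overlay in front of the scene regardless of actual depth values. A minimal sketch, assuming a hudMesh node and a myWorld scene that already exist, with g3d being a Graphics3D that is already bound to a target; the 0.1 split point is arbitrary:

// The scene uses the far 90% of the depth range.
g3d.setDepthRange(0.1f, 1.0f);
g3d.render(myWorld);

// The HUD occupies the nearest slice, so it always resolves in front of the scene.
g3d.setDepthRange(0.0f, 0.1f);
g3d.render(hudMesh, null);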

13.1.3 RENDERING

As we move toward more ambitious goals than merely clearing the screen, the next step is to render some 3D content. For simplicity, let us just assume that we have magically come into possession of a complete 3D scene that is all set up for rendering. In the case of M3G, this means that we have a World object, which is the root node of the scene graph and includes by reference all the cameras, lights, and polygon meshes that we need. To render a full-screen view of the world, all we need to do within the bindTarget-releaseTarget block is this:

g3d.render(myWorld);

This takes care of clearing the depth buffer and color buffer, setting up the camera and lights, and finally rendering everything that there is to render. This is called retained-mode rendering, because all the information necessary for rendering is retained by the World and its descendants in the scene graph. In the immediate mode, you would first clear the screen, then set up the camera and lights, and finally draw your meshes one by one in a loop.
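Putting this together, a minimal retained-mode paint method could look like the sketch below; myWorld is assumed to be a fully set up World, the hint flag is just an example, and the try-finally ensures the target is released even if rendering throws:

protected void paint(Graphics graphics) {
    Graphics3D g3d = Graphics3D.getInstance();
    g3d.bindTarget(graphics, true, Graphics3D.ANTIALIAS);
    try {
        g3d.render(myWorld);   // clears the buffers and draws the whole scene
    } finally {
        g3d.releaseTarget();   // always release, even if render() throws
    }
}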

The retained mode and immediate mode are designed so that you can easily mix and match them in the same application. Although the retained mode has less overhead on the Java side, and is generally recommended, it may sometimes be more convenient to handle overlays, particle effects, or the player character, for instance, separately from the rest of the scene. To ease the transition from retained mode to immediate mode at the end of the frame, the camera and lights of the World are automatically set up as the current camera and lights in Graphics3D, overwriting the previous settings.
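As a rough sketch of such mixing, within one bound frame you might draw the retained-mode World first and then an immediate-mode overlay on top; since render(World) leaves the World's camera and lights current in Graphics3D, the overlay reuses them (overlayMesh and overlayTransform are hypothetical):

g3d.render(myWorld);                         // retained mode
g3d.render(overlayMesh, overlayTransform);   // immediate mode, same camera and lights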

The projection matrix (see Chapter 2) is defined in a Camera object, which in turn is attached to Graphics3D using setCamera(Camera camera, Transform transform). The latter parameter specifies the transformation from camera space, also known as eye space or view space, into world space. The Camera class is described in detail in Section 14.3.1. For now, it suffices to say that it allows you to set an arbitrary 4 × 4 matrix, but also provides convenient methods for defining the typical perspective and parallel projections. The following example defines a perspective projection with a 60° vertical field of view and the same aspect ratio as the Canvas that we are rendering to:

Camera camera = new Camera();

float width = myCanvas.getWidth();

float height = myCanvas.getHeight();

camera.setPerspective(60.0f, width/height, 10.0f, 500.0f);

g3d.setCamera(camera, null);

Note that we call setCamera with the transform parameter set to null. As a general principle in M3G, a null transformation is treated as identity, which in this case implies that the camera is sitting at the world-space origin, looking toward the negative Z axis with Y pointing up.

Light sources are set up similarly to the camera, using addLight(Light light, Transform transform). The transform parameter again specifies the transformation from local coordinates to world space. Lighting is discussed in Section 14.3.2, but for the sake of illustration, let us set up a single directional white light that shines in the direction at which our camera is pointing:

Light light = new Light();

g3d.addLight(light, null);

Now that the camera and lights are all set up, we can proceed with rendering. There are three different render methods in immediate mode, one having a higher level of abstraction than the other two. The high-level method render(Node node, Transform transform) draws an individual object or scene graph branch. You can go as far as rendering an entire World with it, as long as the camera and lights are properly set up in Graphics3D. For instance, viewing myWorld with the camera that we just placed at the world space origin is as simple as this:

g3d.render(myWorld, null);

Of course, the typical way of using this method is to draw individual meshes rather than entire scenes, but that decision is up to you. The low-level render methods, on the other hand, are restricted to drawing a single triangle mesh. The mesh is defined by a vertex buffer, an index buffer, and an appearance. As with the camera and lights, a transformation from model space to world space must be given as the final parameter:

g3d.render(myVertices, myIndices, myAppearance, myTransform);

The other render variant is similar, but takes in an integer scope mask as an additional parameter. The scope mask is bitwise-ANDed with the corresponding mask of the current camera, and the mesh is rendered if and only if the result is non-zero. The same applies for lights. The scope mask is discussed further in Chapter 15, as it is more useful in retained mode than in immediate mode.
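To sketch the idea (the mask values below are made up), the scope of the current camera is set with Node.setScope, and the per-call scope then selects what gets drawn:

int INDOOR = 1 << 0;    // hypothetical layer bits
int OUTDOOR = 1 << 1;

camera.setScope(INDOOR);   // the current camera only sees the indoor layer

// Drawn: INDOOR & INDOOR != 0
g3d.render(myVertices, myIndices, myAppearance, myTransform, INDOOR);

// Skipped: OUTDOOR & INDOOR == 0
g3d.render(myVertices, myIndices, myAppearance, myTransform, OUTDOOR);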

13.1.4 STATIC PROPERTIES

We mentioned in Section 12.2 that there is a static getter for retrieving implementation-specific information, such as whether antialiasing is supported. This special getter is defined in Graphics3D, and is called getProperties. It returns a java.util.Hashtable that contains Integer and Boolean values keyed by Strings. The static properties, along with some helpful notes, are listed in Table 13.1. To illustrate the use of static properties, let us create a viewport that is as large as the implementation can support, and use it to zoom in on a high-resolution rendering of myWorld:

Hashtable properties = Graphics3D.getProperties();
int maxViewport = ((Integer)properties.get("maxViewportDimension")).intValue();

g3d.bindTarget(graphics, true, hints);
int topLeftX = -(maxViewport - graphics.getClipWidth())/2;
int topLeftY = -(maxViewport - graphics.getClipHeight())/2;
g3d.setViewport(topLeftX, topLeftY, maxViewport, maxViewport);
g3d.render(myWorld);
g3d.releaseTarget();

We first query for maxViewportDimension from the Hashtable. The value is returned as a java.lang.Object, which we need to cast into an Integer and then convert into a primitive int before we can use it in computations. Later on, in the paint method, we set the viewport to its maximum size, so that our Canvas lies at its center. Assuming a QVGA screen and a 1024-pixel-square viewport, we would have a zoom factor of about 14. The zoomed-in view can be easily panned by adjusting the top-left X and Y.


Table 13.1: The system properties contained in the Hashtable returned by Graphics3D.getProperties. There may be other properties as well, but they are not standardized.

Property                        Type             Typical value
supportAntialiasing             Boolean          true on some hardware-accelerated devices
supportTrueColor                Boolean          false on all devices that we know of
supportDithering                Boolean          false on all devices that we know of
supportMipmapping               Boolean          false on surprisingly many devices
supportPerspectiveCorrection    Boolean          true on all devices, but quality varies
supportLocalCameraLighting      Boolean          false on almost all devices
maxViewportWidth                Integer ≥ 256    typically 256 or 1024; M3G 1.1 only
maxViewportHeight               Integer ≥ 256    typically 256 or 1024; M3G 1.1 only
maxViewportDimension            Integer ≥ 256    typically 256 or 1024
maxTextureDimension             Integer ≥ 256    typically 256 or 1024
maxSpriteCropDimension          Integer ≥ 256    typically 256 or 1024
maxTransformsPerVertex          Integer ≥ 2      typically 2, 3, or 4

13.2 Image2D

There are a few cases where M3G deals with 2D image data. Texturing, sprites, and background images need images as sources, and rendering to any of them is also supported. Image2D, as the name suggests, stores a 2D array of image data. It is similar in many respects to the javax.microedition.lcdui.Image class, but the important difference is that Image2D objects are fully managed by M3G. This lets M3G implementations achieve better performance, as there is no need to synchronize with the 2D drawing functions in MIDP.

Similarly to the MIDP Image, an Image2D object can be either mutable or immutable. To create an immutable image, you must supply the image data in the constructor:

Image2D(int format, int width, int height, byte[] image)

The format parameter specifies the type of the image data: it can be one of ALPHA, LUMINANCE, LUMINANCE_ALPHA, RGB, and RGBA. The width and height parameters determine the size of the image, and the image array contains data for a total of width × height pixels. The layout of each pixel is determined by format: each image component takes one byte and the components are interleaved. For example, the data for a LUMINANCE_ALPHA image would consist of two bytes giving the luminance and alpha of the first pixel, followed by two bytes giving the luminance and alpha of the second pixel, and so on. The pixels are ordered top-down and left to right, i.e., the first width pixels provide the topmost row of the image starting from the left. Upon calling the constructor, the data is copied into internal memory allocated by M3G, allowing you to discard or reuse the source array. Note that while the image is input upside-down compared to OpenGL ES (Section 9.2.2), the t texture coordinate is similarly reversed, so the net effect is that you can use the same texture images and coordinates on both OpenGL ES and M3G.
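As a small illustration of the interleaved layout, the following sketch builds a hypothetical 2 × 2 LUMINANCE_ALPHA checkerboard (the byte values are made up):

// Two bytes per pixel (luminance, alpha), rows top-down, pixels left to right.
byte[] pixels = {
    (byte)0xFF, (byte)0xFF,   (byte)0x00, (byte)0xFF,   // top row
    (byte)0x00, (byte)0xFF,   (byte)0xFF, (byte)0xFF    // bottom row
};
Image2D checker = new Image2D(Image2D.LUMINANCE_ALPHA, 2, 2, pixels);
// The data has been copied, so 'pixels' can now be reused or discarded.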

Unfortunately, there is no support in M3G for packed image formats, such as RGB565. This is partially because OpenGL ES does not give any guarantees regarding the internal color depth of a texture image, but also because the image formats were intentionally kept few and simple. In retrospect, being able to input the image data in a packed format would have been useful in its own right, regardless of what happens when the image is sent to OpenGL ES.

As a form of image compression, you can also create a paletted image:

Image2D(int format, int width, int height, byte[] image, byte[] palette)

Here, the only difference is that the image array contains one-byte indices into the palette array, which stores up to 256 color values. The layout of the color values is again as indicated by format. There is no guarantee that the implementation will internally maintain the image in the paletted format, though.
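A hedged sketch of the palettized constructor, here a 4 × 4 two-color RGB image (the palette entries and indices are chosen arbitrarily):

// Palette: up to 256 entries, each laid out according to the format (RGB here).
byte[] palette = {
    (byte)0xFF, (byte)0x00, (byte)0x00,   // index 0: red
    (byte)0x00, (byte)0x00, (byte)0xFF    // index 1: blue
};
// Image data: one palette index per pixel, 4 x 4 = 16 bytes.
byte[] indices = {
    0, 1, 0, 1,
    1, 0, 1, 0,
    0, 1, 0, 1,
    1, 0, 1, 0
};
Image2D stripes = new Image2D(Image2D.RGB, 4, 4, indices, palette);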

Pitfall: The amount of memory that an Image2D consumes is hard to predict. Depending on the device, non-palettized RGB and RGBA images may be stored at 16 or 32 bits per pixel, while palettized images are sometimes expanded from 8 bpp to 16 or 32 bpp. Some implementations always generate the mipmap pyramid, consuming 33% extra memory. Some devices need to store two copies of each image: one in the GL driver, the other on the M3G side. Finally, all or part of this memory may be allocated from somewhere other than the Java heap. This means that you can run out of memory even if the Java heap has plenty of free space! You can try to detect this case by using smaller images. As for remedies, specific texture formats may be more space-efficient than others, but you should refer to the developer pages of the device manufacturers for details.

A third constructor lets you copy the data from a MIDP Image:

Image2D(int format, java.lang.Object image)

Note that the destination format is explicitly specified. The source format is either RGB or RGBA, for mutable and immutable MIDP images, respectively. Upon copying the data, M3G automatically converts it from the source format into the destination format.

As a general rule, the conversion happens by copying the respective components of the source image and setting any missing components to 1.0 (or 0xFF for 8-bit colors). A couple of special cases deserve to be mentioned. When converting an RGB or RGBA source image into LUMINANCE or LUMINANCE_ALPHA, the luminance channel is obtained by converting the RGB values into grayscale. A similar conversion is done when converting an RGB image into ALPHA. This lets you read an alpha mask from a regular PNG or JPEG image through Image.createImage, or create one with the 2D drawing functions of MIDP, for example.
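As an illustration of that last point, an alpha mask could be extracted from an ordinary PNG loaded through MIDP (the resource name is hypothetical and IOException handling is omitted):

// The RGB values of the source are converted to grayscale and become the alpha channel.
javax.microedition.lcdui.Image midpImage =
    javax.microedition.lcdui.Image.createImage("/mask.png");
Image2D alphaMask = new Image2D(Image2D.ALPHA, midpImage);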

Often the most convenient way to create an Image2D is to load it from a file. You can do that with the Loader, as discussed in Section 13.5. All implementations are required to support the M3G and PNG formats, but JPEG is often supported as well.[4] Loading an image file yields a new Image2D whose format matches that of the image stored in the file. JPEG can do both color and grayscale, yielding the internal formats RGB and LUMINANCE, respectively, but has no concept of transparency or alpha. PNG supports all of the Image2D formats except for ALPHA. It has a palettized format, too, but unfortunately the on-device PNG loaders tend to expand such data into raw RGB or RGBA before it ever reaches the Image2D. M3G files obviously support all the available formats, including those with a palette.
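A minimal loading sketch (the resource name is hypothetical and IOException handling is omitted); for a file containing a plain PNG or JPEG, the returned array holds a single Image2D:

Object3D[] roots = Loader.load("/textures/wood.png");
Image2D woodImage = (Image2D)roots[0];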

Pitfall: The various forms of transparency supported by PNG are hard to get right. For example, the M3G loader in some early Nokia models (e.g., the 6630) does not support any form of PNG transparency, whereas some later models (e.g., the 6680) support the alpha channel but not color-keying. Possible workarounds include using Image.createImage or switching from PNG files to M3G files. These issues have been resolved in M3G 1.1; see Section 12.3.

Finally, you can create a mutable Image2D:

Image2D(int format, int width, int height)

The image is initialized to opaque white by default. It can be subsequently modified by using set(int x, int y, int width, int height, byte[] pixels). This method copies a rectangle of width by height pixels into the image, starting at the pixel at (x, y) and proceeding to the right and down. The origin for the Image2D is in its top left corner.

A mutable Image2D can also be bound to Graphics3D as a rendering target. The image can still be used like an immutable Image2D. This lets you, for example, render dynamic reflections or create feedback effects.
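Combining the two, a rough sketch of rendering into a mutable image that can then back a texture (the size and scene are placeholders):

Image2D target = new Image2D(Image2D.RGB, 256, 256);

Graphics3D g3d = Graphics3D.getInstance();
g3d.bindTarget(target);              // the Image2D is now the render target
try {
    g3d.render(myWorld);             // e.g., the scene as seen from a mirror camera
} finally {
    g3d.releaseTarget();
}
// 'target' can now be used as the image of a Texture2D on the reflective surface.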

[4] JPEG support is in fact required by the Mobile Service Architecture (MSA) specification, also known as JSR 248. MSA is an umbrella JSR that aims to unify the Java ME platform.


Table 13.2: The available Image2D formats and their capabilities. The shaded cells show the capabilities of most devices, as these cases are not dictated by the specification. Mipmapping is entirely optional, and palettized images may be silently expanded into the corresponding raw formats, typically RGB. Most devices support mipmapping and palettized images otherwise, but will not generate mipmaps for palettized textures, nor load a palettized PNG without expanding it. There are some devices that can do better, though. Finally, note that JPEG does not support the alpha channel, and that PNG does not support images with only an alpha channel.

[Table columns: Load from M3G, Load from PNG, Load from JPEG, Copy from Image, Render Target, Background]

Performance tip: Beware that updating an Image2D, whether done by rendering or through the setter, can be a very costly operation. For example, the internal format and layout of the image may not be the same as in the set method, requiring heavy conversion and pixel reordering. If your frame rate or memory usage on a particular device is not what you would expect, try using immutable images only.

While Image2D is a general-purpose class as such, there are various restrictions on what kind of images can be used for a specific purpose. For example, textures must have power-of-two dimensions, and render targets can only be in RGB or RGBA formats. Table 13.2 summarizes the capabilities and restrictions of the different Image2D formats.

13.3 MATRICES AND TRANSFORMATIONS

One of the most frequently asked questions about M3G is the difference between Transform and Transformable. The short answer is that Transform is a simple container for a 4 × 4 matrix with no inherent meaning, essentially a float array wrapped into an object, whereas Transformable stores such a matrix in a componentized, animatable form, and for a particular purpose: constructing the modelview matrix or the texture matrix. The rest of this section provides the long answer.

13.3.1 Transform

Transform stores an arbitrary 4 × 4 matrix and defines a set of basic utility functions for operating on such matrices. You can initialize a Transform to identity, copy it in from another Transform, or copy it from a float[] in row-major order (note that this is different from OpenGL ES, which uses the unintuitive column-major ordering). setIdentity resets a Transform back to its default state, facilitating object reuse.

Creating a matrix

To give an example, the following code fragment creates a matrix with a uniform scaling component [2 2 2] and a translation component [3 4 5]. In other words, a vector multiplied by this matrix is first scaled by a factor of two, then moved by three units along the x axis, four units along y, and five units along z:

Transform myTransform = new Transform();

myTransform.set(new float[] { 2f, 0f, 0f, 3f,
                              0f, 2f, 0f, 4f,
                              0f, 0f, 2f, 5f,
                              0f, 0f, 0f, 1f });

Matrix operations

Once you have created a Transform, you can start applying some basic arithmetic functions to it: you can transpose the matrix (M = Mᵀ), invert it (M = M⁻¹), or multiply it with another matrix (M = M A). Note that each of these operations overwrites the pre-existing value of the Transform with the result (M). The matrix multiplication functions come in several flavors:

void postMultiply(Transform transform)
void postScale(float sx, float sy, float sz)
void postTranslate(float tx, float ty, float tz)
void postRotate(float angle, float ax, float ay, float az)
void postRotateQuat(float qx, float qy, float qz, float qw)

The post prefix indicates that the matrix is multiplied from the right by the given matrix (e.g., M = M A); pre would mean multiplying from the left (e.g., M = A M), but there are no such methods in Transform. Going through the list of methods above, the first three probably need no deeper explanation. The rotation method comes in two varieties: postRotateQuat uses a quaternion to represent the rotation (see Section 2.3.1), whereas postRotate uses the axis-angle format: looking along the positive rotation axis [ax ay az], the rotation is angle degrees clockwise.

To make things more concrete, let us use postScale and postTranslate to construct the same matrix that we typed in manually in the previous example:

Transform myTransform = new Transform();

myTransform.postTranslate(3f, 4f, 5f);

myTransform.postScale(2f, 2f, 2f);


Transforming vertices

As in OpenGL, you should think that the matrix operations apply to vertices in the reverse order that they are written. If you apply the transformation T S to a vertex v, the vertex is first scaled and then translated: T (S v). Let us write out the matrices and confirm that the above code fragment does indeed yield the correct result:

M = I T S =

    | 1 0 0 0 |   | 1 0 0 3 |   | 2 0 0 0 |   | 2 0 0 3 |
    | 0 1 0 0 |   | 0 1 0 4 |   | 0 2 0 0 |   | 0 2 0 4 |
    | 0 0 1 0 |   | 0 0 1 5 |   | 0 0 2 0 | = | 0 0 2 5 |
    | 0 0 0 1 |   | 0 0 0 1 |   | 0 0 0 1 |   | 0 0 0 1 |

One of the most obvious things to do with a transformation matrix is to transform an array of vectors with it. The Transform class defines two convenience methods for this purpose. The first, transform(float[] vectors), multiplies each 4-element vector in the vectors array by this matrix and overwrites the original vectors with the results (v = M v, where v is a column vector). The other transform variant is a bit more complicated:

void transform(VertexArray in, float[] out, boolean w)

Here, we take in 2D or 3D vectors in a VertexArray, set the fourth component to zero or one depending on the w parameter, and write the transformed 4-element vectors to the out array. The input array remains unmodified.

The transform methods are provided mostly for convenience, as they play no role in rendering or any other function of the API. Nonetheless, if you have a large number of vectors that you need to multiply with a matrix for whatever purpose, these built-in methods are likely to perform better than doing the same thing in Java code. The VertexArray variant also serves a more peculiar purpose: it is the only way to read back vertices from a VertexArray on many devices, as the necessary VertexArray.get methods were only added in M3G 1.1.
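A short sketch of both variants (the arrays and sizes are made up for illustration):

// Transform two 4-component vectors in place: v' = M v for each of them.
float[] vectors = { 1f, 0f, 0f, 1f,      // a point at (1, 0, 0)
                    0f, 1f, 0f, 0f };    // a direction along +Y
myTransform.transform(vectors);

// Transform (and effectively read back) the contents of a VertexArray;
// w == true treats the vectors as points (w = 1), false as directions (w = 0).
VertexArray positions = new VertexArray(4, 3, 2);   // 4 vertices, 3 components, 16-bit
float[] out = new float[4 * 4];                     // 4 output components per vertex
myTransform.transform(positions, out, true);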

Other use cases

Now that you know how to set up Transform objects and use them to transform vertices, let us look at what else you can use them for. First of all, in Graphics3D you need them to specify the local-to-world transformations of the immediate-mode camera, lights, and meshes. In both immediate mode and retained mode, you need a Transform to set up an oblique or otherwise special projection in Camera, or any kind of projection for texture coordinates in Texture2D. Finally, you can (but do not have to) use a Transform in the local-to-parent transformation of a Node. Each of these cases will come up later on in this book.


13.3.2 Transformable

Transformable is an abstract base class for the scene graph objects Node and Texture2D. Conceptually, it is a 4 × 4 matrix representing a node transformation or a texture coordinate transformation. The matrix is made up of four components that can be manipulated separately: translation T, orientation R, scale S, and a generic 4 × 4 matrix M. During rendering, and otherwise when necessary, M3G multiplies the components together to yield the composite transformation:

C = T R S M

A homogeneous vector p = [x y z w]^T, representing a vertex coordinate or texture coordinate, is then transformed into p' = [x' y' z' w']^T by:

p' = C p

The components are kept separate so that they can be controlled and animated independently of each other and independently of their previous values. For example, it makes no difference whether you first adjust S and then T, or vice versa; the only thing that matters is what values the components have when C needs to be recomputed. Contrast this with the corresponding operations in Transform, which are in fact matrix multiplications and thus very much order-dependent.

Note that for node transformations, the bottom row of the M component is restricted to [0 0 0 1]; in other words, projections are not allowed in the scene graph. Texture matrices do not have this limitation, so projective texture mapping is fully supported (see Section 3.4.3).

Methods

The following four methods in Transformable allow you to set the transformation components:

void setTranslation(float tx, float ty, float tz)
void setOrientation(float angle, float ax, float ay, float az)
void setScale(float sx, float sy, float sz)
void setTransform(Transform transform)

The complementary methods, translate, preRotate, postRotate, and scale, each modify the current value of the respective component by applying an additional translation, rotation, or scaling. The user-provided rotation can be applied to the left (preRotate) or to the right (postRotate) of the current orientation component.
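A brief sketch of the component setters on a scene graph node (here a hypothetical, already created Mesh); the call order does not matter, since the composite is always rebuilt as C = T R S M:

myMesh.setTranslation(0f, 0f, -5f);        // T: move five units into the screen
myMesh.setOrientation(45f, 0f, 1f, 0f);    // R: 45 degrees about the Y axis
myMesh.setScale(2f, 2f, 2f);               // S: uniform scale by two

// Later, an extra rotation can be composed onto the orientation component:
myMesh.postRotate(10f, 0f, 1f, 0f);        // now 55 degrees about Y in total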
