246 Character Animation with Direct3D
TWO-JOINT INVERSE KINEMATICS
Now I’ll show you how to attack the Two-Joint “Reach” IK problem. To make this problem easier to solve, you must take the information you know about people in general and put it to good use. For example, in games the elbow joint is treated like a hinge joint with only one degree of freedom (1-DoF), while the shoulder joint is treated like a ball joint (3-DoF).
The fact that you treat the elbow (or knee) joint as a hinge makes this a whole lot simpler. You know that the arm can be fully extended, completely bent, or something in between. In other words, you know that the angle between the upper and lower arm has to be between 0 and 180 degrees. This in turn makes it pretty easy for you to calculate the reach of an arm when you know the lengths of the upper and lower arm. Consider Figure 11.7, for example.
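Since the elbow is limited to 0–180 degrees, the reach test itself is just a triangle inequality: a target at distance d from the shoulder is reachable exactly when |A − B| ≤ d ≤ A + B, where A and B are the lengths of the upper and lower arm. As a minimal sketch (plain C++ with no D3DX; the function name is my own):

```cpp
#include <cmath>

// Can a two-bone limb with segment lengths a and b reach a target at
// distance d from the shoulder, given a hinge that bends 0-180 degrees?
bool InReach(float a, float b, float d)
{
    return d >= std::fabs(a - b) && d <= a + b;
}
```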
EXAMPLE 11.1
This is the first inverse kinematics example, featuring a simple Look-At example. The soldier will look at the mouse cursor just like in the earlier examples with the eyeballs, except that in this example the head bone is manipulated
to turn the whole head to face the cursor. As you can see in this example, the IK is applied on top of normal keyframed animation.
The black line in Figure 11.7 defines all the points that this arm can reach, assuming that the elbow joint can bend from 0 to 180 degrees. Let’s say that you’re trying to make your character reach a certain point with his arm. Your first task is to figure out the angle of the elbow joint given the distance to the target. Using the Law of Cosines, this becomes a pretty straightforward task, since you know the length of all sides of the triangle. The formula for the Law of Cosines is:
C² = A² + B² − 2AB cos(x)
Trivia: You might recognize part of the Law of Cosines as the Pythagorean Theorem. Actually, the Pythagorean Theorem is a special case of the Law of Cosines where the angle x is 90 degrees. Since the cosine of 90 degrees is zero, the term −2AB cos(x) can be removed.
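A quick numeric sanity check (plain C++; the function name is mine): with x = 90 degrees the cosine term drops out, and the classic 3-4-5 right triangle falls out of the Law of Cosines directly.

```cpp
#include <cmath>

// Law of Cosines: returns C given sides A, B and the angle x (in radians)
// between them, from C^2 = A^2 + B^2 - 2*A*B*cos(x).
float LawOfCosines(float a, float b, float x)
{
    return std::sqrt(a * a + b * b - 2.0f * a * b * std::cos(x));
}
```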
Chapter 11 Inverse Kinematics 247
FIGURE 11.7
Within an arm’s reach?
Figure 11.8 shows the Law of Cosines applied to the elbow problem.
In Figure 11.8, C is known because it is the length from the shoulder to the IK target. A and B are also known because they are simply the lengths of the upper and lower arm. So to solve for the angle x, you just need to rearrange the Law of Cosines as follows:
x = acos((A² + B² − C²) / (2AB))
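In code this solve is a one-liner plus safety checks. A sketch (the function name is my own; note the clamp on the cosine, which guards acos against small floating-point errors when the target sits exactly at the edge of the reach):

```cpp
#include <cmath>

// Solve the elbow angle x = acos((A^2 + B^2 - C^2) / (2AB)).
// a, b = upper/lower arm lengths, c = shoulder-to-target distance.
float ElbowAngle(float a, float b, float c)
{
    if (c >= a + b)
        return 3.14159265f;  // out of reach: fully extend (180 degrees)

    float cosAngle = (a * a + b * b - c * c) / (2.0f * a * b);
    if (cosAngle >  1.0f) cosAngle =  1.0f;  // clamp against rounding drift
    if (cosAngle < -1.0f) cosAngle = -1.0f;
    return std::acos(cosAngle);
}
```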
First you have to bend the elbow to the angle that gives you the right “length.” Then you just rotate the shoulder (a ball joint, remember?) using the same simple Look-At IK approach covered in the previous example. The ApplyArmIK() function has been added to the InverseKinematics class to do all this:
FIGURE 11.8
The Law of Cosines.

void InverseKinematics::ApplyArmIK(D3DXVECTOR3 &hingeAxis, D3DXVECTOR3 &target)
{
    // Set up some vectors and positions
    D3DXVECTOR3 startPosition = D3DXVECTOR3(
        m_pShoulderBone->CombinedTransformationMatrix._41,
        m_pShoulderBone->CombinedTransformationMatrix._42,
        m_pShoulderBone->CombinedTransformationMatrix._43);

    D3DXVECTOR3 jointPosition = D3DXVECTOR3(
        m_pElbowBone->CombinedTransformationMatrix._41,
        m_pElbowBone->CombinedTransformationMatrix._42,
        m_pElbowBone->CombinedTransformationMatrix._43);

    D3DXVECTOR3 endPosition = D3DXVECTOR3(
        m_pHandBone->CombinedTransformationMatrix._41,
        m_pHandBone->CombinedTransformationMatrix._42,
        m_pHandBone->CombinedTransformationMatrix._43);

    D3DXVECTOR3 startToTarget = target - startPosition;
    D3DXVECTOR3 startToJoint = jointPosition - startPosition;
    D3DXVECTOR3 jointToEnd = endPosition - jointPosition;

    float distStartToTarget = D3DXVec3Length(&startToTarget);
    float distStartToJoint = D3DXVec3Length(&startToJoint);
    float distJointToEnd = D3DXVec3Length(&jointToEnd);

    // Calculate joint bone rotation
    // Calculate current angle and wanted angle
    float wantedJointAngle = 0.0f;

    if(distStartToTarget >= distStartToJoint + distJointToEnd)
    {
        // Target out of reach
        wantedJointAngle = D3DXToRadian(180.0f);
    }
    else
    {
        // Calculate wanted joint angle (using the Law of Cosines)
        float cosAngle = (distStartToJoint * distStartToJoint +
                          distJointToEnd * distJointToEnd -
                          distStartToTarget * distStartToTarget) /
                         (2.0f * distStartToJoint * distJointToEnd);
        wantedJointAngle = acosf(cosAngle);
    }

    // Normalize vectors
    D3DXVECTOR3 nmlStartToJoint = startToJoint;
    D3DXVECTOR3 nmlJointToEnd = jointToEnd;
    D3DXVec3Normalize(&nmlStartToJoint, &nmlStartToJoint);
    D3DXVec3Normalize(&nmlJointToEnd, &nmlJointToEnd);

    // Calculate the current joint angle
    float currentJointAngle =
        acosf(D3DXVec3Dot(&(-nmlStartToJoint), &nmlJointToEnd));

    // Calculate rotation matrix
    float diffJointAngle = wantedJointAngle - currentJointAngle;
    D3DXMATRIX rotation;
    D3DXMatrixRotationAxis(&rotation, &hingeAxis, diffJointAngle);

    // Apply elbow transformation
    m_pElbowBone->TransformationMatrix = rotation *
        m_pElbowBone->TransformationMatrix;

    // Now the elbow "bending" has been done. Next you just
    // need to rotate the shoulder using the Look-At IK algorithm.

    // Calculate new end position
    // Calculate this in world position and transform
    // it later to the start bone's local space
    D3DXMATRIX tempMatrix;
    tempMatrix = m_pElbowBone->CombinedTransformationMatrix;
    tempMatrix._41 = 0.0f;
    tempMatrix._42 = 0.0f;
    tempMatrix._43 = 0.0f;
    tempMatrix._44 = 1.0f;

    D3DXVECTOR3 worldHingeAxis;
    D3DXVECTOR3 newJointToEnd;
    D3DXVec3TransformCoord(&worldHingeAxis, &hingeAxis, &tempMatrix);
    D3DXMatrixRotationAxis(&rotation, &worldHingeAxis, diffJointAngle);
    D3DXVec3TransformCoord(&newJointToEnd, &jointToEnd, &rotation);

    D3DXVECTOR3 newEndPosition;
    D3DXVec3Add(&newEndPosition, &newJointToEnd, &jointPosition);

    // Calculate start bone rotation
    D3DXMATRIX mtxToLocal;
    D3DXMatrixInverse(&mtxToLocal, NULL,
        &m_pShoulderBone->CombinedTransformationMatrix);

    D3DXVECTOR3 localNewEnd;    // Current end point
    D3DXVECTOR3 localTarget;    // IK target in local space
    D3DXVec3TransformCoord(&localNewEnd, &newEndPosition, &mtxToLocal);
    D3DXVec3TransformCoord(&localTarget, &target, &mtxToLocal);
    D3DXVec3Normalize(&localNewEnd, &localNewEnd);
    D3DXVec3Normalize(&localTarget, &localTarget);

    D3DXVECTOR3 localAxis;
    D3DXVec3Cross(&localAxis, &localNewEnd, &localTarget);

    if(D3DXVec3Length(&localAxis) == 0.0f)
        return;

    D3DXVec3Normalize(&localAxis, &localAxis);
    float localAngle = acosf(D3DXVec3Dot(&localNewEnd, &localTarget));

    // Apply the rotation that makes the bone turn
    D3DXMatrixRotationAxis(&rotation, &localAxis, localAngle);
    m_pShoulderBone->CombinedTransformationMatrix = rotation *
        m_pShoulderBone->CombinedTransformationMatrix;
    m_pShoulderBone->TransformationMatrix = rotation *
        m_pShoulderBone->TransformationMatrix;

    // Update matrices of child bones
    if(m_pShoulderBone->pFrameFirstChild)
        m_pSkinnedMesh->UpdateMatrices(
            (BONE*)m_pShoulderBone->pFrameFirstChild,
            &m_pShoulderBone->CombinedTransformationMatrix);
}
There! This humongous piece of code implements the concept of Two-Joint IK as explained earlier. As you can see, in this function we apply any rotation of the joints both to the transformation matrix and the combined transformation matrix of the bone. This is because the SkinnedMesh class recalculates the combined transformation matrix whenever the UpdateMatrices() function is called. So if you haven’t applied the IK rotation to both matrices, it would be lost when the UpdateMatrices() function is called.
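The point about updating both matrices can be illustrated with a deliberately simplified “bone” whose transforms are plain angles and whose combine step is addition rather than matrix multiplication (the names and the simplification are mine):

```cpp
// Deliberately simplified stand-in for a bone: transforms are plain
// angles, and "combining" is addition instead of matrix multiplication.
struct Bone
{
    float local;     // TransformationMatrix analogue
    float combined;  // CombinedTransformationMatrix analogue
};

// Analogue of SkinnedMesh::UpdateMatrices(): rebuild combined from local
void Update(Bone &bone, float parentCombined)
{
    bone.combined = bone.local + parentCombined;
}
```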
This chapter covered the basics of inverse kinematics (IK) and explained that, as a general problem, it is quite tough to solve (even though there are quite a few approaches to doing so). I covered two specific IK applications for character animation: Look-At and Two-Joint “Reach” IK. Two-Joint IK can also be used for placing legs on uneven terrain, making a character reach for a game-world object, and much more.
You would also need IK to make a character hold an object (such as a staff, for example) with both hands. This could, of course, be done with normal keyframe animation as well, but that often results in one hand not “holding on” perfectly and sometimes floating through the staff (due to interpolation between keyframes).
EXAMPLE 11.2
Example 11.2 has all the code for the Two-Joint IK solution covered in this section. You move the target point around with the mouse, and the character will attempt to reach it with one arm. Try to modify this example by limiting the freedom of the shoulder joint so that the arm can’t move through the rest of the body. Also, see if you can apply Two-Joint IK to the other limbs (the legs and the other arm) as well.
Hopefully this chapter served as a good IK primer for you to start implementing your own “hands-on” characters.
This chapter essentially wraps up the many individual parts of character animation in this book.
CHAPTER 11 EXERCISES
- Add weights to the IK functions, enabling you to blend between keyframed animation and IK animation.
- A good next step for you would be to combine a keyframed animation, such as opening a door, with IK. As the animation reaches the state of holding the door handle, blend in the Two-Joint IK with the door handle as the IK target.
- The soldier is holding the rifle with two hands. Glue the other hand (the one that is not the parent of the rifle) to it using IK.
- Implement IK for the legs and make the character walk on uneven terrain.
- Implement aiming for the soldier.
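For the first exercise, the core of such a weight is just a linear blend between the keyframed joint angle and the IK-solved joint angle (a sketch with hypothetical names; a full solution would blend the rotations themselves, e.g. with quaternion slerp):

```cpp
// Blend a keyframed angle with an IK-solved angle.
// weight = 0 -> pure keyframed animation, weight = 1 -> pure IK.
float BlendAngles(float keyframed, float ik, float weight)
{
    return keyframed + (ik - keyframed) * weight;
}
```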
FURTHER READING
[Melax00] Melax, Stan, “The Shortest Arc Quaternion,” Game Programming Gems, Charles River Media, 2000.
12
Wrinkle Maps
I’ll admit that this chapter is a bit of a tangent, and it won’t involve much animation code. This chapter will cover the fairly recent invention of wrinkle maps. In order to make your future characters meet the high expectations of the average gamer out there, you need to know, at the very least, how to create and apply standard normal maps to your characters. Wrinkle maps take the concept of normal maps one step further and add wrinkles to your characters as they talk, smile, frown, etc. Although this is a pretty subtle effect, it still adds that extra little thing needed to make your character seem more alive.
Before you get in contact with wrinkle maps, you need to have a solid understanding of how the more basic normal mapping technique works. Even though normal mapping is a very common technique in today’s games, it is surprisingly hard to find good (i.e., approachable) tutorials and information about it online (again, I’m talking about the programming side of normal maps; there are plenty of resources about the art side of this topic). I’m hoping this chapter will fill a little bit of this gap.
Normal mapping is a bump mapping technique—in other words, it can be used for making flat surfaces appear “bumpy.” Several programs make use of the term bump map, which in most cases takes the form of a grayscale height map. As an object is rendered in one of these programs, a pixel is sampled from the height map (using the UV coordinates of the object) and used to offset the surface normal. This in turn results in a variation of the amount of light this pixel receives. Normal mapping is just one of the possible ways of doing this in real time (and is also currently the de facto standard used in the games industry). Toward the end of the chapter I’ll also show you how to add specular lighting to your lighting calculations (something that again adds a lot of realism to the end result).
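To make the height-map idea concrete: the surface normal at a texel can be derived from the slope of the height map, using central differences between neighboring samples. This is a rough sketch (the names and the strength parameter are mine; real tools offer many more options):

```cpp
#include <cmath>

// Derive a tangent-space normal from a grayscale height map using
// central differences. heights is row-major, w x h, values in [0, 1].
// strength scales how pronounced the bumps appear.
void HeightToNormal(const float *heights, int w, int h,
                    int x, int y, float strength, float out[3])
{
    // Sample the height map with edge clamping
    auto at = [&](int px, int py) {
        if (px < 0) px = 0; if (px >= w) px = w - 1;
        if (py < 0) py = 0; if (py >= h) py = h - 1;
        return heights[py * w + px];
    };

    float dx = (at(x + 1, y) - at(x - 1, y)) * strength;
    float dy = (at(x, y + 1) - at(x, y - 1)) * strength;

    // The normal is perpendicular to the slope; Z points out of the surface
    float len = std::sqrt(dx * dx + dy * dy + 1.0f);
    out[0] = -dx / len;
    out[1] = -dy / len;
    out[2] = 1.0f / len;
}
```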
In this chapter you will learn the basics of normal mapping and how to implement the more advanced wrinkle maps:

- Introduction to normal maps
- How to create normal maps
- How to convert your geometry to accommodate normal mapping
- The real-time shader code needed for rendering
- Specular lighting
- Wrinkle maps
INTRODUCTION TO NORMAL MAPPING
So far in the examples, the Soldier character has been lit by a single light. The lighting calculation has thus far been carried out in the vertex shader, which is commonly known as vertex lighting. Normal mapping, on the other hand, is a form of per-pixel lighting, where the lighting calculation is done on a pixel-by-pixel level instead of at the coarser vertex level.
How much the light affects a single vertex on the character (how lit it is) has previously been determined by the vertex normal. Quite simply, if the normal faces the light source, the vertex is brightly lit; otherwise it is dark. On a triangle level, this
Chapter 12 Wrinkle Maps 257
means each triangle is affected by three vertices and their normals. This also means that for large triangles there’s a lot of surface that shares relatively little lighting information. Figure 12.1 demonstrates the problem with vertex-based lighting:
FIGURE 12.1
The problem with vertex-based lighting.
As you can see in Figure 12.1, the concept of vertex lighting can become a problem in areas where triangles are sparse. As the light moves over the triangle, the downside of vertex-based lighting becomes apparent. In the middle image the light is straight above the triangle, but since none of the triangle’s vertices are lit by the light, the entire triangle is rendered as dark, or unlit. One common way to fight this problem is, of course, to add more triangles and subdivide areas that could otherwise be modeled using fewer triangles.
People still use vertex lighting wherever they can get away with it. This is because any given lighting scheme usually runs faster in a vertex shader compared to a pixel shader, since you usually deal with fewer vertices than you do pixels. (The exception to this rule is, of course, when objects are far away from the camera, in which case some form of level-of-detail (LOD) scheme is used.) So in the case of character rendering, when you increase the complexity (add more triangles) to increase the lighting accuracy, you’re also paying the overhead of skinning calculations performed on each vertex, etc.
So to increase the lighting detail of a character without adding more triangles, you must perform the lighting calculations on a pixel level rather than a vertex level. This is where normal maps come into the picture.
WHAT ARE NORMAL MAPS?
The clue is in the name. A normal map stores a two-dimensional lookup table (or map) of normals. In practice this takes the form of a texture, which with today’s shader technology can be uploaded and used in real time by the GPU. The technique we use today in real-time applications, such as games, was first introduced in 1998 by Cignoni et al. in the paper “A general method for recovering attribute values on simplified meshes.” This was a method of making a low-polygon version of an object look similar to a high-polygon version of the same object.
It’s quite easy to understand the concept of a grayscale height map, where white (255) means a “high place,” and black (0) means a “low place.” Height maps have one channel to encode this information. Normal maps, on the other hand, have three channels (R, G, and B) that encode the X, Y, and Z values of a normal. This means that to get the normal at a certain pixel, we can just sample the RGB values from the normal map, transform them to X, Y, and Z, and then perform the lighting calculation based on this sampled normal instead of the normals from the vertices.
Generally speaking, there are two types of normal maps: a normal map is encoded either in object space or in tangent space. If the normal map stores normals encoded in object space, it means that the normals face the direction that they do in the game world. If the normals are stored in tangent space, they are stored relative to the surface normal of the object that they describe. Figure 12.2 attempts to show this somewhat fuzzy concept.