An Investigation Into the Use of Synthetic Vision in Computer Games
Universidad de Buenos Aires
Argentina, September 2002
The role and utility of synthetic vision in computer games is discussed. An implementation of a synthetic vision module is presented, based on two viewports rendered in real time, one representing static information and the other dynamic, with false colouring used for object identification, depth information and movement representation. The utility of this synthetic vision module is demonstrated by using it as input to a simple rule-based AI module that controls agent behaviour in a first-person shooter game.
Table of Contents

ACKNOWLEDGMENTS
INTRODUCTION
GENERAL OVERVIEW
IN DEPTH PRE-ANALYSIS
PREVIOUS WORK
PROBLEM STATEMENT
SYNTHETIC VISION MODEL
STATIC VIEWPORT
DEFINITIONS
LEVEL GEOMETRY
DEPTH
DYNAMIC VIEWPORT
BUFFERS
REMARKS
BRAIN MODULE: AI
AI MODULE
FPS DEFINITION
MAIN NPC
POWER-UPS
BRONTO BEHAVIOUR
BEHAVIOUR STATES SOLUTION
DESTINATION CALCULATION
BEZIER CURVE GENERATION
WALK AROUND
LOOKING FOR A SPECIFIC POWER-UP
LOOKING FOR ANY POWER-UP
LOOKING QUICKLY FOR A SPECIFIC POWER-UP
KNOWN PROBLEMS
EXTENDED BEHAVIOUR WITH DYNAMIC REACTIONS
DON'T WORRY
AVOID
INTERCEPT
DEMONSTRATION
LAST WORD ABOUT DYNAMIC AI
GAME APPLICATIONS ANALYSIS
ADVENTURES
FIRST PERSON SHOOTERS
THIRD PERSON ACTION GAMES
ROLE PLAY GAMES
REAL TIME STRATEGY
FLIGHT SIMULATIONS
OTHER SCENARIOS
CONCLUSIONS
FUTURE WORK
REFERENCES
APPENDIX A – CD CONTENTS
APPENDIX B – IMPLEMENTATION
FLY3D_ENGINE CLASSES
FLYBEZIERPATCH CLASS
FLYBSPOBJECT CLASS
FLYENGINE CLASS
FLYFACE CLASS
LIGHTS CLASSES
SPRITE_LIGHT CLASS
SVISION CLASSES
AI CLASS
VISION CLASS
VIEWPORT CLASSES
VIEWPORT CLASS
WALK CLASSES
CAMERA CLASS
CAMERA2 CLASS
OBJECT CLASS
PERSON CLASS
POWERUP CLASS
APPENDIX C – SOFTWARE USERS' GUIDE
SYSTEM REQUIREMENTS
INSTALLING DIRECTX
CONFIGURING FLY3D
RUNNING FLY3D
RUNNING SYNTHETIC VISION LEVELS
MODIFYING SYNTHETIC VISION LEVELS PROPERTIES
APPENDIX D – GLOSSARY
APPENDIX E – LIST OF FIGURES
APPENDIX F – LIST OF TABLES
Acknowledgments

I will be forever grateful to Alan Watt, who accepted to guide me through the whole project from the very beginning. I will never forget all the help he offered me during my little trip to Sheffield, despite the hard time he was going through. Special thanks to his wife, Dionéa, who is an extremely kind person.
From Sheffield as well, I want to thank Manuel Sánchez and James Edge for being very friendly and easy-going. Special thanks to Steve Maddock for all the help and support he gave me, and for the "focusing" talk that we had.
Very special thanks to Fabio Policarpo, for many reasons: he gave me access to the early full source code and following versions of the engine; he helped me with each piece of code when I was stuck or when I just did not know what to do; he even gave me a place in his offices at Niterói. Furthermore, all the people from Paralelo (Gilliard Lopes, Marcos, etc.) and Fabio's friends were very kind to me. Passion for games can be smelled inside Paralelo's offices.
I shall be forever in debt to Alan Cyment, Germán Batista and Javier Granada for making possible the presentation of this thesis in English.
Thanks to all the professors from the Computer Sciences Department who make a difference, like Gabriel Wainer, who not only teaches computer science, but a work philosophy as well.
I must mention all of my university partners and friends, with whom I shared many hard but fun days and nights of study. A special mention to Ezequiel Glinsky, an incredible teammate who was next to me at every academic step.
Thank you, Cecilia, for standing by my side all of these years.
I do not want to forget to thank Irene Loiseau, who initiated contact with Alan, and all the people from the ECI 2001 committee, who accepted my suggestion to invite Alan to come to Argentina to give a very nice, interesting, and successful course.
And, finally, very special thanks to Claudio Delrieux, who strengthened my passion for computer graphics. He was always ready to help me unconditionally before and during every stage of this thesis.
Introduction

General Overview
Today, 3D computer games usually use artificial intelligence (AI) for the non-player characters (NPCs), taking information directly from the internal database. The AI controls the NPCs' movements and actions with knowledge of the whole world's information, probably cheating if the developer does not put any constraints on this.
Since a large number of computer-controlled opponents or friends are human-like, it seems interesting and logical to give senses to those characters. This means that the characters could have complete or partial sensory systems, such as vision, hearing, touch, smell, and taste. They could then process the information sensed by those systems in a brain module, learn about the world, and act depending on the character's personality, feelings and needs. The character becomes a synthetic character that lives in a virtual world.
The research field of synthetic characters, or autonomous agents, investigates the use of senses combined with personality in order to make characters' behaviour more realistic, using cognitive memories and rule-based systems to produce agents that seem to be alive, interacting in their own world, and perhaps with some human interaction.
However, not much effort has been devoted to investigating the use of synthetic characters in real-time 3D computer games. In this thesis, we propose a vision system for NPCs, i.e., a synthetic vision, and analyze how useful and feasible its usage might prove in the computer games industry.
We think that the use of synthetic vision together with complex brain modules could improve gameplay and make for better and more realistic NPCs.
We have to note that our efforts were focused on investigating the use of the synthetic vision, and not the AI that uses it. For that reason, we developed only a simple AI module in a 3D engine in which we implemented our vision approach.
In Depth Pre-Analysis
We can regard synthetic vision as a process that supplies an autonomous agent with a 2D view of his environment. The term synthetic vision is used because we bypass the classic computer vision problems. As Thalmann et al. [Thal96] point out, we skip the problems of distance detection, pattern recognition and noisy images that would pertain to vision computations for real robots. Instead, computer vision issues are addressed in the following ways:
1) Depth perception – we can supply pixel depth as part of an autonomous agent's synthetic vision. The actual position of objects in the agent's field of view is then available by inverting the modeling and projection transforms.
2) Object recognition – we can supply object function or identity as part of the synthetic vision system.

3) Movement detection – we can render object velocity information directly into the synthetic vision viewport.
Thus the agent AI is supplied with a high-level vision system rather than an unprocessed view of the environment. For example, instead of just rendering the agent's view into a viewport and then having an AI interpret the view, we render objects in a colour that reflects their function or identity (although there is nothing to prevent an implementation where the agent AI has to interpret depth, from binocular vision say, and also recognize objects). With object identity, depth and velocity presented, the synthetic vision becomes a plan of the world as seen from the agent's viewpoint.
We can also consider synthetic vision in relation to a program that controls an autonomous agent by accessing the game database and the current state of play. Often a game database will be tagged with extra pre-calculated information so that an autonomous agent can give a game player an effective opponent. For instance, areas of the database may be tagged as good hiding places (they may be shadow areas), or pre-calculated journey paths from one database node to another may be stored. In using synthetic vision, we change the way the AI works, from a prepared, programmer-oriented behaviour to the possibility of novel, unpredictable behaviour.
A number of advantages accrue from allowing an autonomous agent to perceive his environment via a synthetic vision module. First, it may enable an AI architecture for an autonomous agent that is more 'realistic' and easier to build. Here we refer to an 'on-board' AI for each autonomous agent. Such an AI can interpret what is seen by the character, and only what is seen. Isla and Blumberg [Isla02] refer to this as sensory honesty and point out that it "…forces a separation between the actual state of the world and the character's view of the state of the world". Thus the synthetic vision may render an object but not what is behind it.
Second, a number of common game operations can be controlled by synthetic vision. A synthetic vision can be used to implement local navigation tasks such as obstacle avoidance. Here the agent's global path through a game level may be controlled by a high-level module (such as A* path planning or game logic). The local navigation task may be to attempt to follow this path by taking local deviations where appropriate. Also, synthetic vision can be used to reduce collision checking. In a games engine this is normally carried out every frame by checking the player's bounding box against the polygons of the level or any other dynamic object. Clearly, if there is free space ahead you do not need every-frame collision checking.
Third, easy agent-directed control of the synthetic vision module may be possible; for example, looking around to resolve a query, or following the path of a moving object. In the case of the former, this is routinely handled as a rendering operation. A synthetic vision can also function as part of a method for implementing inter-agent behaviour.
Thus, the provision of synthetic vision reduces to a specialized rendering, which means that the same technology developed for fast real-time rendering of complex scenes is exploited in the synthetic vision module. This means that real-time implementation is straightforward.
However, despite the ease of producing a synthetic vision, it seems to be only an occasionally employed model in computer games and virtual reality. Tu and Terzopoulos [TuTe94] made an early attempt at synthetic vision for artificial fishes. The emphasis of this work is a physics-based model and reactive behaviour such as obstacle avoidance, escaping and schooling. Fishes are equipped with a "cyclopean" vision system with a 300-degree field of view. In their system an object is "seen" if any part of it enters the view volume and is not fully occluded by another object. Terzopoulos et al. [Terz96] followed this with a vision system that is less synthetic, in that the fishes' vision system is initially presented with retinal images which are binocular photorealistic renderings. Computer vision algorithms are then used to accomplish, for example, predator recognition. This work thus attempts to model, to some extent, the animal visual processes rather than bypassing these by rendering semantic information into the viewport.
In contrast, a simpler approach is to use false colouring in the rendering to represent semantic information. Blumberg [Blum97] uses this approach in a synthetic vision based on image motion energy that is used for obstacle avoidance and low-level navigation. A formula derived from image frames is used to steer the agent, which, in this case, is a virtual dog. Noser et al. [Nose95] use false colouring to represent object identity and, in addition, introduce a dynamic octree to represent the visual memory of the agent. Kuffner and Latombe [Kuff99] also discuss the role of memory in perception-based navigation, with an agent planning a path based on its learned model of the world.
One of the main aspects that must be addressed for computer games is to make the synthetic vision fast enough. We achieve this, as discussed above, by making use of existing real-time rendering speed-ups as used to provide the game player with a view of the world. We propose that two viewports can be effectively used to provide two different kinds of semantic information. Both use false colouring to present a rendered view of the autonomous agent's field of view. The first viewport represents static information and the second viewport represents dynamic information. Together the two viewports can be used to control agent behaviour. We discuss only the implementation of simple memoryless reactive behaviour, although far more complex behaviour is implementable by exploiting synthetic vision together with memory and learning. Our synthetic vision module is used to demonstrate low-level navigation, fast object recognition, fast dynamic object recognition and obstacle avoidance.
Previous Work
Beyond the brief mentions in the previous section, it is necessary to give a short description of each of the relevant works in the field.
We can think of the following two opposing approaches:

 Pure Synthetic Vision, also called Artificial Vision in [Nose95b]; quoting the same reference, it is "a process of recognizing the image of the real environment captured by a camera (…) [which] is an important research topic in robotics and artificial intelligence."

 No Vision At All, i.e., not using any vision system for our characters.

We can imagine a straight line with Pure Synthetic Vision at one extreme and No Vision At All at the other end. Each of the works reviewed below falls somewhere in between.
Figure 1. Approaches: a graphical view.
Bruce Blumberg describes in [Blum97a] a synthetic vision based on motion energy that he uses for obstacle avoidance and low-level navigation of his autonomous characters. He renders the scene with false colouring, taking the information from a weighted formula that combines flow (pixel colours from the last frames) and mass (based on textures), dividing the image in half and taking differences in order to steer. A detailed implementation can be found in [Blum97b]. Related research on Synthetic Characters can be found in [Blum01].
James Kuffner presents in [Kuff99a] a false colouring approach that he uses for digital actor navigation with collision detection, implementing visual memory as well. Details can be found in [Kuff99b]. For a quick related introduction and other papers, you can navigate through [Kuff01].
Hansrudi Noser et al. use in [Nose95a] synthetic vision for digital actor navigation. Vision is the only connection between the environment and the actors. Obstacle avoidance, as well as knowledge representation, learning and forgetting problems, are solved based on the actors' vision system. A voxel-based memory is used for all these tasks, as well as for path searching. Their vision representation makes it difficult to quickly identify visible objects, which is one of our visual system's goals; even so, it would be very interesting to integrate their visual memory proposals with this thesis. The mentioned ideas are used in [Nose98], plus aural and tactile sensors, to make a simple tennis game simulation.
Kuffner's and Noser's vision works were the most influential ones during the making of this thesis.
Olivier Renault et al. [Rena90] develop a 30x30 pixel synthetic vision for animation, using the front buffer for a normal rendering, the back buffer for object identification, and the z-buffer for distances. They propose high-level behaviours to go through a corridor avoiding obstacles.
Rabie and Terzopoulos implement in [Rabi01] a stereoscopic vision system for artificial fish navigation and obstacle avoidance. Similar ideas are used by Tu and Terzopoulos in [TuTe94] to develop fish behaviours more deeply.
Craig W. Reynolds explains in [Reyn01] a set of behaviours for autonomous agents' movement, such as 'pursuit', 'evasion', 'wall following' and 'unaligned collision avoidance', among others. This very interesting work could serve as a guide for developing a complex AI module for computer game NPCs.
Damián Isla discusses in [Isla02] AI potential using synthetic characters.
John Laird discusses in [Lair00a] human-level AI in computer games. He proposes in [Lair01] design goals for autonomous synthetic characters. Basically, his research is focused on improving artificial intelligence for NPCs in computer games; an example is the Quakebot [Lair00b] implementation. For more research from J. Laird, refer to [Lair02]. Most other investigations are based on the ideas of the previously mentioned authors.
Problem Statement
We will define our concept of Synthetic Vision as the visual system of a virtual character who lives in a 3D virtual world. This vision represents what the character sees of the world, the part of the world sensed by his eyes. Technically speaking, it takes the form of the scene rendered from his point of view.
However, in order to have a vision system useful for computer games, it is necessary to find vision representations from which we can extract enough information to make autonomous decisions.
The Pure Synthetic Vision approach is not useful today for computer games, since the amount of information that can be gathered in real time is very limited: almost only shape recognition and obstacle avoidance could be achieved. This is a research field in its own right; any Robot Vision literature shows the problems related to it.
No Vision At All is what we do not want.
So, our task is to find a model that falls somewhere in between (Figure 2).
Figure 2. Aimed model placement.
Synthetic Vision Model
Our goal was to create a special rendering from the character's point of view, producing a screen representation in a viewport from which the AI system could eventually undertake:

 Obstacle avoidance
 Low-level navigation
 Fast object recognition
 Fast dynamic object detection
We must understand "fast" from a subjective, results-dependent point of view, as "fast enough" to produce an acceptable job in a 3D real-time computer game.
To reach these goals, we propose a synthetic vision approach that uses two viewports: the first one represents static information, and the second one dynamic information.
We will assume a 24-bit RGB model to represent colour per pixel.
Figure 3 shows an example of the normal rendered viewport and the corresponding desired static viewport.
Figure 3. Static viewport (left) obtained from the normal rendered viewport (right).
Static Viewport
The static synthetic vision is mainly useful for identifying objects, taking the form of a viewport with false colouring, similar to that described in [Kuff99].
Definitions
We will make the following definitions:
Object: an item with 3D shape and mass.

Class of Object: a grouping of objects with the same properties and nature.

In our context, for example, the health power-up located near the fountain on some imaginable game level is an object, whilst all the health power-ups of the game level conform a class of object.
Each class of object has an associated colour id. That is, there exists a function c that constitutes a mapping from classes of objects to colours. This is an injective function, since no two different classes of objects have the same colour.

c : CO → C

The mapping function from classes of objects to colours.

∀ co1, co2 ∈ CO : c(co1) = c(co2) ⇒ co1 = co2

The function is injective.

Since CO is a finite set and C is infinite, not all the possible colours are used, and the function is neither surjective nor bijective.

However, we can create a mapping table in order to know which colour corresponds to each class of object. See an example in Table 1.
Class of Object     Colour
Health Power-Ups    (1, 1, 0)
Ammo Power-Ups      (0, 1, 1)

Table 1. Example mapping from classes of objects to colours.
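As an illustration, the mapping c can be held in a simple lookup table. The sketch below (C++; the type and entry names are our own, not taken from the implementation) hard-codes the two entries of Table 1:

#include <map>
#include <string>

struct Colour { float r, g, b; };

// One colour per class of object; keeping every entry distinct is what
// makes the mapping injective.
const std::map<std::string, Colour> classColour = {
    { "HealthPowerUp", { 1.0f, 1.0f, 0.0f } },
    { "AmmoPowerUp",   { 0.0f, 1.0f, 1.0f } },
};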
Level Geometry

However, the level geometry may not strictly be an object as defined above, because it is usually made of a shell of polygons that will not necessarily make up a solid object with mass.
Despite what has just been said, we will divide the level geometry into three classes that will be part of the CO set of classes of objects:

 Floor: any polygon from the level geometry whose normalized normal has a Z component greater than or equal to 0.8.

 Ceiling: any polygon from the level geometry whose normalized normal has a Z component less than or equal to -0.8.

 Wall: every polygon from the level geometry that does not fit either of the two previous cases.

We assume a coordinate system where Z points up, X right, and Y forward (figure 4); a classification sketch is given after the figure.
Figure 4. The coordinate system used.
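The classification can be computed directly from each polygon's normal. The following sketch (hypothetical function and type names) applies the thresholds above under the stated Z-up convention:

#include <cmath>

enum class GeomClass { Floor, Ceiling, Wall };

// Classify a level polygon by the Z component of its unit normal (Z up).
GeomClass classify(float nx, float ny, float nz) {
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    float z = (len > 0.0f) ? nz / len : 0.0f;  // normalize defensively
    if (z >= 0.8f)  return GeomClass::Floor;
    if (z <= -0.8f) return GeomClass::Ceiling;
    return GeomClass::Wall;
}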
In Table 2, we give an extended mapping table, adding the level geometry classes of objects.

Class of Object     Colour
Health Power-Ups    (1, 1, 0)
Ammo Power-Ups      (0, 1, 1)
Depth

Near Plane (NP): the near plane of the scene view frustum volume [Watt01].

Far Plane (FP): the far plane of the scene view frustum volume [Watt01].

We can gain knowledge about position by combining the previous variables with the depth information of each pixel rendered in the static viewport.

When the static viewport is being rendered, a depth buffer is used in order to know whether it is necessary to draw a given pixel. When the rendering is finished, each pixel of the static viewport has a corresponding value in the depth buffer.
Let dx,y = DepthBuffer[x,y] be the depth value of the pixel at coordinates (x, y).
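How dx,y maps back to a distance depends on the engine's depth encoding. As a sketch, assuming an OpenGL-style hyperbolic depth buffer with values in [0, 1], the eye-space distance can be recovered from NP and FP as follows:

// d = 0 at the near plane (np), d = 1 at the far plane (fp).
float eyeDistance(float d, float np, float fp) {
    return (np * fp) / (fp - d * (fp - np));
}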
Dynamic Viewport

The dynamic viewport represents movement information, using the definitions below to obtain each pixel colour component. The dynamic viewport has the same size as the static one.
R, G, B ∈ [0, 1]

The colours Red, Green, and Blue are real numbers between 0 and 1.

And given:

Vmax ∈ ℝ+

the maximum velocity allowed in the system, a constant positive real number.

If Vx,y is the velocity vector of the object located at coordinates (x, y) in the static viewport, then:

R(x,y) = min(||Vx,y|| / Vmax, 1)

The red colour component is the minimum between 1 and the velocity magnitude of the object located at coordinates (x, y) on the static viewport divided by the maximum velocity allowed.
If D is the direction/vision vector of the agent, we can normalize it and the velocity vector V, and take:

c = Vx,y · D (with both vectors normalized)

c is the dot product, i.e., the cosine of the angle, between the normalized Vx,y (velocity vector of the object located at coordinates (x, y) on the static viewport) and D (the non-player character direction and vision vector). The angle ranges from 0 to 180º.
c is a real number between –1 and 1.

G(x,y) = c · 0.5 + 0.5

The green colour component is a mapping of the cosine c into the interval [0, 1]. A cosine value of zero produces a green colour component of 0.5.
s = √(1 – c²)

s ∈ [0, 1]

s is the sine of the angle between the normalized Vx,y (velocity vector of the object located at coordinates (x, y) on the static viewport) and D (the non-player character direction and vision vector). It is calculated from the cosine of the same angle. s is a real number between 0 and 1.

B(x,y) = s

The blue colour component is a direct mapping of the sine s into the interval [0, 1]. A cosine value of zero produces a green colour component of 0.5 and a blue colour component of 1.
With these definitions, a fully static object will have a colour of (0.0, 0.5, 1.0). Dynamic objects will have different colours depending on their movement direction and velocity.
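Putting the three components together, a per-pixel encoder might look like the following sketch (the vector type and helper function are illustrative; D is assumed to be already normalized):

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// V: velocity of the object seen at pixel (x, y); D: agent's unit view vector.
void dynamicColour(const Vec3& V, const Vec3& D, float vmax,
                   float& R, float& G, float& B) {
    float speed = std::sqrt(dot(V, V));
    R = std::min(speed / vmax, 1.0f);            // clamped speed ratio
    float c = (speed > 0.0f) ? dot(V, D) / speed // cosine between V and D
                             : 0.0f;             // static object: c = 0
    G = c * 0.5f + 0.5f;                         // map [-1, 1] into [0, 1]
    B = std::sqrt(std::max(0.0f, 1.0f - c * c)); // sine of the same angle
}

For a static object this yields (0.0, 0.5, 1.0), as stated above.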
Buffers
The three kinds of information defined so far (static, depth, and dynamic) can be kept in memory buffers for later use. Each element of the static and dynamic buffers contains three values corresponding to the colour components; the depth buffer contains a single value. The size of each buffer is fixed: screen height times screen width, i.e., a two-dimensional matrix. So, given a viewport coordinate (x, y), you can obtain from the buffers the object id, its depth and its dynamic information.
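A minimal sketch of such buffers, assuming the 160x120 viewport size used later in the Walk Around section, could be:

// Per-pixel buffers filled after rendering the two viewports.
struct SyntheticVisionBuffers {
    static const int W = 160, H = 120;
    float staticRGB[H][W][3];   // false colour: class-of-object id per pixel
    float depth[H][W];          // one depth value per pixel
    float dynamicRGB[H][W][3];  // (R, G, B) velocity encoding per pixel
};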
Figure 5. From left to right: static viewport, normal rendered viewport, and dynamic viewport.
See Appendix B for details of our synthetic vision implementation over Fly3D [Fly01; Watt01; Watt02].

The expected dynamic viewport is shown together with the static and normal viewports in figure 5.
Brain Module: AI

AI Module
In order to demonstrate a possible usage of the described synthetic vision, an artificial intelligence module was developed, in an effort to give autonomous behaviour to an NPC within an FPS (first-person shooter) game.
Due to its simplicity, the developed module could not be used in commercial games without the addition of new features and behaviours, and refinements to the implemented movements. It is basically intended to show how our synthetic vision approach could be used, without deeply developed AI techniques.
FPS Definition
The main characteristics of FPS games are:

 The game takes place in a series of 3D modeled regions (interiors, exteriors, or a combination of both environments) called levels.

 The player must fulfill different tasks in each level in order to be able to reach the following one. That is to say, every level consists of a general goal (reach the next level) that is made up of a set of specific goals, such as: get the keys to open and walk through the door, kill the level's boss, etc. It is also very common to find sub-goals that are not necessary to achieve the general one, such as "getting (if possible) 100 gold units".

 The player can acquire different weapons during his advance through the game. In general, he will have around 10 different kinds of weapons at his disposal, most of them with finite ammunition.

 The player has an energy or health level; as soon as it reaches 0, he dies.

 The player constantly has to face enemies (NPCs) who will try to attack and kill him according to their characteristics, with weapons or in hand-to-hand combat.

 It is possible that some NPCs collaborate with the player.

 The player is able to gather items, called power-ups, that increase health or weapon values, or give special powers for a limited time.

 The player sees the world as if he were seeing it through the eyes of the character that he is personifying, i.e., a first-person view.
It is recommended to browse the Internet a little to become more familiar with this kind of game. Wolfenstein 3D [Wolf01] was one of the pioneers, whereas the Unreal [Unre01] and Quake [Quak01] series are more than good enough examples.
Next, the basic characteristics of the FPS to be implemented with our synthetic vision model will be described.
Main NPC
The FPS is inhabited by the main character, to whom we will give an autonomous life. His name is Bronto. He has two intrinsic properties: Health (H) and Weapon (W). Although he has a weapon property, in our implementation he cannot shoot.
As an invariant,

H ∈ ℕ₀, 0 ≤ H ≤ 100

Bronto's health ranges between 0 and 100, natural numbers. Initially,

Hini = 100

The first rule that we have is:

H = 0 ⇒ Bronto dies

Something similar happens with the weapon property, which follows the same invariant:

W ∈ ℕ₀, 0 ≤ W ≤ 100

Bronto's weapon value ranges between 0 and 100, natural numbers. Initially,

Wini = 100

But, in this case, the only meaning of W = 0 is that Bronto has no ammunition.
In addition to the fact that Bronto cannot shoot, he cannot receive enemy shots either. Given these two simplifications, we have decided that both health and weapon values diminish over time linearly and discretely. That is to say, given:

 t ∈ ℝ+, the current time
 tw0 ∈ ℝ+, the time from which the weapon value starts to diminish
 W0, the weapon value at time tw0 (W0 = Wini initially)
 tdw, the time interval, in seconds, of weapon decrease
 Dw, the weapon decrease factor

we define that:

t = tw0 ⇒ W(t) = W0
W assumes the value W0 when the system is at the initial counting time tw0.

∀ k ≥ 0 : tw0 + k·tdw ≤ t < tw0 + (k+1)·tdw ⇒ W(t) = max(W(tw0 + k·tdw) – Dw, 0)

If the system is at a time greater than tw0, W linearly decreases its value at tdw intervals of time, until reaching zero and remaining at that value.
It is analogous for health. Given:

 th0 ∈ ℝ+, the time from which the health value starts to diminish
 H0, the health value at time th0 (H0 = Hini initially)
 tdh, the time interval, in seconds, of health decrease
 Dh, the health decrease factor

t = th0 ⇒ H(t) = H0

∀ k ≥ 0 : th0 + k·tdh ≤ t < th0 + (k+1)·tdh ⇒ H(t) = max(H(th0 + k·tdh) – Dh, 0)
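One consistent reading of this stepwise decay (the value drops by the decrease factor once per elapsed interval, which matches the timings of figure 6) is sketched below:

#include <algorithm>
#include <cmath>

// Value starts at v0 at time t0 and loses 'drop' units every 'interval'
// seconds, never going below zero.
int decayedValue(int v0, double t, double t0, double interval, int drop) {
    if (t <= t0) return v0;
    int steps = static_cast<int>(std::floor((t - t0) / interval));
    return std::max(v0 - steps * drop, 0);
}

For example, with v0 = 100, interval = 1 and drop = 1, the value reaches 0 at t0 + 100 seconds, as the weapon curve does in figure 6.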
Figure 6. An example of the weapon and energy diminishing system: the game starts, then both properties decrease gradually. The weapon value reaches 0 at 100 seconds, whereas health does the same at 170 seconds. At that moment Bronto dies.
Figure 6 shows an example: both values decrease gradually; the weapon value reaches 0 at 100 seconds, whereas the health value reaches 0 at 170 seconds, and at that instant Bronto dies.
Power-Ups
Initial health and weapon values, as well as the diminishing system, have already been defined, and it has been specified that when health reaches zero Bronto dies. We now have to define some way of increasing Bronto's properties: it will be by means of power-ups. Our FPS counts with only two power-ups: Health (Energy) and Weapon (Ammunition or Ammo). When Bronto passes 'over' one of them, an event is executed that increases the corresponding Bronto property value by a fixed amount. This means that taking a power-up restarts the cycle by which Bronto's property value diminishes. For example, suppose that Bronto takes:
Trang 25 A weapon power-up at 37 seconds; its ammunition value will increases from 64 to 74,establishing a time tw0 of 37, from which the weapon value must be discounted every
tdh seconds
Another weapon power-up, but this time at 43 seconds, its value increases from 70 to
80, and establishing again tw0 time, but now at 43
A health power-up at 156 seconds, its energy value increases from 7 to 12, andreestablishes time th0 to 156
Figure 7. The same example as figure 6, but this time Bronto takes power-ups: two ammo power-ups at 37 and 43 seconds, and one energy power-up at 156 seconds. The weapon value now reaches 0 at 123 seconds, whereas energy does so at 176 seconds, when Bronto dies.
Bronto's weapon value will now first reach zero at 123 seconds, and he will die at 176 seconds, when energy reaches zero. The situation is graphically represented in figure 7.
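A pickup event thus reduces to clamping the increased value and restarting the corresponding decay clock. A sketch for the health case follows (names are illustrative; the figure 7 example implies increases of +10 for ammo and +5 for health):

#include <algorithm>

// Increase H by 'amount' (clamped to 100) and restart its decay clock.
void onHealthPowerUp(int& H, double& th0, double now, int amount) {
    H = std::min(H + amount, 100);
    th0 = now;  // next decrement happens tdh seconds from 'now'
}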
The whole process is described in figure 8 by means of a state diagram. When the game starts (Start Game event), initial values are established. When the game is running (Game Running state), several events can arise: those that decrease Bronto's weapon and energy values, triggered when the system reaches a given time, and those that increase Bronto's weapon and energy values, triggered when a power-up is taken. The event produced when Bronto's energy reaches zero makes the transition to Bronto's death.
Figure 8. Events that affect, and are affected by, Bronto's health and weapon values during the game.
Bronto Behaviour
Now we will describe the behaviour that Bronto assumes during the game, based on the values of the health and weapon properties.
Let:

 Hut be the health upper threshold
 Hlt be the health lower threshold
 Wut be the weapon upper threshold
 Wlt be the weapon lower threshold
 H and W be the health and weapon property values, as defined in the previous sections

such that:

0 < Hlt < Hut < 100

and

0 < Wlt < Wut < 100

Bronto will be in any of the following six possible states:
1. Walk Around (WA), when H ≥ Hut and W ≥ Wut. Hut and Wut denote limits or thresholds; when H and W are over them, Bronto has no specific objective and his behaviour is reduced to walking without a fixed path or course.
Trang 272 Looking for Health (LH), when Hlt H < Hut and Wut W When Bronto has enoughammunition, that is over Wut, and starts to feel a need for energy, its value is inbetween Hlt and Hut, his objective is to pickup health power-ups.
3 Looking for Weapon (LW), when Hut H and Wlt W < Wut When Bronto has enoughhealth, that is over Hut, and starts to feel a need for ammo, its value is in between Wlt
and Wut, his objective is to pickup weapon power-ups
4 Looking for Any Power-Up (LHW), when Hlt H < Hut and Wlt W < Wut When Brontofeels a need for both health and weapon, his objective is to pickup any power-up
5 Looking Quickly for Weapon (LQW), when Hlt H and W Wlt When Bronto feels anextreme weapon necessity, caused because its value is under the lower threshold Wlt,his objective is to collect ammo power-ups as soon as possible
6 Looking Quickly for Health (LQH), when H < Hlt When Bronto feels an extreme healthnecessity, caused because its value is under the lower threshold Hlt, it imposes thesearch for collect energy power-ups as soon as possible over any other behaviour,knowing that if his health value reaches zero, he will die
Bronto's behaviour is represented by means of a reduced state diagram in figure 9. Even though it is possible to make transitions from one state to any other, only usual and expected transitions are represented with arrows. That is to say, a change from Walk Around to Looking Quickly for Health would have to be caused by a steep energy drop, passing from H ≥ Hut to H < Hlt. That transition is nearly impossible under the conditions defined for our FPS if balanced values of Hut and Hlt are used.
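The six conditions above partition the (H, W) space and translate directly into a selection function; a sketch, giving state 6 priority over state 5 as the text prescribes:

enum class BehaviourState { WA, LH, LW, LHW, LQW, LQH };

BehaviourState selectState(int H, int W, int Hlt, int Hut, int Wlt, int Wut) {
    if (H < Hlt) return BehaviourState::LQH;  // health critical: overrides all
    if (W < Wlt) return BehaviourState::LQW;  // ammo critical (here H >= Hlt)
    bool needH = H < Hut;                     // Hlt <= H < Hut
    bool needW = W < Wut;                     // Wlt <= W < Wut
    if (needH && needW) return BehaviourState::LHW;
    if (needH)          return BehaviourState::LH;
    if (needW)          return BehaviourState::LW;
    return BehaviourState::WA;                // H >= Hut and W >= Wut
}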
Figure 9. Bronto's behaviour reduced state diagram. Only expected transitions between states are represented with arrows, even though it is possible to go from one state to any other.
Behaviour States Solution
The behaviour states are solved using the static viewport. From that viewport we can obtain the information of interest, such as the presence of power-ups and data for navigation and obstacle avoidance.

Basically, the process consists of analyzing, on a per-frame basis, the information provided by the static viewport, depth included, and choosing a destination point in the viewport that corresponds to level geometry, specifically floor. The chosen coordinate is unprojected in order to obtain the world coordinates of the destination point. Then, a Bezier curve is generated between Bronto's current position and the destination. The generated curve is the path to follow.
Destination Calculation
Once the destination point has been chosen in (x, y) viewport coordinates, it is unprojected in order to obtain its world coordinates. To do that, in addition to the chosen coordinates, the current model matrix M and projection matrix P (both found in every 3D engine) are needed, together with the viewport V and the camera angle used to render it.

The process consists of applying the matrices in the inverse order of the projection process described in [Eber01].
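A sketch of this unprojection, assuming column-vector 4x4 matrices and hypothetical Vec3/Vec4/Mat4 helpers with an inverse() function (as provided by most engine math libraries, not actual Fly3D names):

// Vec3/Vec4/Mat4 and inverse() are assumed engine helpers.
Vec3 unproject(float x, float y, float d,
               const Mat4& model, const Mat4& projection,
               int vpWidth, int vpHeight) {
    // Viewport pixel and depth -> normalized device coordinates in [-1, 1].
    Vec4 ndc(2.0f * x / vpWidth - 1.0f,
             2.0f * y / vpHeight - 1.0f,
             2.0f * d - 1.0f,
             1.0f);
    // Apply the inverse transforms and undo the perspective divide.
    Vec4 w = inverse(projection * model) * ndc;
    return Vec3(w.x / w.w, w.y / w.w, w.z / w.w);
}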
Bezier Curve Generation
The Bezier curve is generated on the plane parallel to the XY plane on which Bronto is standing; therefore it is a 2D curve. It is not necessary to construct a 3D curve in this case because Bronto has only 4 degrees of freedom. A 3D curve must be calculated and used when dealing with objects with 6 degrees of freedom, such as space shuttles.
Figure 10. Bezier curve. The control points (Pi), initial position (B), destination point (T), and Bronto's visual/direction vector (D) are represented.
Four control points are needed to draw a Bezier curve of degree 3. The first and last control points coincide with the initial and final points of the curve; the other two control points do not touch the curve (except when successive control points are collinear). Refer to [Watt01] for more information about Bezier curves.
Discarding the Z coordinate in all cases, let:

 B: Bronto's current position in world coordinates
 T: the destination point in world coordinates
 D: Bronto's direction/vision vector
 PathDistance = ||T – B||, the shortest distance between the current position and the destination
 PathDirection = (T – B) / PathDistance, the normalized vector from the current position to the destination

The control points are established from these quantities; the variables used and the curve control points are shown in figure 10.
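Once the four control points are set, the path can be evaluated with the standard cubic Bernstein form; a self-contained 2D sketch:

struct Vec2 { float x, y; };

// Evaluate the cubic at parameter u in [0, 1]; P0 and P3 are the curve ends.
Vec2 bezier3(Vec2 P0, Vec2 P1, Vec2 P2, Vec2 P3, float u) {
    float v = 1.0f - u;
    float b0 = v * v * v, b1 = 3.0f * v * v * u;
    float b2 = 3.0f * v * u * u, b3 = u * u * u;
    return Vec2{ b0*P0.x + b1*P1.x + b2*P2.x + b3*P3.x,
                 b0*P0.y + b1*P1.y + b2*P2.y + b3*P3.y };
}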
Another parameter used is Cf, 0.0 < Cf ≤ 100.0, the percentage of the walked path after which a new destination point must be selected. A further parameter is Bronto's half width, Bbbr. In world coordinates it is determined as half the width of the bounding box, whereas what is actually used is an estimate of Bronto's half width measured in synthetic vision viewport pixels.
Bbbr, 0 < Bbbr < 80. This parameter becomes important under certain circumstances, such as when Bronto tries to go through corridors that are narrower than his width.
Two other constants are used to determine the viewport margins in which destination points cannot be chosen:

 Bwaupd, 0 < Bwaupd < 80, the upper and lateral margins, in pixels, of the synthetic vision viewport; and

 Bwalpd, 0 < Bwalpd < 80, the lower margin, in pixels, of the synthetic vision viewport.
Figure 11. Static viewport and the Walk Around margins used. Pixels outside the bold rectangle cannot be chosen as destination points.
Figure 11 represents the mentioned margins inside the viewport.
Finally, the static viewport information of the synthetic vision is also used, defined as a matrix of 160 columns by 120 rows. The first matrix row (row number 0) corresponds to the lowest viewport line. The viewport information is accessed as:

vps[i][j], i, j ≥ 0, where 0 ≤ i < 160 and 0 ≤ j < 120, with i a column and j a row.
If a new destination must be chosen, the heuristic tries to find a rectangle whose lower side coincides with the lowest line of the static viewport, whose width is (2·Bbbr + 1), whose minimum height is (Bwalpd + Bwaupd), and which is fully coloured as floor. Every rectangle that meets these conditions is called a free way. See an example in figure 12.
Figure 12. A free way inside the static viewport. The white area is floor, the grey area is wall. The bold rectangle shows the chosen free way. Within that rectangle are the upper and lower margin lines, and the point that the heuristic will select as the new destination.
When a free way is found (the strategies used are explained in the Walk Around Algorithm section), the half-width coordinate of the rectangle is selected as the destination x coordinate, and the y coordinate is determined by the height of the tallest free way minus the upper margin (Bwaupd).
If the heuristic fails to find a free way, it tries to turn randomly left or right. If it cannot turn either way, no destination point is chosen.
Once Bronto has walked the whole chosen path and no new destination point has been set, he rotates 180º left or right. This can happen when the heuristic could not find a free way between Cf% of the path and the end of the current path.
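The core of this search, testing whether a candidate rectangle is a free way, can be sketched as follows (a simplification of the predicate formalized in the next section; floor pixels are represented here by a single character code instead of the floor colour id):

// vps is indexed as vps[column][row], row 0 being the viewport's bottom line;
// 'F' stands here for the floor colour id.
bool isFreeWay(const char vps[160][120], int x0, int yG,
               int Bbbr, int Bwaupd, int Bwalpd) {
    int width  = 2 * Bbbr + 1;
    int height = yG + Bwaupd + Bwalpd;
    if (x0 < 0 || x0 + width > 160 || height > 120) return false;
    for (int i = 0; i < width; ++i)       // columns across the rectangle
        for (int j = 0; j < height; ++j)  // rows from the bottom line up
            if (vps[x0 + i][j] != 'F')
                return false;
    return true;
}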
Walk Around Algorithm
The Walk Around pseudocode, together with the free way search strategies, is presented in this section. Details are explained and examples shown at each step where necessary.

Input:
 vps[160][120]: a matrix with the content of each static viewport pixel. Row 0 corresponds to the viewport's bottom line.
 Bbbr: Bronto's estimated half width in viewport pixels.
 Bwaupd: a satisfactory way's lateral and upper margin, in viewport pixels.
 Bwalpd: a satisfactory way's lower margin, in viewport pixels.
 Cf: the percentage of the walked path after which a new destination point must be selected.

Output:
 A new destination point in (x, y) viewport coordinates, if one was found.
If at least Cf% of the current path has not been walked, return.
// Initialize the fail flag for the satisfactory-way search.
fail = false
newDestination = true
// freeway is a predicate that tells whether there exists a potentially satisfactory free way. See an example of a free way in figure 12.
freeway = ∃ x0, yG ≥ 0 : 0 ≤ x0 < x1 < 160 ∧ (x1 – x0) = 2·Bbbr ∧ 0 ≤ yG < (120 – Bwaupd – Bwalpd) ∧ (∀ i, j ≥ 0 : 0 ≤ i < (2·Bbbr + 1) ∧ 0 ≤ j < (yG + Bwaupd + Bwalpd) ⇒ vps[x0 + i, j] = 'floor')

// If there does not exist any free way, then the search is unsatisfactory: it fails.
If (freeway = false) then fail = true
// A free way is chosen if one of the existing ones is satisfactory according to the search strategies; otherwise the fail flag is set.
If (freeway = true) then

// Strategy 1: if it exists, the central free way is chosen. A central free way is represented in figure 12.
If (x0 = 79 – Bbbr) makes the predicate true, then that x0 is chosen.
// Strategy 2: if it exists, the rightmost free way of the viewport's left half is chosen. This happens when there is no central free way and, in the row closest to the viewport's bottom that contains at least one non-floor pixel, there is a non-floor pixel closer to the left side than to the right side of the searching rectangle. See an example in figure 13. If the precondition is true but no free way is then found in the viewport's left half, the strategy fails. See an example in figure 14.
If (∃ i, j, k ≥ 0 : 0 < i < j ≤ Bbbr ∧ 0 ≤ k < (Bwaupd + Bwalpd) ∧ (∀ w, s ≥ 0 : (79 – Bbbr) ≤ w < (79 + Bbbr) ∧ 0 ≤ s < k ⇒ vps[w, s] = 'floor') ∧ (∀ t, u ≥ 0 : ((79 – Bbbr) ≤ t ≤ (79 – Bbbr + i) ⇒ vps[t, k] = 'floor') ∧ ((79 + Bbbr – j) < u ≤ (79 + Bbbr) ⇒ vps[u, k] = 'floor')) ∧ vps[79 + Bbbr – j, k] ≠ 'floor') then choose the maximum x0, with x0 < 79, that makes freeway true.
// Strategy 3: if it exists, the leftmost free way of the viewport's right half is chosen. This strategy is the symmetric counterpart of strategy 2. It is used when there is no central free way and, in the row closest to the viewport's bottom that contains at least one non-floor pixel, there is a non-floor pixel closer to the right side than to the left side of the searching rectangle. If the precondition is true but no free way is found in the viewport's right half, the strategy fails.
If (∃ i, j, k ≥ 0 : 0 < j < i ≤ Bbbr ∧ 0 ≤ k < (Bwaupd + Bwalpd) ∧ (∀ w, s ≥ 0 : (79 – Bbbr) ≤ w < (79 + Bbbr) ∧ 0 ≤ s < k ⇒ vps[w, s] = 'floor') ∧ (∀ t, u ≥ 0 : ((79 – Bbbr) ≤ t < (79 – Bbbr + i) ⇒ vps[t, k] = 'floor') ∧ ((79 + Bbbr – j) ≤ u ≤ (79 + Bbbr) ⇒ vps[u, k] = 'floor')) ∧ vps[79 – Bbbr + i, k] ≠ 'floor') then choose the minimum x0, with x0 > 79, that makes freeway true.
Figure 13. Strategy 2 in action. In a) the precondition for employing this strategy is fulfilled: a central free way does not exist, and the row closest to the viewport's bottom that contains a non-floor pixel inside the searching rectangle contains a non-floor pixel closer to the left side than to the right one, signaled with a double circle. The dotted line divides the searching rectangle into halves. In b) the free way that the strategy will finally choose, as well as the new destination point, is shown.
Figure 14. Strategy 2 failing. The precondition for employing the strategy is fulfilled; however, no free way is found when the strategy looks for one to the left of the point signaled with the double circle.
// Strategy 4: fail is set to true when it is believed that it is not possible to go forward, due to a thin corridor or a blocking wall. This happens when there is no central free way and, in the row closest to the viewport's bottom that contains at least one non-floor pixel, the two non-floor pixels closest to the left and right sides of the searching rectangle are at the same distance from those sides. A special case occurs when only one such pixel is found. See the examples presented in figure 15.
If (… = 'floor') ∧ vps[79 – Bbbr + i, k] ≠ 'floor') then fail = true
Figure 15. Strategy 4 examples. In all cases, even though a free way may exist, the search fails if the central rectangle fulfills the strategy's preconditions. In a) two non-floor pixels are found at the same distance from the left and right sides of the rectangle. In b) the same case is produced when a thin corridor is in front of the character. In c) only one non-floor pixel is found at the rectangle's center. In d) there is a wall directly in front.
// If a way was satisfactory, the destination point's (x, y) coordinates in the static viewport are set.
If fail = false then take the maximum yG that makes the freeway predicate true with the chosen x0; x = x0 + Bbbr + 1; y = yG + Bwalpd
// If no way was satisfactory, turn strategies are attempted. The turn heuristics are explained in the following sections. Both algorithms (turn left and turn right) receive the coordinate variables x and y as input by reference, in order to set the corresponding value if a