

A Racing Simulation in OpenGL

Joseph Allen

Hiram College

P.O. Box 67

Hiram, OH. 44234

CPSC 401.00

An Independent Research Component in Graphics

Dr. Oberta A. Slotterbeck, Professor

Presented on

November 18, 2002


This project models a car traveling around a closed track. Collision detection and an arcade-style physics engine are implemented to provide a “realistic feel.” The car can be controlled by the user and reacts based on the speed at which it is traveling. Gravity is implemented throughout the model.

This project uses many advanced OpenGL techniques such as lighting, texture mapping, environment mapping, light mapping, multi-texturing, bitmap fonts, blending and use of display lists. In addition, advanced mathematics used for collision detection and animation of the car presented interesting challenges.

Many aspects commonly found in other racing and driving simulations are included in this simulation. Aspects such as view changes, a floating camera that trails the car and a multi-level track are all included in the prototype.


Racing Simulation, OpenGL, Graphics, Collision Detection, 3-Dimensional, Track, Car, Model, Velocity, Gravity, Acceleration, Artificial Horizon.


Back in 1991, when the MacWorld Exposition still took place in Boston, summers were filled with the excitement of knowing that some of the latest graphics software and games for the Apple Macintosh would be available for hands-on demonstration.

Some of the most notable memories include the introduction of MacRenderman 3D, Canvas, Ray Dream Designer, and Marathon. These applications pushed the boundaries of what had been previously available at the time and made the MacWorld Expo that much more exciting.

Sadly, the MacWorld Expo is no longer in Boston and no longer offers the excitement and appeal that it used to offer.

One memory from 1991 inspired this project. We remember, as a middle-school-aged computer nut, visiting the hundreds of small vendor booths at the MacWorld Expo. One vendor in particular, Spectrum Holobyte (famous for the original release of the Falcon flight simulator), was showing an interactive demo of its first color game. This game was entitled Vette.

Although this game was not available and could only be pre-ordered at the show, the demo was enough to show that this was a racing game that not only pushed the boundaries, but was like nothing ever seen before.

Vette offered a 3-dimensional environment through the use of polygons. Although this technology had been displayed before (games such as Atari’s Hard Drivin’ had been available in arcades for several years), this was one of the first attempts to do so on a Macintosh.

In addition, Vette digitized (although very poorly by today’s standards) the entire city of San Francisco. Unlike other racing games at the time, which either offered a closed track or a straight road where the player raced against the track, the player could drive anywhere at any time in the digitized model of San Francisco.

The goal of Vette was to race a Corvette against another Corvette through the city of San Francisco. The revolutionary idea came in the gameplay. There was no track or suggested route. The player simply had to get from one location in San Francisco, to another. This provided a game where the player not only had to master the abilities of controlling the vehicle, but also had to gain knowledge of the geography of San Francisco.

Vette also provided multiple views of the car, multiplayer over a modem, and four different races (different start and end locations).

Spectrum Holobyte is no longer in business (it was bought out over seven years ago), and Vette has had no additional releases since its debut.

However, inspiration from Vette can be seen in current games such as Sega’s Crazy Taxi and other drive-anywhere type of games such as Rockstar’s Grand Theft Auto III.

The goal of this project was to create an “arcade” 3-dimensional racing game engine using OpenGL. The project would use advanced OpenGL techniques such as lighting, texture mapping and reflection. In addition, a collision detection engine would be put in place.

The final goal of this project is to create a prototype of a car racing around a closed track so that, in the future, the engine can be expanded to support a go-anywhere style of racing with multiplayer support, multiple tracks (i.e., cities) and additional depth that was not available or possible at the time Vette was created.

Background Information:

For the purposes of this paper, it is assumed that the reader has a basic understanding of computer graphics, the OpenGL API, and the C++ programming language. This means we will assume the reader has knowledge of the basic primitives allowed within OpenGL, vector calculations, normals, transformations (and an understanding of matrix multiplication), as well as the basic camera model in a z-buffer graphics system.

In addition, we will assume the reader has a basic understanding of how a ray tracing system works and of the concept of a ray.

Lastly, we will not discuss any details about how to model an object using 3-dimensional modeling software. Although 3-dimensional modeling software was used in the creation of the project, this paper should not be considered a reference on how to use such software. We will merely mention when and where the software was used, and discuss the implementation of the models within the project.

For those interested in 3-dimensional modeling, additional information can be obtained by researching 3D Studio MAX or visiting http://www.wings3d.com/ for an open source 3-dimensional modeler.

This paper will explain how the racing simulation was created and modeled, the implementation of physics and animation, and the OpenGL techniques used during the construction of the simulation.

Related Work:

Many other driving and racing simulators were examined in preparation for this project. Some examples include: Vette by Spectrum Holobyte, Crazy Taxi by Sega, Grand Theft Auto III by Rockstar Games, Ridge Racer V by Namco, and Viper Racing by Monster Games and Sierra.

In addition, resources include: OpenGL Game Programming by Kevin Hawkins, Dave Astle and Andre LaMothe; Interactive Computer Graphics by Edward Angel; and Physics for Game Development, by Daniel M. Bourg.

Jeff Molofee’s OpenGL Tutorials, available at: http://nehe.gamedev.net/, provided a valuable reference for common OpenGL techniques and code samples.

Description of the Project:

To begin the project, we started by evaluating our goals and setting a framework for what we would like to accomplish. Taking the original goal described above, we focused on what the final package should look like at the time of presentation. This final prototype is defined below:

The goal for this project is to create an “arcade” realistic racing/driving simulation which accurately models a car driving around a track. The car should be realistic looking and have full collision detection throughout the entire experience.

The track should be multi-layered, meaning that it should cross over itself with the use of a bridge providing an interesting challenge for collision detection.

All graphics should make full use of lighting and texture mapping for a realistic effect.

If time permits, a two player mode should be included to display collision detection between cars and provide a “Racing” type of gameplay rather than simply a driving simulation. In addition, if time permits, the interior (dashboard view) will be modeled as well as full physics to model the suspension of the car. (This means the car should absorb shock over bumps.)

Modeling an “arcade” racer is much easier than implementing a realistic racer. A realistic racer, such as Grand Prix Legends by Papyrus/Sierra, models the physics engine as closely to reality as possible.

An example of this realism can be explained through experiences gained while working for Papyrus Design during the summer of 1998. In Grand Prix Legends, the way a car turns and handles in a spin is directly attributed to the speed of the car, the materials of the tire, the amount of air-pressure in the tires, the amount of down force on the car, the density of the air at the current time of the race, and about two dozen other variables associated with the current conditions of the race. When the car spins, a black skid mark isn’t simply drawn with smoke coming from the tires. First, the temperature of the tires is calculated by applying the speed of the car and comparing that to the material of the surface, the material of the tires and all the other mentioned environment variables used to determine the condition of the car. Then, if the temperature is hot enough, grey smoke is drawn. If the tires sustain this temperature for a certain amount of time, also based on the environment variables, then the black skid marks are drawn.

This type of racing simulation provides a very realistic racing experience. It can also be a very frustrating experience for those who are not race car drivers or racing enthusiasts. The player must pay attention to minor details about the race, the conditions and the engine in the car. Then, appropriate decisions must be made about tuning the car, suspension and tires. Basically, with experience, players learn how to race a car. These types of simulations also provide a model that is very difficult to control at higher speeds, where brake distance and brake timing become crucial. Hitting a wall by taking a turn at too high a speed can be a deadly mistake, resulting either in irreparable damage to the car or the end of the race for the player. This can be a very frustrating type of interactive experience for a novice or non-race enthusiast.

The opposite end of the spectrum includes the “arcade” racing experience. This type of model is found in most racing simulations for consoles and the arcade. These simulations can range from completely unrealistic to incorporating some realistic physics into the gameplay.

An example of an arcade racer is Ridge Racer V by Namco. Ridge Racer V implements physics and collision detection. As the car increases in speed, handling becomes more difficult. However, it is possible for an experienced player to take hairpin turns at speeds greater than 100 miles per hour through use of a power slide, or to complete a winding, tightly run track without once using the brake. In addition, hitting a wall simply slows the car and bounces it back into the race.

Other “arcade” racers such as Cruisin’ USA and CART Fury Championship Racing provide a less realistic model where the player can even use rocket boosts to accelerate past the other players. At the same time, there are other, more realistic simulations that would still be considered “arcade” racers. Sega’s SuperCar GT is a prime example of a more realistic “arcade” racer. This game provides a semi-realistic racing model where the brake must be applied to take tight turns. However, the cars are much easier to control than real cars at higher speeds, and hitting walls simply bounces the player back onto the track. (Hitting the walls at too high a speed will flip the car, but the car is simply reset to the center of the track with little or no damage and no consequence other than loss of the current speed.)

The latter type of “arcade” racer is what we set out to implement. Our racing simulation would offer simplistic controls with a feel that anyone who has ever driven an actual car should find comfortable.

This simplified our physics engine and allowed us to concentrate on the graphics and collision response.


Once we had mapped out the goals of the project, we continued by separating the project into phases. The first phase would be research. Since we had taken the graphics course two years ago, time needed to be spent brushing up on OpenGL and re-evaluating our old code. In addition, time needed to be spent learning new functions and techniques within OpenGL.

The second phase would be modeling the car. This would involve first modeling a tire and then the shell of the car. The two would then be combined so that the look of the car and proportions of the car matched our expectations.

The third phase involved modeling the track. This involved all objects within the track as well as the terrain.

The fourth phase included anything needed to complete the prototype, combining the car and the track models (fairly simple once the two were completed), implementing physics and collision detection and any other additional parts and tweaks.

1. Research

We started by purchasing the following books: OpenGL Game Programming by Kevin Hawkins, Dave Astle and Andre LaMothe, and Physics for Game Developers by Daniel M. Bourg.

After reading the first ten chapters of OpenGL Game Programming and the section on automobile physics in Physics for Game Developers, we continued our research by working through the first fifteen tutorials at Jeff Molofee’s OpenGL Tutorials, http://nehe.gamedev.net/.

Through these tutorials, we familiarized ourselves with the functions needed to integrate OpenGL with the Microsoft Windows API. This involved learning how to set up an OpenGL window, handle operating system messages, and interact with the user via the keyboard.

We also gained experience with many techniques for rendering effects in OpenGL; examples of blending and texture mapping were explored and modified to see if we could find applications that would fit our model well.

In addition, we explored modifications to the various tutorials. For example, over the course of several days, we modified the waving flag example shown in chapter 8 of “OpenGL Game Programming” so that instead of using a basic sine wave to implement the wave, the intensity of the key press (how long the key was held) would be measured and a wave would be applied in a circular pattern from the center of the flag. This produced a liquid or water type of effect on the banner. Although this effect was not used in the racing simulation, the exercise provided familiarity and practice with OpenGL code.

Before programming, we chose to use the Windows API with OpenGL for this project based on the research we had completed. Despite our familiarity with GLUT, the GL Utility Toolkit, we found that the Windows API offered superior speed when running under Windows and a much better implementation of full-screen modes. GLUT offers several advantages over the Windows API: it is cross-platform, allowing code written with GLUT to run on almost any UNIX distribution and on the Mac OS. In addition, GLUT offers superior speed to the Windows API when running on equal hardware under a UNIX distribution, and it offers a full-screen mode through the GLUT game mode. However, that mode is complicated to work with and yields different results depending on the acceleration hardware and operating system used.

After weighing all considerations and choosing the Windows API, we set up a shell program using code from Jeff Molofee’s OpenGL Tutorials at http://nehe.gamedev.net/. This provided all the functions needed to render an OpenGL scene using the Windows API. It also offered a simple keyboard interface through an array of 256 Boolean variables. Each index of the array represented a key with two statuses: true if the key was currently being pressed, and false if it was not.

2. Designing the Car

a. Collecting Textures:

We started to formulate a mental image of how we wanted our model to look by collecting textures, which would play a large part in determining how the final model looked. To do so, we took our digital camera for a half-hour walk around the back of our apartment.

As we walked, we collected images that would not only be used for the car, but also the track and terrain. This would save time later during the development. We took photographs of the sky, grass, leaves, rock, asphalt, water surfaces, brush, trees and road lines that could all contain sections that could be used as possible textures.

While collecting textures, we were mindful of a technique reviewed in a game development magazine within the last year, which suggested that when photographing a possible texture, the object should not be in direct sunlight. Too much light and the texture offers too much contrast; this focuses the attention on the texture and away from the shape it is modeling. Too little contrast and the textured polygons appear flat shaded. We were attempting to find the balance where our textures would add enough depth to the objects without distracting the user.

We were mindful of contrast while collecting textures, but did not need perfection, since most over-contrast could be corrected using image editing software. A sample of the final textures used in the project can be seen in figure 2a.1.

Figure 2a.1 – A Sample of Collected Textures

that were included in the Racing Simulation.

Collecting textures for the car began by first determining what type of car we would model into the simulation. We decided to model our own car, a black 2002 Nissan Sentra SE-R Spec V. This seemed like a good choice because we had unlimited access to the vehicle. In addition, no other known racing simulation featured this vehicle, so it would separate our racing simulation from others. A photograph of the vehicle can be seen in figure 2a.2.

Figure 2a.2 – A Photograph of the Black 2002

Nissan Sentra SE-R Spec V used for the Racing Simulation

We took photographs of the car from about 10 different angles so that these images could be broken up and used as a skin in our racing simulation. We also collected photographs of the wheels and tire treads.

b. Converting the Photographs into Textures:

To convert the photographs into textures, we used image editing software. We first lowered the contrast to eliminate the possibility of the texture adding too much distraction. We then tried to find a small square section that could be scaled smaller and repeated. We tested the images by tiling them across the screen. After manipulating the textures and tiling them several times, we eliminated the visible seam.

The seam in a texture is visible if the edges of the image contrast with each other too much when the image is tiled. This is compensated for by reducing the contrast of the image and applying various blending and sharpening effects. Once the seam was reduced to an almost nonexistent, unnoticeable level, the texture was scaled to 64, 128 or 256 pixels on a side.

Our book, “OpenGL Game Programming” stated that texture sizes must be a power of 2 and must be no larger than 256 x 256. We learned later that OpenGL can handle larger sized textures, but it is not a good idea to do so.

To create the texture for the car, we borrowed an idea seen in Viper Racing. Instead of creating a texture for every polygon used in the car, we created one large 256x256 texture containing a flattened skin of the car. We then applied the correct texture coordinates from the 256x256 skin to the car. This requires fewer textures and a less complicated method of creating them. The final skin for the car can be seen in figure 2b.1.

Figure 2b.1 – Final Texture skin

used for the car texture.

After studying our chapter on lighting and texturing, we realized that to achieve the lighting effect we wanted on the car, all highlights from the photo needed to be removed. Subtle gray lines were used to signify bumps and doors of the car. This provided a cartoon like look to our car skin that would assist as a background for the reflection effect we wanted to add.

c. Modeling the Car:

To model the car, we used an application included on the CD that came with OpenGL Game Programming, called MilkShape 3D. MilkShape 3D is a low level 3D modeler written in OpenGL. A sample screenshot from MilkShape 3D can be seen in figure 2c.1.

Figure 2c.1 – A screenshot from MilkShape 3D

A figure is modeled in MilkShape 3D by defining vertices and triangles. The fact that the model consisted completely of triangles made the format very easy to convert for our needs and made MilkShape 3D very appealing for this project.

Once we had finished modeling the car and the tire, we next had to find a way to import the models into our simulation. The final models of the car and tire are shown in figures 2c.2 and 2c.3.

Figures 2c.2 and 2c.3 – Completed models of the car and the tire.

d. Importing the Models into the Racing Simulation:

MilkShape 3D offered many file formats that we could export our models into. After examining many of them, we eventually settled on the RAW format. The RAW format was quite appealing because it was simple. The groupings of objects that we defined in MilkShape 3D were listed, and the vertices of each object followed in groups of three. This meant that each line contained a single triangle, which made parsing the file very easy.

We examined several options on how to use the model within our simulation. We first thought about loading the model into memory at the start of the program. This would allow a very expandable engine where additional cars could easily be added to the simulation. However, at the time we were encountering this issue, we were unable to find a way to calculate the texture coordinates for the car as the model was loaded.

Over the course of this project, we did find a way to implement this by setting up a standard for the design of the model and for the texture. Unfortunately, MilkShape3D did not provide an obvious way to order the vertices and conform to this standard. By the time we figured out how to implement this through developing our own tool, we were well into the creation of the project and ran short on time.

Instead of loading the models during the initialization of the program, we wrote a conversion program that preprocessed the vertices contained in the RAW file and converted them into OpenGL code. In addition, we wrote several versions of our converter that added texture coordinates and normals to the code.

To calculate the normals, we used a cross product function found in chapter 6 of “OpenGL Game Programming” and applied this function to each line (triangle) that we converted.

A sample RAW file and converted code can be found in figure 2d.1 and 2d.2.


0.779318 0.571572 -0.730013 0.325584 0.936452 -0.000787 0.944631 0.601610 0.002399

0.944631 0.601610 0.002399 0.325584 0.936452 -0.000787 0.779318 0.571572 0.715361

Figure 2d.1 – Sample Contents of a RAW file

// Code for Windshield



glNormal3f(-0.142781f, 0.923643f, 0.355665f);

glVertex3f(3.877906f, 10.227757f, 0.488243f);

glVertex3f(-2.587651f, 9.419511f, 1.256520f);

glVertex3f(0.145015f, 9.419511f, 1.818939f);

glNormal3f(-0.106590f, 0.984572f, 0.138770f);

glVertex3f(3.877906f, 10.227757f, 0.488243f);

glVertex3f(-3.587872f, 9.419511f, 0.488244f);

glVertex3f(-2.587651f, 9.419511f, 1.256520f);



Figure 2d.2 – Sample Output from Converter Program

(without texture coordinates)

The converter program asks for a file name from the user and reads the data from the file specified by the user. If the file does not exist, the program terminates. At the conclusion of the run, the output is stored in a text file entitled “codeOut.txt.”

Once the converter program had terminated, we took the OpenGL code stored within “codeOut.txt” and copied it into the racing simulation. We repeated these steps for each of the remaining models.

e. Setting up Texture Coordinates for the Models:

Two-dimensional texture coordinates in OpenGL range from 0 to 1 on the x and y axes. Specifying a value larger than 1 creates a tiled, repeated texture effect, while a value less than 1 selects a specific portion of the texture. Figure 2e.1 displays how this works.

Figure 2e.1 – A polygon with Texture

coordinates (Obtained from figure 8.4,

page 243 of OpenGL Game Programming)

In order to texture map an object such as the tire, only portions of the square texture map were used. To map the square image as a circle, the center of the circle is placed at the texture coordinate (0.5, 0.5). The remaining vertices are then calculated based on their position relative to the center coordinate. This provided a circular cutout of the square image and allowed us to map the wheel onto the tire.

This type of texture coordinate was applied by adding the calculation to the converter program. In most cases, we were able to write the code in a manner so that the texture coordinates would be configured during the conversion. However, this did not work in all cases, since there was no standard format for our models. In some cases, the texture coordinates had to be manually entered.

This was the case with the top shell of the car (the roof, back window, windshield, hood, trunk, front and back). After calculating the coordinates on paper, we input the values into the code and re-compiled to confirm our results. This unfortunately took more time than we had hoped and, as stated above, re-confirmed our need for a standard format for car models in the simulation.

f. Integrating the Car into the Simulation and Adding Reflection:

Once all texture coordinates were set up, we placed our code for the car inside the render function defined within our OpenGL/Windows API framework for the racing simulation. Once the wheels were positioned relative to the rest of the car, we defined several global variables and added the necessary code so that they could be modified using the arrow keys on the keyboard. This allowed us to spin and turn the tires on the car. We used a third global variable to rotate the car so that it could be seen from all angles. A screenshot of the rotating car can be seen in figure 2f.1.

Figure 2f.1 – The Texture Mapped Car

Once the car was integrated into the simulation and all issues with the texture maps were corrected, we proceeded to add reflection to the car. There are two methods available in OpenGL for adding reflections. The first uses the stencil buffer. This method essentially involves rendering the scene twice, once above and once below the reflecting surface, then using blending to achieve a reflective look. This is very convincing, but it is slow and works best with a flat surface. The second technique is environment mapping. In this technique, a sphere-like texture is applied to the object and adjusted as the object moves. This provides a reflective appearance that fools the user into thinking the object is reflecting its surroundings.

We chose environment mapping for our purposes and used a gray scale cloud texture that was obtained from photos we took of the sky. The car with the environment map applied can be seen in figure 2f.2.

Figure 2f.2 – The environment mapped car.

Once the environment map was properly applied, we used a transparent blending technique to multi-texture the environment map over the skin of the car. This was done by enabling blending and using the blend function GL_SRC_ALPHA for the source and GL_ONE_MINUS_SRC_ALPHA for the destination. This used the alpha value we had set for drawing the car with the environment texture, 0.23f, and applied that texture transparently over the original texture. Basically, we draw the car twice: once with the car skin texture, and once with the environment texture blended over the original model.

One note: we had originally wanted the clouds to reflect over the roof of the car as the car moved forward. Unfortunately, the shiny effect of the environment map is only visible as the car turns from side to side, not as it moves forward. This is because the car is always sitting in front of the camera and not actually moving. (Despite the use of gluLookAt, the camera in OpenGL is effectively stationary; the world is transformed around it.)

3. Designing the Track

a. Generating the Terrain:

Designing the track involved several steps. First, a terrain needed to be generated. As stated in the introduction, our track would be multi-level, requiring a bridge. We had also envisioned water in some portions of the track. This required some sort of mountainous terrain.

We examined several examples found on the Internet regarding terrain generation. We examined fractals and found that they did not suit our purposes, because the resulting terrain was far too difficult to control.

We eventually settled on a method found in chapter 8 of OpenGL Game Programming called “Heightfield Terrain Mapping.” This method involves using a small grayscale bitmap file, usually 64x64 to 128x128 pixels in size. The height of the terrain is based on the value of each byte (0 to 255): zero (black) is the lowest point and 255 (white) the highest.

This method offered the control we were looking for since we could craft our own terrain using image editing software.

After sketching a rough picture of what we wanted our track to look like (shown in figure 3a.1), we created a 64x64 grayscale bitmap resembling the terrain drawn. This was done by drawing white, grey and black areas in a 64x64 pixel window and applying blending and blur effects in our image editing software. An enlarged picture of the 64x64 grayscale image can be seen in figure 3a.2.

Figure 3a.1 – Initial sketch Figure 3a.2 – Bitmap image created

of the track and terrain to Heightmap the terrain

Once we had created our bitmap, we tested it by substituting it in place of the original bitmap used in the heightfield terrain example in chapter 8 of OpenGL Game Programming.

b. Editing the Terrain and Creating the Track:

To use this for creating the track, we needed to be able to carve a track into the terrain and add our bridge. We did this by exporting the terrain to a RAW file. This was done by rewriting the code for the terrain generation so that it output the vertices in the form of triangles to a text file. We were then able to import the text file into MilkShape 3D. An imported 3D rendering of the terrain is pictured in figure 3b.1.

Figure 3b.1 – Converted Terrain after it

had been imported into MilkShape 3D

Once the model was loaded into MilkShape 3D, we positioned it so that part of the model lay at negative y values. Since we were going to draw our water at a height of 0 on the y-axis, this gave us a way to control where the water would be drawn over the terrain. This control enabled creation of the river.

We then took the terrain and carved a track and a tunnel, as well as modeled a bridge. (Note: The bridge used in the Racing Simulation is modeled after the newly constructed Leonard P. Zakim Bunker Hill Bridge in Boston, MA.) In addition, the areas of the terrain were smoothed and carved, and a waterfall was added to feed the river. The completed terrain is shown in figure 3b.2.

Figure 3b.2 – The completed terrain with

bridge, waterfall, track and tunnel.

c. Importing the Completed Track into the

Racing Simulation and Adding Textures:

The track was converted into OpenGL code using the same converter program we had used with the car models. However, the texture information in the converter program was adjusted for the purposes of the model. For the terrain, the model was divided into many different pieces within MilkShape 3D, each representing a different texture. This made our lives easier, since the sections of the track would be divided during the conversion, making the areas in the code where we would need to change textures easy to find.

We adjusted the code of the converter so that the texture coordinates for each polygon were repeated based on the size of the polygon. Using the vertices of each polygon, the converter would calculate the height and width of the polygon and apply the texture such that 5.0 world units equals one texture coordinate. This created a consistent look for our textures when applying them to polygons that varied widely in size.

Unfortunately, as with the car, we found certain coordinates too difficult to calculate and needed to calculate those by hand. These were the polygons for the track itself (the actual surface of the track), the waterfall, the tunnel walls and the tunnel ceiling.

Once all textures had been completed, we placed the code for the track inside our render function. Next we needed to devise a method to move around the world so that we could examine the track.

This was done using the gluLookAt function. We kept global variables for the camera coordinates and the look at coordinates. In addition, we kept an angle variable so we could track which direction we were facing. We used concepts borrowed from Jeff Molofee’s OpenGL Tutorial # 10 by Lionel Brits. This tutorial showed a 3D room that one could walk around in.

We borrowed the concept used to turn the camera, (xpos += (float)sin( heading * piover180) * 0.05f; and zpos += (float)cos( heading * piover180) * 0.05f;) so that we could move in the direction we were facing. In addition, we set the “U” and “D” keys to move the camera and look at coordinates up and down the y-axis.
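The borrowed update can be wrapped as a small helper (the wrapper function and its name are ours; the step size and piover180 constant follow the quoted snippet):

```cpp
#include <cassert>
#include <cmath>

const float piover180 = 0.0174532925f;  // degrees-to-radians factor from the tutorial

// Move a position forward along the current heading (in degrees), as in the
// borrowed xpos/zpos updates quoted above.
void stepForward(float heading, float& xpos, float& zpos, float step = 0.05f) {
    xpos += std::sin(heading * piover180) * step;
    zpos += std::cos(heading * piover180) * step;
}
```

At a heading of 0 degrees this moves the camera along the z-axis only; at 90 degrees, along the x-axis only.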

This allowed us to move around the model and view the terrain to examine our textures. After examining our terrain, we found that some of our textures had very visible seams. We corrected this in our image editing software until we were satisfied with the results.

The next step was to add lighting to the model. Our model uses one positional light source acting as the Sun. We positioned this light, GL_LIGHT0, at the coordinate position (11, 100, 80). We then enabled lighting and set material settings for every texture in the model. The material settings were placed just before or after the texture declaration in the code. This proved to be a rather simple step and took less than 20 minutes to implement.

Moving on, we added the water to the model. The concept for the water was borrowed from the heightfield terrain example in chapter 8 of OpenGL Game Programming. The water is drawn at an initial height on the y-axis; for the purposes of the racing simulation, we chose zero. A large textured quad (drawn with GL_QUADS) is drawn along the x- and z-axes. In our model, the size of the quad was 1000 units by 1000 units with its center positioned at the origin.

We increased the draw distance for the viewport of the model to 350.0f units deep. This allowed the entire terrain to be drawn from any position on the track. Next, we applied the same blended environment map to the water as we had to the car. This provided a sheen to the water surface as the model was rotated and translated.

To animate the water, we borrowed another technique from the heightfield terrain example in chapter 8 of OpenGL Game Programming. We set a limit using a constant float that the water could move a distance of plus or minus 0.25 units. We also set constants to determine the speed at which the water moved up and down. When a limit was reached, the water direction would reverse. This simulated the water moving up and down as noticeable at the edge of a lake.
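A minimal sketch of this oscillation, with constant names and values of our own choosing:

```cpp
#include <cassert>
#include <cmath>

// Limits and speed for the bobbing water surface (values are ours).
const float WATER_RANGE = 0.25f;   // water may move +/- 0.25 units
const float WATER_SPEED = 0.01f;   // units moved per frame

// Advance the water height one frame, reversing direction at the limits.
void animateWater(float& height, float& direction) {
    height += direction * WATER_SPEED;
    if (height > WATER_RANGE || height < -WATER_RANGE)
        direction = -direction;    // reverse when a limit is reached
}
```

Called once per frame, this keeps the surface cycling between the two limits, which reads as a gentle rise and fall at the water's edge.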

Adding the waterfall involved using a transparent blending effect so that the user could see through the waterfall. We then used a rotation of 3 textures that were created from old photos we had of Niagara Falls. These textures were sent to the function to draw the waterfall in a rotational pattern which simulated the motion of the waterfall.

Lastly, the sky was drawn in over the horizon. To do this, we first pushed the modelview matrix by calling the glPushMatrix() function. This saved the modelview matrix so we could revisit it by calling glPopMatrix().

To place the sky directly behind the camera, we loaded the identity matrix. As we mentioned earlier in this paper, OpenGL does not allow movement of the camera. The gluLookAt function simply applies transformations to the modelview matrix so that it appears that the camera has been moved. By loading the identity matrix, we can ensure that the origin is directly at the center of the screen and that we are looking down the negative z-axis. This allows us to draw the sky so that it is always positioned behind the model and directly in front of the camera.

Next, we translated the height of the camera, translated back 300 units (to place the sky behind the model), and rotated the sky based on the angle between the camera coordinate and the look at coordinate. (We will discuss this more in the next section of the paper, since the technique needed to be modified slightly when we changed the camera motion to trail the car motion.) After all transformations were completed, the sky was drawn as a large textured quad (GL_QUADS) 600 by 600 units in size. We used a texture generated from a photograph we had taken of the sky on a nice sunny day. The modelview matrix was then popped so that we returned to our original matrix and any other objects would be drawn in relation to our gluLookAt transformation.

One difficulty with drawing the sky was our blending function with the water. Since the water was drawn translucently and the sky was drawn behind the water, the sky seemed to draw over the water when looking out away from the terrain. Because of this, we added the translation of the height of the camera and the rotation to compensate for the angle of the camera. This created an artificial horizon where the water met the sky. A screenshot of the completed terrain and track is shown in figure 3c.1.

Figure 3c.1 – The completed textured Track

4. Combination of the Car and Track; All Finishing Touches:

a. Integrating the Car into the Track and setting up the Camera Movement:

Integrating the car model with the track proved to be a simple operation of drawing the car in scale with the track. We had taken all of our models and split the code into separate files as we created them. We restructured the code so that functions existed to create display lists for each model. This allowed the models to be loaded into memory so that they could be drawn more rapidly, increasing the frame rate.

This was simple, although time consuming. The benefit of speed gained by using display lists was immediately noticeable upon our next build. We also separated our global variables into a file entitled “globals.h.”

Once we had our program restructured, we placed the car in front of the camera by loading the identity matrix and then drawing the car. Although this was a simple way of drawing the car, we realized quickly that our camera movement would have to be redesigned.

Up to this point, movement in the model was based on the camera position. The look position was then generated from the current position of the camera. Although the camera was positioned higher than the look at coordinate, this was only due to a scalar called camera_height that was added to the camera y-coordinate value when the gluLookAt function was called.

The problem with this was that turning was centered on the camera's position. This would be correct if we were modeling first-person shooter style movement: the person walking could turn left or right and move forward and back. However, since we were modeling a car, and modeling the car from an outside view, turns had to be centered on the car's position with the camera trailing that position.

This meant that our entire movement engine had to be re-written. We encountered this problem with our movement engine at the same time we were exploring collision detection. (Collision detection will be explained in the next section of this paper). So, we were already familiar with the CVector class included in chapter 19 of OpenGL Game Programming.

The CVector class allowed 3 coordinates to be represented as a single object in C++. Although this is a very simple class to write, the CVector class included all necessary mathematical functions needed to perform vector calculations. Due to this and time constraints, we included this class in our project.

After we redefined our camera and look coordinates as CVectors, we continued to re-write our movement engine. Rather than loading the identity matrix to position the car in front of the camera, we positioned the car at the look vector. This made more sense since the camera should always follow the car. In addition, this provided us with a way to find the car's current location on the track.

Moving the car forward became a little tricky. After sitting with a pen and paper for several hours, we devised a way to implement this. We used two additional global variables: a float variable to keep track of the angle the car would be turned, and a CVector placed two units in front of the car. The idea was that this point would mark the direction the car was facing and that movement could occur along the vector defined between the look coordinate and this coordinate 2 units in front of the car.

We called this invisible point front_of_car. To move along this vector, we subtract the look vector from the current value of the front_of_car vector, leaving front_of_car two units from the origin with the look vector at the origin. We then take the unit vector of this difference and multiply it by a scalar of the current speed. Adding the result to the current values of both the look vector and the front_of_car vector moves the car along the vector defined by the two at the current speed of the car. For this to work correctly, the front_of_car vector and the look vector must always have the same y-coordinate value. To accomplish this, the front_of_car vector is initialized 2 units away at the initial angle of the car. Since the scalar is applied to the difference of the two vectors, they always remain on the same plane in the y-axis.
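The movement step above can be sketched with a minimal stand-in for the CVector class (the Vec3 type and its operators are our illustration, not the book's exact interface):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the CVector class: just enough vector math
// to sketch the forward-movement step described above.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 unit() const { float l = length(); return {x / l, y / l, z / l}; }
};

// Move the car forward: advance both the look point and the point two units
// ahead of the car along the direction between them, scaled by the speed.
void moveCar(Vec3& look, Vec3& front_of_car, float speed) {
    Vec3 dir = (front_of_car - look).unit();   // direction the car is facing
    look = look + dir * speed;
    front_of_car = front_of_car + dir * speed;
}
```

Because the same offset is added to both points, their y-coordinates stay equal and the car's facing direction is unchanged by the move.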

The camera is then setup in a chase type of situation by taking the difference between the camera vector and the look vector, translating the camera vector so that the look vector is at the origin. We then take the unit vector of the camera vector and multiply this by a scalar of camera_distance, defined in the “globals.h” file. This is then translated back to its position in the model by adding this resulting vector to the look vector and setting it as the new value of the camera vector.
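A sketch of this chase calculation, again using a minimal stand-in for CVector (names are ours):

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for the CVector class used in the text.
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 unit() const { float l = length(); return {x / l, y / l, z / l}; }
};

// Re-position the camera at a fixed trailing distance from the look vector:
// translate so look is at the origin, take the unit vector toward the old
// camera position, scale it by camera_distance, and translate back.
Vec3 chaseCamera(const Vec3& camera, const Vec3& look, float camera_distance) {
    return look + (camera - look).unit() * camera_distance;
}
```

Called every frame, this pulls the camera onto a sphere of radius camera_distance around the look point, which is what lets it swing around to the front when the car reverses.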

This enables the camera to follow the look vector as it is moved around the track. Over the course of several movements, the vectors defined between the camera vector, look vector and front_of_car vector will align themselves and the car drives forward in a straight line. If the car drives in reverse, the camera will swing around to the front and chase the car from the front. This always gives the user a view of where they are headed.

An illustration defining the movement of the car and camera follow is shown in figure 4a.1.

Figure 4a.1 – These illustrations show the relation

of the camera, look and front_of_car vectors before

and after the car is moved along the vector defined

by the look vector and the front_of_car vector.

As the car moves, we are constantly applying gravity. Gravity is defined by a constant GRAVITY in the “globals.h” file. Each time we draw the car, we pull the look and front_of_car vector downward by a scalar of gravity. The thinking is that once collision detection is completed, the car will be held on the track.

Turning the car also proved to be a challenge and was also worked out with pen and paper. To turn the car, we keep track of a global variable called carTurn. This variable keeps track of the current orientation of the car in degrees. By defining a constant rate of turn, we multiplied this rate by the current speed of the car. This way the car is unable to turn while at a stop. We later scaled the turn rate so that at speeds less than 20 units, the car would have a larger rate of turn which would slowly taper as the car reached the speed of 20 units. This allowed the user to complete a 3-point turn, which was impossible with the first implementation. (Since this is an "arcade" style physics engine, we aim for something that responds as the user would expect.)

We turned the car by adding the angle that the car was to turn to the carTurn variable and converting that value to radians. We then took the front_of_car vector and translated it so that the look vector was at the origin. This was again done by subtracting look from front_of_car. We stored the original length of this vector (in this case 2 units), and converted the vector to a unit vector. We then calculated its new position as a unit vector using sine and cosine (sin x, cos z), and multiplied the result by a scalar of the original length of the vector. This positioned the front_of_car vector in accordance with the new angle with respect to the look vector. An illustration demonstrating how the car is turned can be found in figure 4a.2.
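This turning step can be sketched as follows (the function and the planar shortcut are our illustration; in the simulation carTurn is a global accumulated in degrees):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// Re-position front_of_car for the car's new yaw. carTurnDeg is the
// accumulated orientation in degrees; (sin x, cos z) gives the new unit
// direction, which is scaled back to the original length (2 units here).
Vec3 turnCar(const Vec3& look, const Vec3& front_of_car, float carTurnDeg) {
    float dx = front_of_car.x - look.x;        // translate look to the origin
    float dz = front_of_car.z - look.z;
    float len = std::sqrt(dx * dx + dz * dz);  // original planar length
    float rad = carTurnDeg * 3.14159265f / 180.0f;
    return {look.x + std::sin(rad) * len,
            front_of_car.y,                    // y stays on the same plane
            look.z + std::cos(rad) * len};
}
```

Turning by 90 degrees swings a point two units down the z-axis around to two units down the x-axis, with the look point as the pivot.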

Figure 4a.2 – These illustrations show the vector calculations

involved when turning the car.

Once we had implemented the car movement and turning, we set the "A" and "Z" keys to alter the camera_height global variable. With both vectors translated so that the look vector is at the origin, we calculate the angle between the camera vector and a copy of the camera vector with its y-coordinate value scaled by the camera height; this angle is used to rotate the sky after the translation of -300 on the z-axis has taken place. This moves the sky upward as the angle between the camera and the look vector increases, simulating the artificial horizon.

Special cases are included if the camera y-coordinate value drops below the look vector's y-coordinate value, to prevent the sky from flickering.

b. Collision Detection:

Now that we could move the car and the camera would follow, we needed a way to keep the car on the track. We separated collision detection into two sections: the track surface and the track walls. At this point, due to time constraints and complexity, we decided to limit the car to driving only on the track rather than the go-anywhere, off-road style interaction we had initially set as a goal.

To calculate a collision, we first needed to be able to define boundaries on the track. Since we had used our converter software to convert our models into OpenGL code, we had no way of determining where our polygons existed as the program was currently written.

In order to counter this, we created another model in MilkShape 3D that consisted of the track surface and a set of walls around the track. We output this to a RAW file and would use this file as a guide.

The next step was to design a structure that could hold this data. After researching collision detection in chapter 19 of OpenGL Game Programming, we learned that calculating the distance to a plane is necessary when calculating collision. Because of this, we included the CPlane class that was shown in chapter 19 of OpenGL Game Programming. This provided an object for storing a plane which contained the plane normal and the offset. In addition, any calculation needed to calculate vectors against a plane was included in the CPlane class.

We created a class that contained two linked structures, one for the walls and one for the surface of the track. We called this class “Track_Boundarys” and defined all functions for collision detection in this class.

Each node within the linked structure contained an array of 3 CVectors (the 3 vertices of each triangle) and a CPlane (defining a plane based on the vertices of the triangle). The collision data for the track would be loaded into this structure at the initialization of the program from the RAW file generated from MilkShape 3D.

To calculate collisions with the surface of the track, we checked to see if the current look vector had crossed the plane of a polygon in the surface linked structure by less than 7.5 units. If it had, we scaled the point to the surface by adding its distance to the surface to the y-coordinate value of the point. We then checked to see if this point lay within the polygon defining the plane in our linked structure.

This was done using the PointInPolygon function found in chapter 19 of OpenGL Game Programming. The function worked by calculating all of the angles between the point and each of the vertices. If the angles summed to 360 degrees, within a set tolerance defined in "track.h," the point was considered in the polygon and we had a collision. An illustration explaining the PointInPolygon function can be found in figure 4b.1. Otherwise, we would look at the next plane in the surface linked structure. If no collision had been detected (we would reach a NULL pointer in the linked structure), the look vector would be left as is; otherwise, the y-coordinate value of the look position would be moved so that the tires of the car rest on the surface of the track.
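A sketch of the angle-summation test for a triangle (the tolerance value and all names are ours; the book's version handles general polygons):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Angle-summation point-in-polygon test: the angles at the point between
// successive vertices sum to 360 degrees only when the point lies inside.
bool pointInTriangle(const Vec3& p, const Vec3 tri[3], float tolerance = 0.05f) {
    float angleSum = 0.0f;
    for (int i = 0; i < 3; ++i) {
        Vec3 a = sub(tri[i], p);
        Vec3 b = sub(tri[(i + 1) % 3], p);
        float denom = len(a) * len(b);
        if (denom < 1e-6f) return true;        // point coincides with a vertex
        float c = dot(a, b) / denom;
        if (c > 1.0f) c = 1.0f;                // clamp against rounding error
        if (c < -1.0f) c = -1.0f;
        angleSum += std::acos(c);
    }
    return std::fabs(angleSum - 2.0f * 3.14159265f) < tolerance;
}
```

For a point outside the triangle the three angles sum to well under 360 degrees, so the test fails cleanly.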

Figure 4b.1 – A point is considered inside a polygon if

all of the vertex angles sum to 360 degrees

If a collision with the surface is made, the pitch and roll of the car is calculated by finding the position of the collisions with all four tires. Constant CVectors are defined for the position of all 4 tires with the look vector positioned at the origin. We wrote a function called “Rotate_Y_axis” which was added to the CVector class. This allowed these CVectors to be rotated around the y-axis and positioned in accordance with the current yaw of the car.

Once the position of all four tires was found, the angles between the front and rear tires and the left and right tires were found. These values (clamped to maximum pitch and roll values when too large) were applied as rotations when the car was drawn. This made the car appear as though it sits on the track.

These four tire points were also used to calculate collisions with the walls. If a tire point crossed the surface of a plane defined in the wall linked structure while moving forward, we first determined whether it lay inside the polygon defined on that plane. If it did, we had a collision, and the reflection of the vector defined by the original point before the forward movement and the new point after it was calculated using functions in the CVector class. If the angle of impact was less than 45 degrees, the car was deflected at 75% of its original speed along the reflected vector. Otherwise, the car was sent at negative 25% of its original speed along its original vector. This sent the car backwards at 25% of its current speed, simulating a head-on collision.
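The collision response can be sketched as follows (the impact-angle calculation and all names are our interpretation of the behavior described, not the project's code):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(const Vec3& a) { return std::sqrt(dot(a, a)); }

// Respond to a wall hit. v is the movement vector for this frame, n the
// unit normal of the wall plane. Glancing hits (impact angle with the wall
// under 45 degrees) deflect the car at 75% speed along the reflection
// r = v - 2(v.n)n; steeper hits reverse the car at 25% speed.
Vec3 wallResponse(const Vec3& v, const Vec3& n, float& speed) {
    float vn = dot(v, n);
    float impactDeg = std::asin(std::fabs(vn) / len(v)) * 180.0f / 3.14159265f;
    if (impactDeg < 45.0f) {
        speed *= 0.75f;
        return {v.x - 2*vn*n.x, v.y - 2*vn*n.y, v.z - 2*vn*n.z};
    }
    speed *= -0.25f;   // head-on: send the car backwards
    return v;
}
```

A shallow scrape along a wall thus keeps most of the car's momentum, while driving straight into one bounces it back at a quarter of its speed.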

If no collision was found, we would continue by checking each tire position for a collision with a wall within a nested if statement. Once a collision is found, we are done checking and break from the if statement. We assume that only one tire can collide with a single wall at a time. If no collision is detected, the car is moved at its current velocity along the vector defined between the front_of_car vector and the look vector.

To test our collision detection, we drew the triangles defined in our linked structures using lines. When a collision took place, we drew the polygon where the collision was found as a solid blue triangle. We drew the point where the collision occurred in the polygon with a green sphere. A screen shot of this scaffolding is shown in figure 4b.2.

Figure 4b.2 – Scaffolding showing polygon and

point where collision occurred.

Several notes about collision detection: the function for calculating the dot product in the CVector class was written incorrectly and had to be corrected before collision detection would function properly. In addition, the RayIntersection function within the CPlane class was intended to return a point that lay on a ray defined by two vectors where that ray intersected a plane. However, the function always returned a point that lay on the plane, but not always on the ray defined by the two vectors. This resulted in false collisions with walls. To compensate, the tolerance for the PointInPolygon function was raised so that the actual point at a negative distance to the plane could be passed rather than the very precise intersection point on the ray.

Due to this, the wall collisions are not completely accurate and the car is able to slowly drive through the wall if the user simply collides with a wall and accelerates.

c. Finalizing the Racing Simulation

Once collision detection was complete, we removed our scaffolding and began to add the last few pieces to the Racing Simulation. We added a spoiler to the car model, since the car in the photo has a spoiler. This was done by modeling the spoiler in MilkShape 3D and adding the code to the display list for the car.

We then added brake lights for the car by modeling three collections of triangles sitting slightly off of the brake lights drawn in the texture for the car. When the global Boolean variable for the brake lights is true, the brake lights are blended to a translucent red and anti-aliased by offsetting the polygons slightly and re-drawing in a different red shade. This gives the brake lights a soft outer glow.

A help screen and welcome screen were added, and the help screen is toggled using the "H" key on the keyboard. This was done using the bitmap font example found in chapter 11 of OpenGL Game Programming. We first blended a semi-translucent gray quad over the current screen while pausing the movement of the models. We then drew our text over this quad. A similar overlay was added for when the user presses the escape key. Instead of simply exiting, the program asks for confirmation in the form of "Y" for yes and "N" for no. This multiple function for the escape key was implemented through the use of global Boolean variables and a series of if statements.

We also added several views of the car when the user presses the numbers 1 through 4 on the keyboard. This was implemented by simply adjusting the camera_height and camera_distance variables. In the case of view 1, which is the in-car view, we use a global Boolean variable and an if statement so that the model of the car is not drawn. We also lock the "A" and "Z" keys so that the camera height cannot be adjusted while in this view. Unfortunately, due to time constraints, we did not fix the camera swivel problem when driving in reverse in this view.

Other added features included multiple screen resolutions toggled via F1 through F3 and the ability to toggle different texture filters via F7 through F9.

We decided to leave in our failed attempt at frame skipping. During the construction of this simulation, we included a global variable called "machine_speed." This variable is applied as a scalar to all movement in the simulation. If machine_speed is set to 1, the current machine is the same speed as the machine we developed the simulation on. A value less than one slows the simulation and a value greater than 1 speeds the simulation.

Using an animated loading screen of environment- and multi-textured rotating cubes, this screen is displayed for five seconds while the number of frames produced is counted. (Each time the loop is entered constitutes one frame.) After five seconds, the frame count is divided into the number of frames produced by the development machine (2332), which produces the machine_speed scalar for the hardware the simulation is currently running on. However, some graphics hardware can display these cubes out of proportion to the actual simulation. This resulted in scalars that were too high or too low for the machine the simulation was running on.
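The scalar calculation reduces to a one-line division (the function name is ours; 2332 is the development machine's frame count given above):

```cpp
#include <cassert>
#include <cmath>

// Frames our development machine produced during the five-second benchmark.
const float DEV_FRAMES = 2332.0f;

// machine_speed scales all per-frame movement: 1.0 matches the development
// machine; a faster machine (more frames counted) gets a scalar below 1.0,
// a slower one a scalar above 1.0, so real-time speed stays comparable.
float machineSpeed(float framesCounted) {
    return DEV_FRAMES / framesCounted;
}
```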

Due to this inconsistency, the feature is reduced to a toggle via the “T” key on the keyboard. In addition, the machine_speed scalar can be manually adjusted using the plus and minus keys on the keyboard.

The last part added to the simulation was a banner airplane. We modeled this airplane in MilkShape 3D and converted the model to code in the same fashion as all previous models included in the simulation. We incorporated the environment map to give the windshield of the airplane a reflective look.

Incorporating the waving flag example that we had worked with while researching this project and from reading chapter 8 of OpenGL Game Programming, we modified this code so that we could use an elongated Hiram College texture that we had created for the banner. We use a sine wave to generate the waving effect and a counter to slow the banner speed. This way the sine wave is only shifted after a certain number of frames.
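A sketch of the waving and the frame counter (the amplitude, the per-column spacing and the slowdown value are our choices, not the project's constants):

```cpp
#include <cassert>
#include <cmath>

// Height of the banner mesh at column i for a given wave phase. The
// amplitude and the 10-degrees-per-column spacing are illustrative.
float bannerWaveY(int i, int phase, float amplitude = 0.3f) {
    return amplitude * std::sin((i + phase) * 10.0f * 3.14159265f / 180.0f);
}

// Advance the wave only every `slowdown` frames, as with the counter
// described above, so the banner ripples at a controlled speed.
int advancePhase(int phase, int frame, int slowdown = 3) {
    return (frame % slowdown == 0) ? phase + 1 : phase;
}
```

Incrementing the phase by one shifts the whole wave along the banner by one column, which is what produces the rippling motion.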

When rendering the banner, we disabled the GL_CULL_FACE feature, so both sides of the banner were drawn. Once we had the airplane model incorporated into the model, we used what we had learned from implementing the camera follow to make the airplane fly.

The airplane's path is defined by 12 destination points, each determined by the current phase of the airplane. When the plane sequence is started, a Boolean value is marked true so that the plane cannot be reset while flying. The initial starting point is set and the phase is set to 0. The airplane follows a CVector defining the direction the airplane is heading, called plane_heading. This is the same concept used for the look vector and the front_of_car vector. The plane_heading CVector is set to a tolerance, defined in the "globals.h" file, which defines the distance between this point and the current position of the airplane. The plane_heading vector determines the airplane's direction by moving from point to point as the airplane follows from behind. By using the tolerance, the plane follows at a pre-defined distance. Raising this distance makes the plane appear to turn more smoothly due to the angles between the two vectors; lowering the tolerance makes the airplane follow the plane_heading vector more closely and turns appear very jerky.

Once the current airplane position is determined, the pitch and roll of the plane are determined by the angles formed between the heading vector and the current plane position. The plane sequence can be started either by pressing the "P" key on the keyboard or by driving the car under the waterfall.

A screenshot of the completed Racing Simulation and the Hiram College banner plane is shown in figure 4c.1.

Figure 4c.1 – Screenshot of the Completed

Racing Simulation and the Hiram College Banner Plane

Evaluation of Results

After reviewing the final results of this project, we can safely say that most of the goals initially set for the racing simulation were met. We have produced an "arcade" style racing simulation with a realistic feel, a physics engine and a collision detection engine. The look of the texture mapped terrain and the texture mapped car meets the expectations we had set.

The goals we did not achieve were the implementation of a collision detection engine that allows a drive-anywhere, off-road experience and animated modeling of the car's suspension. Due to time constraints, the rendering of the car cockpit as well as multi-player capability, so the car could actually be raced, were also omitted.

In addition, this project was intended to be a prototype using a racing simulation engine. Unfortunately due to the use of our converter software and no set standard for the car and track models, additional cars and tracks cannot be loaded and added to the simulation.

However, car movement and surface collision were implemented as hoped and would be able to be reused to implement the racing simulation engine with the addition of better wall collisions and a standard format for car and track models.

Overall, this project is considered a success. Despite the few shortcomings listed above, we have produced a very playable prototype that demonstrates many of the features available in OpenGL. The fact that some goals were not accomplished does not diminish the success of the prototype; it may be that too many goals were set for a project that required completion in such a minimal amount of time.

Future Research

At this point, we will not continue work on this Racing Simulation. That does not mean that we will not re-visit the design of the racing simulation engine described in the introduction.

Using what we have learned from this experience, we would like to re-write the structure for the racing simulation engine so that a set model format exists for track and car objects. Perhaps writing our own tools for the creation of the car models and track models would assist. We could still use our surface collision engine as well as our movement engine which worked as we had hoped.

Other improvements would be one or more computer opponents, multiplayer support, joystick support, sound, better collision detection and a cockpit view. Suspension tuning and suspension animation would be something we would like to pursue, as well as a gearing system so that acceleration behaves like a car with a transmission. Typical HUD displays such as speed, a map, a tachometer and other data would need to be added, as well as statistics such as lap times, to track the race and turn the simulation into more of a game.

We would also like to explore a real time damage model for when the car collides with other objects as well as particle systems for explosions, weather models and other effects.

Several effects that were omitted from this project due to time constraints include skid marks, smoke and different track material for a different ride.

Lastly, we would have liked to integrate a high resolution timer to maintain or constrain the model to a particular frame rate. Again, this feature was omitted due to time constraints of the project.

Sources and References

Angel, E. (2000). Interactive Computer Graphics: A Top-Down Approach with OpenGL. New York: Addison-Wesley.

Astle, D., Hawkins, K., & LaMothe, A. (Ed.) (2001). OpenGL Game Programming. Roseville: Prima Tech.

Bourg, D. M. (2002). Physics for Game Developers. Sebastopol: O'Reilly & Associates.

Molofee, J. (Accessed September 2002 through November 2002). Jeff Molofee's OpenGL Windows Tutorial, http://nehe.gamedev.net/
