Monday, June 28, 2010

#11, 3D Graphics, Part 1

This week I'm going to talk about 3D graphics on the Mac. Please refer to previous posts on graphics and programming for some background. What I did in the past few days was (1) test some sample 3D applications and (2) write some basic 3D programs. The short version of the result: Mac hardware and software don't seem to be particularly weaker than the PC, and in some cases they even do slightly better. In other cases, though, especially real-time rendering using hardware accelerators, they may not be as efficient, which could be due to less efficient drivers. The basic 3D API on the Mac is OpenGL, which is not as good as Direct3D on Windows but is pretty much portable, at least for basic stuff.

Before I get to some details, let's make sure we all know the basics of 3D graphics.

==========
Very Very Quick Intro to 3D Graphics
==========

2D computer graphics is all about images, i.e. two-dimensional arrays of points with different colour values (a.k.a. pixels, or picture elements). 3D computer graphics, on the other hand, is about points in three-dimensional space. We call these points vertices. Each vertex has three location values (XYZ) and a colour value. But things are more complicated than this, for a couple of reasons.

3D points form 3D objects when connected. The simplest type of 3D object is the polygon, i.e. a set of (usually 3 or 4) connected vertices that lie on the same plane (surface). Polygons are then connected to form a mesh, which is the base for complex 3D objects. A mesh can then have a material (properties like colour and reflection) and/or a texture (a 2D image covering the surface of the 3D object). Materials and textures define the colour of the vertices, and also the colour of points inside the polygons.

Now here's the second complication. 3D scenes have light sources. This means that the colour of a point in 3D is a function of its material/texture and also the light sources.

Finally, we have one more new concept: the viewpoint. A 2D image has no notion of viewpoint. Although physically you can look at it from different points and potentially see it differently, this has nothing to do with how your computer generates the image. 3D scenes, on the other hand, need to be transformed into a 2D image before they can be shown on screen (let's not talk about real 3D displays). In order to decide what your 3D world will look like, you need to decide where your eyes are.

The process of creating a 3D world is called modelling. Making the objects in this 3D space move is animation. Transforming your 3D world into a 2D image is called rendering, sometimes also called shading. The rendering process is controlled by a series of parameters and is usually done in two steps:

** Vertex Shading transforms all your vertices to points in 2D. This depends on the location/scale/orientation of your objects in the 3D world and on the location/orientation and properties of your camera (e.g. field of view).

** Pixel Shading is the second step; it decides what colour to use for each pixel, based on the material and texture of the objects and also the light sources.

Rendering used to be (and still can be) done in software. Graphics accelerators allowed it to be done in specialized hardware with fixed functions. They then evolved into programmable GPUs with their own programming languages, called shader languages. High Level Shader Language (HLSL) is the shader language supported by Direct3D; OpenGL Shading Language (GLSL) is the alternative for OpenGL. These languages have their own versions, and different graphics cards are compatible with different versions, or Shader Models.
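To give a flavour of what shader code looks like, here is a minimal GLSL pair in roughly the GLSL 1.20-era style that matches the fixed-function OpenGL used later in this post (in practice the two stages live in separate source files); it just transforms each vertex and paints every pixel red:

```glsl
// Vertex shader: runs once per vertex
void main()
{
    // Transform the vertex by the current modelview-projection matrix
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

// Fragment (pixel) shader: runs once per pixel
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // opaque red
}
```

HLSL code looks broadly similar; the two steps correspond directly to the Vertex Shading and Pixel Shading stages described above.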

When you watch a 3D animated film, each single frame has been rendered separately, in advance. When you play a 3D game, each single frame is being rendered in real time.

==========
Application Tests
==========

The first 3D application I tested was Maya 2011.

512x512 render using Mental Ray
Windows, Maya 32-bit, 3.48 sec
Windows, Maya 64-bit, 3.08 sec
OSX, Maya 64-bit, 3.02 sec

For comparison, I tried the same file on my desktop PC (Quad Core, 3GHz, 6GB RAM, Performance Index 5.9) and it took 2.07 sec. FYI, the Performance Index for my Windows 7 installed on Mac is 5.2.

1024x768 render of another scene without Mental Ray and using Maya Software
Windows, Maya 64-bit, 0.07 sec
Mac, Maya 64-bit, 0.10 sec

When I used Maya Hardware, both took less than 0.01 sec and I couldn't see any difference. I will try longer tests later for a better comparison when using Maya Hardware.

Next I tried a 3D game. I chose Half-Life 2, which is now available for Mac. Valve has released Steam for Mac, and some of their games, including The Orange Box, are now available for it. One good thing is that if you have the Windows version, you don't need to buy the Mac version and can download it for free. I tested the Windows and Mac versions at default settings (1280x800 for Windows and 1152x720 for OSX). I only tried HL2 on OSX for a short period. It seems to work fine, but even at the lower resolution a slight performance problem could be noticed: the rendering seemed a little slower, as you could see the screen being refreshed with a small delay. This was not a big issue at all, and hardly noticeable, but my guess is that in more complex scenes it could get worse. It needs more testing.

Overall, the Mac (or I'd better say OSX, since my Windows is running on Mac hardware too) doesn't seem to have a major issue with 3D. My guess is that OpenGL drivers for Mac are not as efficient as Direct3D drivers on Windows. This could be due to a slight inferiority of OpenGL itself, the drivers, the general difficulty of programming for Mac, or the popularity of the Windows platform, which makes programmers work harder and better for it.

The next step of my test was to compare OpenGL on OSX and Windows.

==========
3D Programming
==========

OpenGL is the basic 3D API for Mac. Even other APIs like CL are created on top of it, so to test 3D programming it seems that we should focus on OpenGL. For comparison, we can use Direct3D or OpenGL, and I started with OpenGL on both platforms. Windows and Visual Studio don't come with all the libraries needed for OpenGL programming, but they are easily available. This includes OpenGL itself and GLUT (utility functions for OpenGL). There is also GLAUX, which is only available for Windows, so I won't use it. On the other hand, OSX is completely OpenGL-based, and Xcode comes with the GL and GLUT frameworks. Here are the basic steps to write a simple OpenGL program on OSX:

1- Start a new project of type Command Line Tool with C. Unfortunately this is the only non-Cocoa C-based option. Don't worry, you can do graphics with it using OpenGL.
2- Right-click on the project name, select Add, and then Existing Frameworks. Select GLUT and OpenGL.
3- Add the following code:

#include <GLUT/glut.h>
#include <OpenGL/gl.h>
#include <OpenGL/glu.h>

void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT);

    glBegin(GL_POLYGON);
        glVertex2f(-0.5, -0.5);
        glVertex2f(-0.5, 0.5);
        glVertex2f(0.5, 0.5);
        glVertex2f(0.5, -0.5);
    glEnd();

    glFlush();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("simple");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}

Let me explain this code a little bit for those who are not familiar with OpenGL:

** The #include lines allow you to use OpenGL and GLUT. Remember that in C/C++ you need #include so the compiler can use libraries. The linker then links your program to the actual library code (here, the frameworks you added).

** The main() function is the entry point of your C program; it initializes OpenGL, creates a window, sets a Display Function, and calls the OpenGL Main Loop. The Display Function draws the graphics in your window: you provide a function and pass its name to glutDisplayFunc(). The Main Loop processes user commands; if you don't provide your own handlers, the defaults don't really do anything special.

** The display() function is what you pass to glutDisplayFunc(), and you basically have to have one (it can be called anything). In this simple example, our display function only creates a polygon by defining four vertices in 2D. You'll notice that OpenGL can be used for 2D graphics as well as 3D. This function will be called every time the window needs to be redrawn.

To run this on Windows, you create a Win32 empty console application, add the OpenGL libraries to the linker, and add a new file with these headers:

#include <windows.h>
#include <GL/gl.h>
#include <GL/glut.h>

You'll notice that you need windows.h, and also that the OpenGL header files/directories have different names. The rest of the code is the same.
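A common way to keep a single source file compiling on both platforms is to switch on the predefined __APPLE__ macro (a standard Clang/GCC macro; the Windows branch assumes the header locations shown above):

```c
#ifdef __APPLE__
  #include <GLUT/glut.h>      /* OSX: GLUT framework */
  #include <OpenGL/gl.h>
  #include <OpenGL/glu.h>
#else
  #include <windows.h>        /* Windows: must come before the GL headers */
  #include <GL/gl.h>
  #include <GL/glut.h>
#endif
```

With this at the top, the same file builds unchanged in Xcode and Visual Studio.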

The next step is to add some 3D code. I used a simple Camera class that allows moving the camera, I added a Main Loop that lets the user control the camera, and I modified the Display Function to create and draw some simple 3D objects. You can find the full code here:

http://www.csit.carleton.ca/~arya/b2cc/opengl/

My Display Function looks like this:

void Display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); //Don't forget the depth buffer!
    glLoadIdentity(); //Load a new modelview matrix -> we can apply new transformations
    Camera.Render();
    glLightfv(GL_LIGHT0, GL_POSITION, LightPos);
    glutSolidSphere(1.0, 20, 20);
    glTranslatef(2.0, 0.0, 0.0);
    glutSolidSphere(1.0, 20, 20);
    glPushMatrix();
    glTranslatef(0.0, 2.0, 0.0);
    glutSolidSphere(1.0, 20, 20);
    glPopMatrix();
    glTranslatef(0.0, 0.0, 2.0);
    glutSolidSphere(1.0, 20, 20);
    glTranslatef(-4.0, 0.0, 0.0);
    glRotatef(TorusRotated, 0.0, 1.0, 0.0);
    glutSolidTorus(0.3, 1.0, 8, 16);
    glFlush(); //Finish rendering
    glutSwapBuffers(); //Swap the buffers -> make the result of rendering visible
}

You can see that simple 3D objects are being created and then placed in the 3D world. Everything is static except a torus whose rotation depends on the variable TorusRotated. I used two methods for changing this variable. Look at main() in the code and locate these lines (we should use only one of them):

glutIdleFunc(Idle);
glutTimerFunc(400,Timer,0);

Both are similar to the call that defined our Display Function, but they define an Idle function (to be called when the application is doing nothing) and a Timer function (to be called at certain time intervals). Each of these methods simply changes the TorusRotated variable, so the next time the scene is rendered you see the torus with a different orientation. The timer method is preferred when you want to control how many frames per second you get.

For your quick reference here are the Idle and Timer functions:

void Idle(void)
{
    TorusRotated += 2.0;
    Display();
}

void Timer(int value)
{
    TorusRotated += 2.0;
    Display();
    glutTimerFunc(400, Timer, value); //need to call the timer again
}

It is not very efficient to do the actual drawing in the Idle and Timer functions, but optimization is not an issue here. The code will work with no change on Windows.

For this simple example I didn't notice any particular performance difference in rendering, but the Idle and Timer functions don't behave the same on OSX and Windows. More specifically, the Idle function on OSX is called a lot more frequently, and the Timer function intervals can go as short as 1 millisecond. On Windows you won't get anything less than 10 milliseconds. In Direct3D there are other ways to get higher timer resolutions, but apparently the OpenGL implementation on Windows doesn't use them and stops at 10 milliseconds, which is the default resolution of the Windows timer.


This week's tests don't seem to be very conclusive, but hopefully they are a good starting point for understanding how things work and for performing more advanced tests, both with existing applications and games and by developing 3D programs.

I'm off to Barcelona and then Vancouver in a few days, so expect some delays but no worries.

I'LL BE BACK!
