Doing Your Own Lighting
by (10 July 2000)



Introduction


For the last year or so, I have been modifying/rewriting a renderer designed for making TV quality animations. Hopefully it will run at TV frame rates (25 fps) sometime soon. During this programming, I have found all sorts of interesting optimisations, and I may as well try to share them with you. BTW this is my first tutorial, so bear with me if it wobbles around a bit.

For the purposes of this tutorial, I will assume that you know what a vector is and how to use one, that you know the basics of lighting, and that you know what ambient, diffuse and specular lighting are.

Please note that I use OpenGL, but this tutorial should be easily applicable to any graphics API.


Lighting Is SLOW


Lighting is probably one of the slowest parts of a rendering pipeline. It is possible to significantly speed up a program by just giving each polygon a flat colour, but this would be exceptionally boring. So what do we do? There are two possible answers that I can think of, and the purpose of this tutorial is to show you how to do one of them. The first answer is to hope that all your users have the new T&L cards that hardware-accelerate lighting calculations. If this is the case you are on your own (for this tutorial), because I haven't had a chance to use a T&L card to see just how fast they are. The other way of speeding up lighting is to do it yourself. Unfortunately, most hardware cards on the market today do all the lighting calculations in software, in the driver. You are also writing software. Therefore, there is no reason for your hand-built lighting routine to be slower than the software driver, and by taking advantage of project-specific optimisations it is possible to get significantly better performance. For example, my renderer, which is running on a reasonably powerful FireGL1 graphics card, is about 3 times faster when I use my hand-optimised lighting calculations to their fullest. So on with the show!


Terms & Conditions


First, here are the data structures and so on that I will assume that you have:

Vertex, Light, & Camera Structures

For the purposes of this tutorial, I will assume the existence of a Vector class with overloaded operators for doing all the basic vector operations, including dot product and so on.
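If you don't have one handy, here is a minimal sketch of the sort of Vector class (and the ComponentMult helper used later on) that I am assuming. The exact names and layout are just the ones I use in the snippets below, so adapt them to your own maths library:


#include <math.h>

class Vector {
public:
	float x, y, z;

	Vector(float x_ = 0.0f, float y_ = 0.0f, float z_ = 0.0f) : x(x_), y(y_), z(z_) {}

	Vector operator+(const Vector &v) const { return Vector(x + v.x, y + v.y, z + v.z); }
	Vector operator-(const Vector &v) const { return Vector(x - v.x, y - v.y, z - v.z); }
	Vector operator*(float s) const         { return Vector(x * s, y * s, z * s); }
	Vector &operator+=(const Vector &v)     { x += v.x; y += v.y; z += v.z; return *this; }

	// Scale to unit length; returns *this so it can be used inside expressions
	Vector &Normalise() {
		float len = (float)sqrt(x * x + y * y + z * z);
		if (len > 0.0f) { x /= len; y /= len; z /= len; }
		return *this;
	}

	// Clamp each component to [lo, hi] (used on the final colour)
	void Clamp(float lo, float hi) {
		if (x < lo) x = lo; if (x > hi) x = hi;
		if (y < lo) y = lo; if (y > hi) y = hi;
		if (z < lo) z = lo; if (z > hi) z = hi;
	}
};

// Dot product of two vectors
inline float Dot(const Vector &a, const Vector &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Component-by-component multiply, for combining two colours stored as Vectors
inline Vector ComponentMult(const Vector &a, const Vector &b) { return Vector(a.x * b.x, a.y * b.y, a.z * b.z); }
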

We will be using a few data structures. Here are the important ones...


typedef struct {
	Vector ambient, diffuse, specular, emissive;
	float shininess;
} Colour;

typedef struct { Vector position, normal; Colour colour; Vector finalColour; } Vertex;

typedef struct { Vector position; Colour colour; } Light;

typedef struct { Vector from, to, up; } Camera;


So we have:
  • A Colour.
  • A Vertex, which has a position in space, a normal, a colour, and a finalColour, which will be the colour of the vertex once we have lit it ourselves.
  • A Light, which has a colour and a position in space.
  • And a Camera, which has a position in space (from), a point that it is facing towards (to) and a unit vector that defines the up direction of the camera (up).


    Lighting Algorithm


    Ok, so we have a few shapes with a few vertices, we are looking at them with a camera, and there are some lights in the scene. Now we want to make pretty lighting colours. First I will go through the lighting algorithm and explain how it works, and then I will present some pseudo-C functions for doing various forms of lighting.

    To calculate the lit colour of a particular vertex, we start with the emissive colour of the vertex, which is unaffected by the lights in the scene:

    
      vertex.finalColour = vertex.colour.emissive;
     


    The emissive colour is usually black, but it can be useful to get bright colours in a dimly lit scene. We then loop through each light. For each light we add three things to the final colour of the vertex. The ambient colour is unaffected by the position of the light, so we just multiply the components of the vertex's ambient colour by the components of the light's ambient colour:

    
      vertex.finalColour += ComponentMult(vertex.colour.ambient, light.colour.ambient);
     


    Next we need to add the diffuse and specular parts of the lighting. Both of these parts are affected by the position of the light relative to the object being lit, and the specular part is also dependent on the position of the camera.

    The diffuse colour is what I always think of as the actual colour of the object. In the real world, a matte surface reflects light equally in all directions, so its colour at any point depends only on the position of the light. In a lighting algorithm, this is simulated by taking the dot product of the vertex's normal and the unit vector from the vertex towards the light. So we have:

    
      Vector lightDir = light.position - vertex.position;
      lightDir.Normalise();
    	
      float diffuseFactor = max(0.0, Dot(vertex.normal, lightDir));
      vertex.finalColour += ComponentMult(vertex.colour.diffuse, light.colour.diffuse) * diffuseFactor;
     


    Unfortunately, the lightDir vector does need to be normalised, although the code that I am giving you is more to show you what to do than to be fully optimised code. We take the max with 0.0 because a negative dot product means the light is behind the surface, and so no light can reach it.

    The last component of the lighting equation is the specular part. It is easily the most complicated. Specular lighting is the highlight that you can see on shiny objects. For example, grab a shiny apple (if you have one; impoverished people will have to imagine). You should be able to see a whitish highlight on the surface. Now try moving your head. You will notice that the highlight on the apple moves as you move. This is because specular lighting relies on where the viewer is, as well as the positions of the light and the object. There are several ways to calculate the specular factor, but the one that I use takes the vector halfway between the direction from the vertex to the light and the direction from the vertex to the camera. This is then dotted with the vertex's normal, and the result is raised to the power of the shininess parameter. The shininess parameter controls how shiny an object is (obviously). A small shininess will give a large, soft highlight, whereas a high shininess will give a small, sharply focused highlight. The code goes:

    
      Vector halfway = lightDir + (camera.from - vertex.position).Normalise();
      halfway.Normalise();	//Yes this really is halfway between the two directions
      float temp = max(0.0, Dot(vertex.normal, halfway));
      float specularFactor = pow(temp, vertex.colour.shininess);
      vertex.finalColour += ComponentMult(vertex.colour.specular, light.colour.specular) * specularFactor;
     


    However, the pow() command is extremely slow. A much faster, if slightly less accurate version is:

    
      float specularFactor = temp / (vertex.colour.shininess - temp*vertex.colour.shininess + temp);
     


    Once these have been added up for each light, you clamp the final colour's components to between 0.0 and 1.0, and then when you draw the vertex, you pass the finalColour to OpenGL.
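
    For example, with OpenGL's own lighting switched off, drawing a triangle might look roughly like this (a sketch using immediate mode and the Vector layout assumed earlier, i.e. three contiguous floats; v0, v1 and v2 are just the triangle's three vertices):

      glDisable(GL_LIGHTING);	// we have already done the lighting ourselves

      glBegin(GL_TRIANGLES);
      	glColor3fv(&v0.finalColour.x);	glVertex3fv(&v0.position.x);
      	glColor3fv(&v1.finalColour.x);	glVertex3fv(&v1.position.x);
      	glColor3fv(&v2.finalColour.x);	glVertex3fv(&v2.position.x);
      glEnd();

    Vertex arrays with a colour array (glColorPointer) do the same job with much less call overhead, but the idea is the same: the colour you submit is the one you calculated yourself.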


    So What??


    So what, you ask? The hardware drivers already do all this, and it is probably optimised to the teeth. Unfortunately, a calculation as slow as lighting can only be optimised so far. A far more effective optimisation is to simply not do the calculation. For example, the emissive, ambient, and diffuse components of a colour are not affected by the position of the camera. This means that if your lights and geometry are stationary, you can avoid most of the lighting calculation. One concrete example of this would be buildings. They tend not to be reflective, and they don't move, so we can completely precalculate their colour and save huge amounts of time. If you want dynamic lighting, such as explosions, affecting their colour, then just precalculate the colour from the fixed lights and store it, and then the final colour is this precalculated colour plus light from any temporary lights.
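
    As a rough sketch of that idea (staticColour here is an extra field you would add to the Vertex structure; it is not part of the structures above):

      // Once, at load time (or whenever the fixed lights or the geometry change):
      for (vertex = each vertex in static geometry) {
      	vertex.staticColour = emissive + ambient and diffuse from every fixed light;
      }

      // Every frame:
      for (vertex = each vertex in static geometry) {
      	vertex.finalColour = vertex.staticColour;
      	for (light = each temporary light, e.g. an explosion) {
      		add that light's contribution to vertex.finalColour as usual;
      	}
      	vertex.finalColour.Clamp(0.0, 1.0);
      }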

    There is also room for all sorts of other ways of avoiding doing the calculations. I will show you two that I have found, although there are almost certainly more. These two are Spotlight and Occlusion.


    Spotlight


    A spotlight is a light that only sends light in one direction, rather than sending light all over the place. This is simulated by taking the dot product of the vector from the light to the vertex with the direction that the spotlight is pointing, and comparing it to a cutoff value. First we need a new light structure:

    
      typedef struct {
    	Colour colour;
    	Vector position, direction;
    	float cutoff, exponent;
      } Spotlight;
     


    The cutoff value is how big an angle the spotlight shines in, in degrees. It can only be between 0.0 and 90.0. The exponent value affects how much light gets to the edge of the spotlight's area. A low exponent value will give light right out to the edge of the lit area, whereas a high value will give a brightly lit centre that quickly falls off into darkness.

    Once we have the new structure, we add a bit to the lighting code. We add this before the ambient light line (which means the lightDir calculation has to move up in front of it), so that if the vertex is outside of the spotlight's area of effect, we don't bother with the rest of the calculation:

    
    float spotFactor;
      if (light is a spotlight) {
    	float temp = -Dot(lightDir, light.direction);	// lightDir points towards the light, light.direction points away from it, hence the minus sign
    	if (temp >= cos(DegToRad(light.cutoff))) {
    		spotFactor = pow(temp, light.exponent);
    	} else {
    		spotFactor = 0.0;
    	}
      } else {
    	spotFactor = 1.0;
      }

    if (spotFactor > 0.0) { Do the rest of the calculation, but multiply each part by spotFactor before adding it to finalColour }
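
    One small tweak on top of this: cos(DegToRad(light.cutoff)) is the same for every vertex, so it is worth computing it once per spotlight rather than calling cos() inside the vertex loop. Something like this, where cosCutoff is a hypothetical extra field in the Spotlight structure:

      // When the spotlight is created, or whenever its cutoff changes:
      light.cosCutoff = cos(DegToRad(light.cutoff));

      // The per-vertex test then becomes:
      if (temp >= light.cosCutoff) { ... }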


    Occlusion


    I have to confess that I haven't implemented this one, but the theory is that if you have a fast visibility testing algorithm, you can ask 'is this vertex visible from that light', and probably miss out on a lot of redundant lighting calculations, as well as getting a kind of shadowing routine.
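
    In pseudo-code it would slot into the light loop something like this, where IsVisible() stands for whatever point-to-point visibility query your engine provides (a ray cast, BSP tree, octree and so on):

      for (light = each light in scene) {
      	// Skip the whole calculation if something solid sits between the light and the vertex
      	if (!scene.IsVisible(light.position, vertex.position))
      		continue;

      	// ...ambient, diffuse and specular as before...
      }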


    And More


    There are lots of other potential optimisations, depending on the specific project. One other optimisation that I tried is infinite directional lighting, which removes most of the normalisations from the algorithm. However, my wrists are starting to complain, so I shall leave this one for you to figure out.


    The Code, The Code, Oh The Hideous Code


    Hmmm, I wonder if that is actually a quote from somewhere.

    Anyway, here is my pseudo-C code for lighting vertices:

    
    for (vertex = each vertex in object) {
    	vertex.finalColour = vertex.colour.emissive;

    	for (light = each light in scene) {
    		if (light / vertex is not occluded) {
    			Vector lightDir = light.position - vertex.position;
    			lightDir.Normalise();

    			float spotFactor;
    			if (light is a spotlight) {
    				float temp = -Dot(lightDir, light.direction);
    				if (temp >= cos(DegToRad(light.cutoff))) {
    					spotFactor = pow(temp, light.exponent);
    				} else {
    					spotFactor = 0.0;
    				}
    			} else {
    				spotFactor = 1.0;
    			}

    			if (spotFactor > 0.0) {
    				vertex.finalColour += ComponentMult(vertex.colour.ambient, light.colour.ambient) * spotFactor;

    				float diffuseFactor = max(0.0, Dot(vertex.normal, lightDir));
    				vertex.finalColour += ComponentMult(vertex.colour.diffuse, light.colour.diffuse) * diffuseFactor * spotFactor;

    				Vector cameraDir = (camera.from - vertex.position).Normalise();
    				Vector halfway = lightDir + cameraDir;
    				halfway.Normalise();
    				float temp = max(0.0, Dot(vertex.normal, halfway));
    				float specularFactor = pow(temp, vertex.colour.shininess);
    				vertex.finalColour += ComponentMult(vertex.colour.specular, light.colour.specular) * specularFactor * spotFactor;
    			}
    		}
    	}

    	vertex.finalColour.Clamp(0.0, 1.0);
    }

     
