Building a 3D Portal Engine - Issue 08 - Polygon Filling (19 February 1999)
Introduction
This is the last article about the basics of 3D coding for a while. After this one, we'll dive straight into the essentials of portal rendering, starting with the most simplistic forms and advancing to the better ones, exploring along the way the opportunities that they bring. But before we get to that, there's one last issue that has to be addressed. You can do wire polygons. That's easy. But how are those incredibly cool perspective correct bilinear filtered bumpmapped tileable true-color polygons done in real-time? Well, it's actually not too hard. There are a few things to explain, so let's give this article some structure this time.

First: tracing outlines. No matter how your final polygon should look, this is how to start. All polygon fillers that I ever saw use this approach. To fill a polygon, you draw horizontal (or in some cases, vertical) lines, from left to right, top to bottom, until you have drawn the entire polygon. So, you have to start by finding out what the start points and end points of these lines are. That's what I call 'outline tracing' or 'edge tracing'.

It's done like this: First, define two arrays, with as many elements as there are lines on your screen. The first array is called 'x_start', the second 'x_end'. Next, find out which lines your polygon covers. This can be done by finding the minimum and maximum y coordinates over all the vertices of your polygon. Then, for this range, initialize the arrays with impossible values. For example, x_start could be set to '10000', x_end to '-10000'. Finally, process the edges. Interpolate from the start of each edge to the end, stepping one screen line per iteration. Compare the x value on each line the edge covers with the minimum and maximum values that you keep in the arrays, and replace those values if necessary. Here's some pseudo-code:
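Something like this (a sketch in C rather than real pseudo-code; I assume floating point coordinates, a screen of SCREEN_HEIGHT lines, and edges fed in with y1 < y2 - swap the vertices first if they aren't):

  #define SCREEN_HEIGHT 480             /* assumed screen height */

  float x_start[SCREEN_HEIGHT];         /* leftmost x per screen line  */
  float x_end[SCREEN_HEIGHT];           /* rightmost x per screen line */

  /* x_start[] is assumed to be initialized to 10000 and x_end[] to
     -10000 for the lines the polygon covers, as described above. */
  void trace_edge( float x1, float y1, float x2, float y2 )
  {
     if ((int)y1 == (int)y2) return;          /* horizontal edge: skip it */
     float x_delta = (x2 - x1) / (y2 - y1);   /* x step per screen line   */
     float x = x1;
     for ( int y = (int)y1; y < (int)y2; y++ )
     {
        if (x < x_start[y]) x_start[y] = x;   /* new minimum for this line? */
        if (x > x_end[y])   x_end[y]   = x;   /* new maximum for this line? */
        x += x_delta;
     }
  }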
At the end of this process, x_start and x_end contain exactly what you need: the contours of the polygon. Note that there is another way to trace the outlines of triangles; in that case it's more efficient to skip the arrays and trace the left and right side of the polygon simultaneously. I will not explain this technique, since I hardly ever use it. You could add it as a special case to your 3D engine, for polygons that are (still) triangles after clipping.

OK, the next important thing for polygon filling is INTERPOLATION. You have already seen some interpolation in the pseudo-code, where the x position of the edge is interpolated from the minimum y value to the maximum y value. Filling the lines from left to right with a single color requires no further interpolation of course, but as soon as you want to do even a little bit more, you'll need a big dose of interpolation.

Imagine that you assigned a texture to your polygon. Now each vertex has not only an 'x' and 'y' position, but also a 'U' and 'V' value. In this case, you need four extra arrays: u_start, u_end, v_start and v_end. During the edge tracing, you should now remember not only the x position per screen line, but also the U value at the start and end of each line, plus the V value. The U and V deltas can be calculated the same way as the x_delta in the above pseudo-code.

Once you have your arrays filled with x, U and V values, you want to draw the spans. And here's the interpolation again: just like a loop that draws a given line from x_start to x_end interpolates x, you should interpolate from u_start to u_end, and from v_start to v_end. So, you need a delta-U and delta-V PER LINE. They are calculated as follows:
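In C, that would look something like this (a sketch; the guard against zero-length spans is left out):

  float delta_u = (u_end[y] - u_start[y]) / (x_end[y] - x_start[y]);
  float delta_v = (v_end[y] - v_start[y]) / (x_end[y] - x_start[y]);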
So, when drawing pixel 10 of a span, you should draw the texel at the U/V location that is determined as follows:
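Again a sketch, with the texture assumed to be a plain array of TEXTURE_WIDTH texels per row:

  float u = u_start[y] + 10 * delta_u;
  float v = v_start[y] + 10 * delta_v;
  int color = texture[(int)v * TEXTURE_WIDTH + (int)u];   /* fetch the texel */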
Of course, just like in the pseudo-code for the edge tracer, you need to decide whether you want to use floating point or fixed point for your deltas, since integer values are clearly not good for storing deltas that are smaller than 1. Every polygon filler is based on these principles. With this knowledge in mind, let's explore some techniques that you'll see pretty often in the feature lists of 3D engines.

Gouraud Shading: This is the name for interpolating color values over a polygon. Each vertex is assigned an intensity or RGB color, which is then first interpolated over the edges, just like the texture in the above example, and then over the lines, again, just like the texture. You can either interpolate a single value ('intensity'), to get a single light color, or interpolate red, green and blue separately.

(Fake) Phong Shading: For this technique a texture, containing a picture of a light spot, is projected over the polygon. Each vertex of the polygon now not only has a U/V for its own texture, but also a U/V for the phong map. The U's and V's for the phong map are calculated using the normal vectors of the vertices. This is actually quite a trick, since vertices don't really have normal vectors: for this technique, each vertex is assigned a normal vector by averaging the normal vectors of the polygons that it connects. Since normal vectors are of the form (x, y, z), where x, y and z are values between -1 and 1, fake phong shading uses any two of these values as U and V in the phong map. This results in a fluent flow of 'light' over the polygon when the normals of the polygons in an object change, for example when the object rotates. The big advantage of this technique over gouraud shading is that you can have a light spot in the middle of a polygon, which is impossible when you interpolate color values over the polygon. Besides this, you can draw the light spot yourself, so it can be as sharp or as soft as you want, or you can even use a normal picture, for example, the environment of the object. This last example is also called 'environment mapping'.

Bump Mapping: This is a bit too complex to go into in much detail, but I'll try anyway: for bump mapping you need an extra texture, called the bumpmap. This texture represents the relief on the polygon. The goal of any bump mapping algorithm is to somehow get a relation between the light intensity of individual pixels and the relief on the polygon. You also want to take the direction of the light into account of course, so the polygon doesn't look the same from all directions. In my Focus engine, I implemented a really simple bump mapper. It's based on the fake phong shading I just explained. The bumpmap is first processed to get it in a special format: the first four bits of each pixel in the bumpmap represent the 'slope' of the pixel in the vertical direction, the second four bits the slope in the horizontal direction. When fetching pixels from the phong map, I add these values to the U and V that I would normally use to fetch a pixel from the phong map. There are many other algorithms to do things like this, but this simple technique already gives a convincing result.
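To make the phong map and bumpmap fetches concrete, here's a small sketch; PHONG_SIZE, BUMP_SIZE, the map layouts and the nibble order are my assumptions, not a definitive format:

  #define PHONG_SIZE 256   /* assumed size of the (square) phong map */
  #define BUMP_SIZE  256   /* assumed size of the (square) bumpmap   */

  unsigned char phongmap[PHONG_SIZE * PHONG_SIZE];  /* picture of a light spot */
  unsigned char bumpmap[BUMP_SIZE * BUMP_SIZE];     /* two packed 4-bit slopes */

  /* Map one vertex normal component from [-1, 1] to a phong map coordinate */
  int phong_coord( float n )
  {
     return (int)((n + 1.0f) * 0.5f * (PHONG_SIZE - 1));
  }

  /* Per-pixel bump mapping: offset the interpolated phong map U/V (pu, pv)
     by the slopes packed in the bumpmap, then fetch the light intensity.
     I assume the 'first four bits' are the high nibble of the byte. */
  int bump_intensity( int pu, int pv, int bu, int bv )
  {
     unsigned char slopes = bumpmap[bv * BUMP_SIZE + bu];
     int u = pu + (slopes >> 4);                      /* vertical slope   */
     int v = pv + (slopes & 15);                      /* horizontal slope */
     if (u < 0) u = 0; if (u > PHONG_SIZE - 1) u = PHONG_SIZE - 1;
     if (v < 0) v = 0; if (v > PHONG_SIZE - 1) v = PHONG_SIZE - 1;
     return phongmap[v * PHONG_SIZE + u];
  }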
Perspective Correct Texture Mapping: Since polygons are drawn on the screen with perspective, you can't really interpolate U and V values linearly, like I said before. A really simple example to visualize this: imagine a checkerboard that is close to you at one edge, but much further away at the opposite edge. It's clear that the squares that are close to you are larger than the squares that are further away. If you had interpolated linearly, you would have gotten a weird bend in the board. This is where perspective correct texture mapping kicks in.

Here's how to do it: U and V aren't linear in screen space, as we just saw. But since we use something like 1/Z for our perspective, U/Z and V/Z ARE linear in screen space. So, all you have to do is interpolate U/Z and V/Z over the edges of your polygon. The values that you store in the edge arrays can again be interpolated linearly over the lines that you draw. At the very last moment, you should convert back to U and V, by doing the opposite operation: (U/Z)*Z and (V/Z)*Z. This is of course an expensive operation, and you don't want to do it for every pixel that you draw on the screen. The solution is to do it once every 8 or 16 pixels, and interpolate linearly between those values. This interpolation introduces errors again of course, but usually that's hard to see, unless you watch closely or force 'worst cases' (like standing close to a wall in Quake). A sketch of such a subdividing inner loop is shown at the end of this article.

If I need to spend more text on anything discussed in this document, let me know! You should have enough material again to have a good time experimenting with the techniques.
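Here's that sketch of the subdividing inner loop (SCREEN_WIDTH, TEXTURE_WIDTH and the byte-per-pixel buffers are assumptions; the span starts with U/Z, V/Z and 1/Z plus their per-pixel deltas, which come straight from the interpolated edge arrays):

  #define SCREEN_WIDTH  640   /* assumed */
  #define SCREEN_HEIGHT 480   /* assumed */
  #define TEXTURE_WIDTH 256   /* assumed */

  unsigned char screen[SCREEN_WIDTH * SCREEN_HEIGHT];
  unsigned char texture[TEXTURE_WIDTH * TEXTURE_WIDTH];

  void draw_span( int x1, int x2, int y,
                  float uz, float vz, float iz,     /* U/Z, V/Z, 1/Z at x1 */
                  float duz, float dvz, float diz ) /* per-pixel deltas    */
  {
     float z  = 1.0f / iz;
     float u1 = uz * z, v1 = vz * z;        /* true U and V at the start */
     for ( int x = x1; x < x2; x += 8 )
     {
        /* step 8 pixels ahead in U/Z, V/Z and 1/Z (all linear in screen
           space), so the expensive divide happens once per 8 pixels */
        uz += 8 * duz; vz += 8 * dvz; iz += 8 * diz;
        z = 1.0f / iz;
        float u2 = uz * z, v2 = vz * z;     /* true U and V 8 pixels on */
        /* linear interpolation of U and V over this small stretch */
        float du = (u2 - u1) * 0.125f, dv = (v2 - v1) * 0.125f;
        float u = u1, v = v1;
        for ( int i = 0; i < 8 && x + i < x2; i++ )
        {
           screen[y * SCREEN_WIDTH + x + i] =
              texture[(int)v * TEXTURE_WIDTH + (int)u];
           u += du; v += dv;
        }
        u1 = u2; v1 = v2;
     }
  }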