Building a 3D Portal Engine - Issue 06 - Hidden Surface Removal (05 February 1999)
Introduction
This is already the sixth episode in this series of articles about 3D programming in general and Portal technology in particular. Since last week was mainly focused on beginners in the 3D scene, this week it's time for more advanced topics... time for a hybrid approach. Last week I promised to explain how to perform hidden surface removal in a 3D engine. Of course, this is a very broad subject: invisible surface removal happens at multiple stages in the engine design. My favorite approach is the 'early out' principle (because it's the best:), meaning that we try to get rid of as many polygons as possible, as early as possible, and with the smallest amount of work. For a simple object rendering engine this only means that the image quality gets better, because your wireframe cubes suddenly become solid; for huge worlds it means that there is, if you do it right, no limit on the size of the world.

Let's start with some stuff for the wireframe dudes. For hidden surface removal you need polygons, so you will have to advance from the vertex-and-lines stuff of last week to basic polygon stuff. That way, you can attach a plane equation to each polygon, which can in turn be used to detect which way the polygon is oriented. In the case of the cube, it's easy to see that at least three of its polygons are facing away from the camera; sometimes only a single polygon is visible. This is of course an easy way to ditch huge amounts of polygons. It does, however, require processing each and every polygon in the object, and that introduces an interesting optimization for more advanced coders: if the entire object is off-screen, none of its polygons needs to be drawn. Detecting the visibility of a single object could for example be done by calculating its bounding sphere (the smallest sphere that holds all the vertices that make up the object), and checking this sphere against the planes that make up the view frustum. Of course, you calculate this sphere only once, or once for every frame of an object animation if necessary.

So how do you determine the plane equation for a polygon? A plane equation defines an infinite plane in 3D space. It looks like Ax+By+Cz=D, for example: 0x+0y+1z=5. If you use x and y like you normally do (x runs left to right, y runs from the top of the screen to the bottom or vice versa), then z is the axis that goes into the screen, so it defines depth. So if the formula says 0x+0y+1z=5, we have just defined a plane parallel to the screen, at a distance of 5. This way, every plane can be constructed. To determine the plane equation for a polygon, take two vectors on the polygon (two of its edges, for example), determine the vector that is perpendicular to both (their cross product), and then fill in one vertex of the polygon to find 'D'. If you're stumped, here's some code (too lazy to explain it):
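(A minimal sketch: the VECTOR and POLYGON structs below are illustrative, not necessarily the structures a real engine would use.)

    #include <math.h>

    struct VECTOR { float x, y, z; };

    struct POLYGON
    {
        VECTOR* vertex[3];    // the first three vertices define the plane
        float A, B, C, D;     // plane equation: Ax + By + Cz = D
    };

    void CalcPlane( POLYGON* poly )
    {
        VECTOR& v0 = *poly->vertex[0];
        VECTOR& v1 = *poly->vertex[1];
        VECTOR& v2 = *poly->vertex[2];
        // Two vectors on the polygon: two of its edges
        VECTOR e1 = { v1.x - v0.x, v1.y - v0.y, v1.z - v0.z };
        VECTOR e2 = { v2.x - v0.x, v2.y - v0.y, v2.z - v0.z };
        // Their cross product is perpendicular to both edges:
        // that is the plane normal (A, B, C)
        poly->A = e1.y * e2.z - e1.z * e2.y;
        poly->B = e1.z * e2.x - e1.x * e2.z;
        poly->C = e1.x * e2.y - e1.y * e2.x;
        // Normalize the normal to unit length
        float len = (float)sqrt( poly->A * poly->A +
                                 poly->B * poly->B +
                                 poly->C * poly->C );
        poly->A /= len; poly->B /= len; poly->C /= len;
        // Fill in one vertex of the polygon to find D
        poly->D = poly->A * v0.x + poly->B * v0.y + poly->C * v0.z;
    }

Normalizing the normal is not strictly needed for the facing test, but it makes distances to the plane come out in world units, which is exactly what the bounding sphere check below needs.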
Back to the advanced coders. Once you have defined the four planes that make up the view frustum this way, you can very quickly check the bounding sphere of an object against it. Just calculate the distance of the centre of the sphere to each plane, and compare it to the radius of the sphere (with a unit-length normal, that signed distance is simply Ax+By+Cz-D, where (x,y,z) is the centre of the sphere). By the way, if you have thousands of objects, even this might not be enough. Remember, the ideal engine shouldn't do anything that gets slower as the size of the world approaches 'infinite'. So you have to link the objects that you check against the view frustum to the world somehow, since you will be doing advanced visibility checks for the world anyway.

Back to the lowest level, the 'facing' test. To check whether a single polygon faces the camera, you simply calculate the dot product of the view vector and the plane normal, like this:
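(Again a minimal sketch, assuming unit-length plane normals; the PLANE struct and function names are made up for this article, and VECTOR is the struct from the previous listing.)

    struct PLANE { float A, B, C, D; };   // Ax + By + Cz = D

    // Signed distance from a point to a plane. With frustum plane normals
    // pointing into the view volume, an object with bounding sphere centre
    // c and radius r is completely invisible as soon as
    // Distance( plane, c ) < -r for any of the four frustum planes.
    float Distance( const PLANE& plane, const VECTOR& p )
    {
        return plane.A * p.x + plane.B * p.y + plane.C * p.z - plane.D;
    }

    // The 'facing' test: the dot product of the view vector and the
    // plane normal. Which sign means 'facing the camera' depends on the
    // winding of your polygons; the text uses 'positive = visible'.
    float Dot( const PLANE& plane, const VECTOR& view )
    {
        return plane.A * view.x + plane.B * view.y + plane.C * view.z;
    }

    // e.g.: float dot = Dot( polyplane, viewvector );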
If 'dot' is positive, the plane is visible; otherwise, you're looking at its backside, so it's invisible (that is, if the polygon is not double-sided).

Here's another interesting point: if you test the view vector against the plane equations, you assume not only that the object is already nicely rotated in place, but also that the plane equations have been calculated for the rotated object. This is of course very inefficient if you do it every frame. So, instead of using the original view vector and the rotated object, you should use the rotated view vector and the original object. That way, you also avoid rotating and translating polygons that get thrown away immediately afterwards. Remember: early out. Rotating the view vector is tricky, though. If the object is rotated 45 degrees around the x axis, you should of course rotate the view vector by the opposite amount to get the same effect (grab pen and paper and check this). If you construct your matrix from rotations around multiple axes plus a translation, things get tricky. To solve this nicely, you need the inverse of the matrix. The inverse can be calculated very easily in the case of 3D transform matrices (ain't we lucky): just transpose the 3x3 rotation part of the matrix, and fill in the negated original translation vector. Here's some code:
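(A sketch, assuming a row-major MATRIX struct with the translation stored in the fourth column; it implements the recipe exactly as just described.)

    struct MATRIX { float cell[4][4]; };  // rotation in cell[0..2][0..2],
                                          // translation in cell[0..2][3]

    MATRIX Invert( const MATRIX& m )
    {
        MATRIX inv;
        // The inverse of a pure rotation is simply its transpose
        for ( int i = 0; i < 3; i++ )
            for ( int j = 0; j < 3; j++ ) inv.cell[i][j] = m.cell[j][i];
        // Fill in the negated original translation vector, as described
        // above (careful: as the text explains next, this is the step
        // that turns out to be wrong)
        for ( int i = 0; i < 3; i++ ) inv.cell[i][3] = -m.cell[i][3];
        // The bottom row of a 4x4 transform stays (0, 0, 0, 1)
        inv.cell[3][0] = inv.cell[3][1] = inv.cell[3][2] = 0;
        inv.cell[3][3] = 1;
        return inv;
    }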
For some reason, this doesn't work. Maybe a math guru can help me out here? I really suck at maths... :) What does seem to work in Focus is to just negate the translation matrix, and to omit filling the translation in in the transposed rotation matrix. As it turns out, there's a good reason: a matrix that rotates by R and then translates by t is inverted by first subtracting t and then rotating by the transpose of R. The translation part of the inverse matrix is therefore really -R^T*t rather than just -t; negating the translation and applying it before the transposed rotation, like Focus does, amounts to exactly that.

So now we have the first tools for improving the framerate of engines that draw slightly more than just a wireframe cube. Here's an overview:

- Backface culling: attach a plane equation to every polygon, and use the dot product test to discard the polygons that face away from the camera.
- Object culling: calculate a bounding sphere for each object once, and discard the whole object when the sphere is completely outside one of the four view frustum planes.
- Test early: instead of rotating every polygon and then testing, rotate the view vector by the inverse of the object's matrix and test against the original, untransformed object.

Some last words: there are ways to skip even more polygons. For example: two cubes are in the centre of the screen, one very close to the camera and one rather far away. The closest cube fully covers the second one. Theoretically, you could test for this and omit the second cube, or maybe skip some polygons that are completely obscured. The problem is, you shouldn't try too hard to get rid of polygons: calculations to prevent this kind of overdraw quickly get more expensive than the drawing itself. Looking at Unreal, I noticed that while it is supposed to be state of the art rendering, it actually shows very few polygons. Maybe they are just trying too hard. A decent accelerator can quickly draw tons of polygons, and while it's worthwhile to skip polygons that can easily be determined to be invisible, it's also easy to remove a polygon using an algorithm that takes more time than the actual drawing.

This introduces another problem: since software rendering is much slower than hardware rendering, the break-even point is quite different in the two cases. And since it's already hard enough to build one full-blown 3D engine, most coders don't bother writing two. Even if you decide to move fully to hardware accelerated 3D, you encounter the same problem: every card behaves differently and benefits from different approaches. That's one of the reasons why I like software rendering so much: it behaves the same, more or less regardless of the configuration. On my PII/400 with a Riva accelerator, Focus' software rendering easily beats the hardware rendering in image quality, although it's slower. But someday I'll probably switch to hardware rendering too. That's also why I implemented an s-buffer in Focus: it is supposed to make software rendering behave more like hardware rendering by making overlapping polygons very inexpensive. A Voodoo doesn't really care about overlapping polygons either; for it, determining the visible polygons is usually the more time consuming job. With the s-buffer, it all scales nicely again: while software rendering is still a lot slower than hardware rendering, it doesn't die when polygons overlap, and it gets closer to hardware rendering as the number of polygons increases.

Sorry for the ranting. :) More cool stuff next week.